Since it started development in 2003, FCS has been at the center of the Army’s efforts to modernize into a lighter, more agile, and more capable combat force. The FCS concept involved replacing existing combat systems with a family of manned and unmanned vehicles and systems linked by an advanced information network. The Army anticipated that the FCS systems, along with the soldier and enabling complementary systems, would work together as a system of systems in which the whole provided greater capability than the sum of the individual parts. The Army expected to develop this equipment in 10 years, procure it over 13 years, and field it to 15 FCS-unique brigades—about one-third of the active force at that time. The Army also planned to spin out selected FCS technologies and systems to current Army forces throughout the system development and demonstration phase.

As we reported in 2009, the FCS program was immature and unable to meet the Department of Defense’s (DOD) own standards for technology and design from the start. Although adjustments were made, such as adding time and reducing requirements, vehicle weights and software code grew, key network systems were delayed, and technologies took longer to mature than anticipated (see fig. 1). By 2009, after an investment of 6 years and an estimated $18 billion, the viability of the FCS concept was still unknown. As such, we concluded that the maturity of the development effort was insufficient and that the program could not be developed and produced within existing resources.

In April 2009, the Secretary of Defense proposed a significant restructuring of the FCS program to lower risk and address more near-term combat needs. The Secretary noted significant concerns that the FCS program’s vehicle designs—in which greater information awareness was expected to compensate for less armor, resulting in lower weight and higher fuel efficiency—did not adequately reflect the lessons of counterinsurgency and close-quarters combat operations in Iraq and Afghanistan. As such, the Secretary recommended (1) accelerating the fielding of ready-to-go systems and capabilities to all combat brigades; (2) canceling the vehicle component of the FCS program, reevaluating the requirements, technology, and approach, and relaunching the Army’s vehicle modernization program; and (3) addressing fee structure and other concerns with current FCS contracting arrangements.

In June 2009, the Under Secretary of Defense for Acquisition, Technology and Logistics issued an acquisition decision memorandum that canceled the FCS acquisition program, terminated manned ground vehicle development efforts, and laid out plans for follow-on Army Brigade Combat Team Modernization efforts. DOD directed the Army to transition to an Army-wide modernization plan consisting of a number of integrated acquisition programs, including one to develop ground combat vehicles (GCV). The Army has since been defining its ground force modernization efforts in accordance with the Secretary’s decisions and the June 2009 acquisition decision memorandum. Although the details are not yet complete, the Army took several actions through the end of calendar year 2009. It stopped all development work on the FCS manned ground vehicles—including the non-line-of-sight cannon—in the summer of 2009 and recently terminated development of the Class IV unmanned aerial vehicle and the countermine and transport variants of the Multifunction Utility/Logistics and Equipment unmanned ground vehicle.
For the time being, the Army is continuing selected development work under the existing FCS development contract, primarily residual FCS system and network development. In October 2009, the Army negotiated a modification to the existing contract that clarified the development work needed for the brigade modernization efforts.

The Army is implementing DOD direction and redefining its overall modernization strategy as a result of the Secretary of Defense’s decisions to significantly restructure the FCS program. It is transitioning from the FCS long-term acquisition orientation to a shorter-term approach that biennially develops and fields new increments of capability within capability packages. It now has an approved acquisition program that will produce and field the initial increment of the FCS spinout equipment, which includes unmanned aerial and ground vehicles as well as unattended sensors and munitions, and preliminary plans for two other major defense acquisition programs to define and develop follow-on increments and develop a new GCV. The Army also plans to integrate network capabilities across its brigade structure and to develop and field upgrades to other existing ground force equipment.

The first program, Increment 1, is a continuation of previous FCS-related efforts to spin out emerging capabilities and technologies to current forces. Of the Army’s post-FCS modernization initiatives, Increment 1, which includes such FCS remnants as unmanned air and ground systems, unattended ground sensors, the non-line-of-sight launch system, and a network integration kit, is the furthest along in the acquisition development cycle (see fig. 2). The network integration kit includes, among other things, the integrated computer system, an initial version of the system-of-systems common operating environment, early models of the Joint Tactical Radio System, and a range extension relay. In December 2009, the Army requested and DOD approved, with a number of restrictions, the low-rate initial production of Increment 1 systems that are expected to be fielded in the fiscal year 2011-12 capability package. The Army will be continuing Increment 1 development over the next 2 years while low-rate initial production proceeds. The projected development and production cost to equip nine brigades with the Increment 1 network and systems, supported by an independent cost estimate, would be about $3.5 billion.

Collectively, the Increment 1 systems are intended to provide capabilities such as the following:

- enhanced situational awareness and force protection through reduced exposure to hazards during soldier-intensive and/or high-risk functions;
- enhanced communications and situational awareness through radios with multiple software waveforms, connections to unattended sensors, and links to existing networking capabilities;
- force protection in an urban setting through a leave-behind, network-enabled reporting system of movement and/or activity in cleared areas;
- independent, soldier-level aerial reconnaissance, surveillance, and target acquisition;
- the ability to precisely attack armored, lightly armored, and stationary or moving targets at extended ranges despite weather/environmental conditions and/or the presence of countermeasures; and
- enhanced situational awareness, force protection, and early warnings in a tactical setting through cross-cues to sensors and weapon systems.
For the second acquisition program, Increment 2 of brigade modernization, the Army has preliminary plans to mature Increment 1 capabilities—potentially demonstrating full FCS threshold requirements—as well as to further develop the system-of-systems common operating environment and battle command software and to demonstrate and field additional capabilities. For example, these may include the Armed Robotic Vehicle Assault (Light)—an unmanned ground vehicle configured for security and assault support missions—and the Common Controller, which will provide the dismounted soldier a handheld device capable of controlling, connecting to, and transferring data from unmanned vehicles and ground sensors. Army officials indicated that they are currently working to define the content, cost, and schedule for Increment 2, with a low-rate initial production decision planned for fiscal year 2013 and a Defense Acquisition Board review expected later in fiscal year 2010.

The third acquisition program would develop a new GCV. The Army reviewed current fighting vehicles across the force structure to determine whether to sustain, improve, divest, or pursue new vehicles based on operational value, capability shortfalls, and resource availability. Per DOD direction, the Army also collaborated with the Marine Corps to identify capability gaps related to fighting vehicles. For development of a new GCV, the Army’s preliminary plans indicate the use of an open architecture design to enable incremental improvements in modular armor; network architecture; and subcomponent size, weight, power, and cooling. DOD and the Army met in February 2010 to make a materiel development decision on the GCV, and the Army was subsequently authorized to release a request for proposals for GCV technology development. Over the next several months, the Army will be conducting an analysis of alternatives to assess potential materiel solutions for the GCV. The Army expects to follow the analysis with a Milestone A decision review, in September 2010, on whether to begin technology development. After Milestone A, Army officials are proposing the use of competitive prototyping with multiple contractors—the number of which will depend on available funding—during the technology development phase, which will feature the use of mature technologies and the fabrication and testing of prototype subsystems. In the technology development phase, the contractors will be expected to fabricate and evaluate several subsystem prototypes, including an automotive test rig and a mine blast test asset. The contractors will also be expected to develop a near-critical design review level design for their integrated vehicle and, in the process, inform the GCV concept development document, which is expected to be finalized at the Milestone B decision point. Competitive prototypes will be fabricated and tested during the engineering and manufacturing development phase. A preliminary design review would be used to validate contractor readiness to enter detailed design at Milestone B in fiscal year 2013. The Army’s preliminary plans indicate that the first production vehicles could be delivered in late fiscal year 2017, about 7 years from Milestone A.

The Army is planning to incrementally develop and field an information network to all of its brigades in a decentralized fashion, that is, not as a separate acquisition program.
The Army has defined a preliminary network strategy and is in the process of defining what the end state of the network will need to be, as well as how it may build up that network over a period of time that has yet to be defined. In the near term, the Army is working to establish a common network foundation to build on and to define a common network architecture based on what is currently available and what is expected to become available in the near future. Current communications, command and control, and networking acquisition programs will continue and will be expected to build upon the current network foundation and architecture over time. Networking capabilities will be expected to meet specific standards and interface requirements. According to Army officials, the ongoing incremental network and software development activities and requirements will be dispersed to these acquisition programs, where they will be considered for further development and possible fielding. The only original FCS network development activities that the Army plans to continue under the FCS development contract are those supporting the network integration kit for Increment 1 and whatever additional networking capabilities may be needed for Increment 2. DOD expects the Army to present its network development plans later in 2010. (See table 1.)

As shown in table 1, the Army is proposing to make substantial investments in its post-FCS acquisition initiatives. For fiscal year 2011, the Army is proposing research and development funding of about $2.5 billion and procurement funding of about $683 million. For the following 4 years (fiscal years 2012-2015), the Army plans additional research and development investments of about $10.4 billion and procurement investments of about $10.7 billion. (A rough tally of these proposed investments appears below.)

For the time being, the Army is continuing selected development work—primarily that related to Increment 1, Increment 2, and network development—under the existing FCS development contract. In October 2009, the Army negotiated a modification to the existing contract that clarified the development work needed for the brigade modernization efforts. The Army previously awarded a contract for long lead item procurement for Increment 1. A modification to that contract was recently issued to begin low-rate initial production of the Increment 1 systems. The Army has also recently released a request for proposals for the technology development phase of the proposed GCV development effort. Contractor proposals for GCV are expected to include plans, solutions, or both for, among other things, survivability (hit avoidance system, armor, and vehicle layout) and mobility (propulsion and power generation and cooling). According to the request for proposals, proposals may draw on prior Army investment in armor recipes, but contractors will receive no inherent advantage for doing so; each solution will be evaluated on its own merits. Contractor proposals are to be submitted in April 2010, and contracts, expected to be cost-plus type, are to be awarded after the Milestone A decision in September 2010.

The challenge facing both DOD and the Army is to set these ground force modernization efforts on the best footing possible by buying the right capabilities at the best value.
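As a rough tally (our arithmetic, not a figure stated in the tables; all amounts are approximate and in billions of dollars), the proposed fiscal year 2011-2015 investments sum to roughly $24 billion, consistent with the overall post-FCS investment estimate cited in the summary of this statement:

\[
\underbrace{2.5 + 0.683}_{\text{FY2011 R\&D + procurement}} \;+\; \underbrace{10.4 + 10.7}_{\text{FY2012--2015 R\&D + procurement}} \;\approx\; 24.3
\]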
In many ways, DOD and the Army have set modernization efforts on a positive course, and they have an opportunity to reduce risks by adhering to the body of acquisition legislation and policy reforms—which incorporate knowledge-based best practices we identified in our previous work—that have been introduced since FCS started in 2003. The new legislation and policy reforms emphasize a knowledge-based acquisition approach, a cumulative process in which certain knowledge is acquired by key decision points before proceeding. In essence, knowledge supplants risk over time. Additionally, DOD and the Army can further reduce risks by considering lessons learned from problems that emerged during the FCS development effort. Initial indications are that the Army is moving in that direction. However, in the first major acquisition decision for the Army’s post-FCS initiatives, DOD and the Army—because they want to support the warfighter quickly—are proceeding with low-rate initial production of one brigade set of Increment 1 systems despite having acknowledged that the systems are immature and unreliable and cannot perform as required.

The body of acquisition legislation and DOD policy reforms introduced since FCS started in 2003 incorporates nearly all of the knowledge-based practices we identified in our previous work (see table 2). For example, DOD acquisition policy includes controls to ensure that programs have demonstrated a certain level of technology maturity, design stability, and production maturity before proceeding into the next phase of the acquisition process. As such, if the Army proceeds with its preliminary plans for new acquisition programs, adherence to this acquisition direction in each new effort provides an opportunity to improve the odds of successful outcomes, reduce risks for follow-on Army ground force modernization efforts, and deliver needed equipment more quickly and at lower cost. Conversely, acquisition efforts that proceed with less technology, design, and manufacturing knowledge than best practices suggest face a higher risk of cost increases and schedule delays.

As shown in table 2, the cumulative building of knowledge consists of information that should be gathered at three critical points over the course of a program:

Knowledge point 1 (at program launch, the Milestone B decision): Establishing a business case that balances requirements with resources. At this point, a match must be made between the customer’s needs and the developer’s available resources—technology, engineering knowledge, time, and funding. A high level of technology maturity, demonstrated via a prototype in its intended environment, indicates whether resources and requirements match. Also, the developer completes a preliminary design of the product that shows that the design is feasible and that requirements are predictable and doable.

Knowledge point 2 (at the critical design review, between design integration and demonstration): Gaining design knowledge and reducing integration risk. At this point, the product design is stable because it has been demonstrated to meet the customer’s requirements as well as cost, schedule, and reliability targets. The best practice is to achieve design stability at the system-level critical design review, usually held midway through system development. Completion of at least 90 percent of engineering drawings at this point provides tangible evidence that the product’s design is stable, and a prototype demonstration shows that the design is capable of meeting performance requirements.

Knowledge point 3 (at the production commitment, or Milestone C, decision): Achieving predictable production. This point is achieved when it has been demonstrated that the developer can manufacture the product within cost, schedule, and quality targets. The best practice is to ensure that all critical manufacturing processes are in statistical control—that is, they are repeatable, sustainable, and capable of consistently producing parts within the product’s quality tolerances and standards—at the start of production.

The Army did not position the FCS program for success because it did not establish a knowledge-based acquisition approach—a strategy consistent with DOD policy and best acquisition practices—to develop FCS. The Army started the FCS program in 2003 before defining what the systems were going to be required to do and how they were going to interact. It moved ahead without determining whether the FCS concept could be developed in accordance with a sound business case. Specifically, at the FCS program’s start, the Army had not established firm requirements, mature technologies, a realistic cost estimate, or an acquisition strategy wherein knowledge drives schedule. By 2009, the Army still had not shown that emerging FCS system designs could meet requirements, that critical technologies were at minimally acceptable maturity levels, and that the acquisition strategy was executable within estimated resources.

With one notable exception, there are initial indications that DOD and the Army are moving forward to implement the acquisition policy reforms as they proceed with ground force modernization; these indications include the Secretary of Defense’s statement about the ground vehicle modernization program—to “get the acquisition right, even at the cost of delay.” In addition, DOD anticipates that the GCV program will comply with DOD acquisition policy by utilizing competitive system or subsystem prototypes. According to a DOD official, a meeting was recently held to consider a materiel development decision for the GCV, and the Army is proposing to conduct a preliminary design review on GCV before its planned Milestone B decision point. Additionally, a configuration steering board is planned for later in 2010 to address the reliability and military utility of infantry brigade systems.

The one notable exception: in the first major acquisition decision for the Army’s post-FCS initiatives, DOD and the Army—because they want to support the warfighter quickly—are proceeding with low-rate initial production of Increment 1 systems despite having acknowledged that the systems are immature and unreliable and cannot perform as required. In December 2009, the Under Secretary of Defense for Acquisition, Technology and Logistics approved low-rate initial production of Increment 1 equipment for one infantry brigade but noted that there is an aggressive risk reduction plan to grow and demonstrate the network maturity and reliability needed to support continued Increment 1 production and fielding. In the associated acquisition decision memorandum, the Under Secretary acknowledged the risks of pursuing Increment 1 production, including early network immaturity; lack of a clear operational perspective on the early network’s value; and large reliability shortfalls of the network, systems, and sensors.
The Under Secretary also said that he was aware of the importance of fielding systems to the current warfighter and that the flexibility to deploy components as available would allow DOD to “best support” the Secretary of Defense’s direction to “win the wars we are in.” Because of that, the Under Secretary specified a number of actions to be taken over the next year or more and directed the Army to work toward having all components for the program fielded as soon as possible and to deploy components of the program as they are ready. However, the Under Secretary did not specify the improvements that the Army needed to make or make those improvements a prerequisite for approving additional production lots of Increment 1.

The approval for low-rate initial production is at variance with DOD policy and Army expectations. DOD’s current acquisition policy requires that systems be demonstrated in their intended environments using the selected production-representative articles before the production decision occurs. However, the testing that formed the basis for the Increment 1 production decision included surrogates and non-production-representative systems, including the communications radios. As we have previously noted, testing with surrogates and non-production-representative systems is problematic because it does not conclusively show how well the systems can address current force capability gaps. Furthermore, Increment 1 systems—which are slated for a fiscal year 2011-12 fielding—do not yet meet the Army’s expectation that new capabilities will be tested and their performance validated before being deployed in a capability package. As noted in 2009 test results, system performance and reliability during testing were marginal at best. For example, the demonstrated reliability of the Class I unmanned aerial vehicle was about 5 hours between failures, compared with a requirement of 23 hours between failures. The Army asserts that Increment 1 systems’ maturity will improve rapidly but admits that it will be a “steep climb” and not a low-risk effort.

While the Under Secretary took current warfighter needs into account in his decision to approve Increment 1 low-rate initial production, it is questionable whether the equipment can satisfy one of the main principles underpinning knowledge-based acquisition—that warfighter needs can best be met with the chosen concept. Test reports from late 2009 showed conclusively that the systems had limited performance and that this reduced the test unit’s ability to assess and refine the tactics, techniques, and procedures associated with employment of the equipment. The Director, Operational Test and Evaluation recently reported that none of the Increment 1 systems has demonstrated an adequate level of performance to be fielded to units and employed in combat. Specifically, the report noted that reliability is poor and falls short of the level expected of an acquisition system at this stage of development. Shortfalls in meeting reliability requirements may adversely affect Increment 1’s overall operational effectiveness and suitability and may increase life-cycle costs. In addition, in its 2009 assessment of the increment’s limited user test—the last test before the production decision was made—the Army’s Test and Evaluation Command indicated that the Increment 1 systems would be challenged to meet warfighter needs.
It concluded that, with the exception of the non-line-of-sight launch system, which had not yet undergone flight testing, all the systems were considered operationally effective and survivable, but with limitations, because they were immature and had entered the test as pre-production-representative systems, pre-engineering design models, or both. Additionally, the command noted that these same systems were not operationally suitable because they did not meet required reliability expectations. In recent testimony before a House subcommittee, the Director, Operational Test and Evaluation stated that flight testing of the non-line-of-sight launch system was conducted in January and February 2010. In that testing, two of the six missiles fired achieved target hits and four missed their targets. The Army informed the Director that Failure Review Board investigations of the flight failures are under way.

Army and DOD officials made a very difficult decision when they canceled what was the centerpiece of Army modernization—the FCS program. As they transition away from the FCS concept, both the Army and DOD have an opportunity to improve the likely outcomes for the Army’s ground force modernization initiatives by adhering closely to recently enacted acquisition reforms and by seeking to avoid the numerous acquisition pitfalls that plagued FCS. As DOD and the Army proceed with these significant financial investments, they should keep in mind the Secretary of Defense’s admonition about the new ground vehicle modernization program: “get the acquisition right, even at the cost of delay.” Based on the preliminary plans, we see a number of good features, such as the Army’s decision to pursue an incremental acquisition approach for its post-FCS efforts. However, it is vitally important that each of those incremental efforts adhere to knowledge-based acquisition principles and strike a balance among what is needed, how fast it can be fielded, and how much it will cost. Moreover, the acquisition community needs to be held accountable for expected results, and DOD and the Army must not be willing to accept whatever results are delivered regardless of military utility.

We are concerned that, in their desire for speedy delivery of emerging equipment to our warfighters in the field, DOD and the Army did not strike the right balance when they prematurely approved low-rate initial production of Increment 1 of brigade modernization. Although the Army argues that it needs to field these capabilities as soon as possible, none of these systems has been designated as urgent, and it is not helpful to provide capability to the warfighter early if that capability is not technically mature and reliable. If the Army moves forward too quickly with immature Increment 1 designs, it could face additional delays as it and its contractors concurrently address technology, design, and production issues. Production and fielding is not the appropriate acquisition phase in which to be working on such basic design issues.

In our recent report, we made recommendations intended to reduce the risk of proceeding into production with immature technologies. In that regard, we recommended that the Secretary of Defense mandate that the Army correct the identified maturity and reliability issues with the Increment 1 network and systems prior to approving any additional lots of the Increment 1 network and systems for production.
Specifically, the Army should ensure that the network and the individual systems have been independently assessed as fully mature, meet reliability goals, and have been demonstrated to perform as expected using production-representative prototypes. We also recommended that the Secretary of the Army not allow fielding of the Increment 1 network or any of the Increment 1 systems until the identified maturity and reliability issues have been corrected. In response, DOD concurred with our recommendations and stated that the need to correct those issues has been communicated to the Army. DOD also asserted that Increment 1 systems will be tested in their production configuration and that performance will be independently assessed against capability requirements before DOD approves production of any additional lots of Increment 1 systems. The Army has many Increment 1 development and testing activities planned for the coming months, and we intend to monitor their progress closely. DOD also stated that Increment 1 systems would not be fielded until performance is sufficient to satisfy the warfighter’s capability requirements. It is essential that (1) the Increment 1 network and systems clearly demonstrate their ability to fully satisfy the needs of the warfighter and (2) DOD and the Army not be willing to accept whatever acquisition results are delivered regardless of their military utility. Again, we intend to follow the Army’s and DOD’s activities and actions in the coming months.

Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or members of the subcommittee may have.

For future questions about this statement, please contact Michael J. Sullivan at (202) 512-4841 or [email protected]. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this statement include William R. Graveline, Assistant Director; William C. Allbritton; Andrea M. Bivens; Noah B. Bleicher; Tana M. Davis; Marcus C. Ferguson; and Robert S. Swierczek.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 2003, the Future Combat System (FCS) program has been the centerpiece of the Army’s efforts to transition to a lighter, more agile, and more capable combat force. In 2009, however, concerns over the program’s performance led to the Secretary of Defense’s decision to significantly restructure and ultimately cancel the acquisition program. As a result, the Army is outlining a new approach to ground force modernization. This statement outlines the Army’s preliminary post-FCS actions and identifies the challenges the Department of Defense (DOD) and the Army must address as they proceed. This testimony is based on GAO’s report on the Army’s ground force modernization effort released on March 15, 2010. It emphasizes the December 2009 decision to begin low-rate initial production for Increment 1 of the Brigade Combat Team Modernization.

The Army is implementing DOD direction and redefining its overall modernization strategy as a result of the Secretary of Defense’s decision to significantly restructure the FCS program. It is transitioning from the FCS long-term acquisition orientation to a shorter-term approach that biennially develops and fields new increments of capability within capability packages. It now has an approved acquisition program that will produce and field the initial increment of the FCS spinout equipment, which includes unmanned aerial and ground vehicles as well as unattended sensors and munitions. It has preliminary plans for two other major defense acquisition programs to (1) define and develop follow-on increments and (2) develop a new ground combat vehicle (GCV). The individual systems within Increments 1 and 2 are to be integrated with a preliminary version of an information network. Currently, the Army is continuing selected development work—primarily that related to Increments 1 and 2 and the information network—under the existing FCS development contract. The Army has recently released a request for proposals for the technology development phase of the proposed GCV development effort. The Army’s projected investment in Increments 1 and 2 and GCV is estimated to be over $24 billion through fiscal year 2015.

With these modernization efforts at an early stage, DOD and the Army face the immediate challenge of setting them on the best possible footing by buying the right capabilities at the best value. DOD and the Army have an opportunity to better position these efforts by utilizing an enhanced body of acquisition legislation and DOD policy reforms—which now incorporate many of the knowledge-based practices that GAO has previously identified—as well as lessons learned from the FCS program. Preliminary plans suggest that the Army and DOD are strongly considering lessons learned. However, DOD recently approved the first of several planned low-rate initial production lots of Increment 1 despite having acknowledged that the systems and network were immature, unreliable, and not performing as required. That decision reflects DOD’s emphasis on quickly providing new capabilities to combat units. The decision did not follow knowledge-based acquisition practices and runs the risk of delivering unacceptable equipment to the warfighter and trading away acquisition principles whose validity has been so recently underscored. The Army needs to seize the opportunity to integrate acquisition reforms, knowledge-based acquisition practices, and lessons learned from FCS into future modernization efforts to increase the likelihood of successful outcomes.
Egypt is a key strategic partner of the United States and is among the top recipients of U.S. security-related assistance. According to U.S. officials, the U.S.-Egypt strategic partnership is based on the shared interests of promoting a stable and prosperous Egypt, securing regional peace and maintaining peace between Egypt and Israel, and countering violent extremism throughout the region. For example, Egypt has been a member of the U.S. coalition against ISIL since September 2014. In support of this strategic partnership, the U.S. government provides security assistance to Egypt through a number of accounts. Table 1 describes these accounts, the agencies responsible for funding and implementing programs under these accounts, and the goals of these programs.

By law, Foreign Military Financing (FMF) funds are obligated upon apportionment from the Office of Management and Budget. DOD therefore refers to the subsequent designation of FMF funds for a particular program or contract as a “commitment.” For programs funded with appropriations from the Nonproliferation, Anti-terrorism, Demining, and Related Programs (NADR); International Narcotics Control and Law Enforcement (INCLE); and International Military Education and Training (IMET) accounts, funds are considered to be obligated once a legal liability of the U.S. government for the payment of goods and services ordered or received has been created. An unobligated balance is the amount of budget authority that has not yet been obligated. Unliquidated obligations, also known as obligated balances, are the amount of obligations already incurred for which payment has not yet been made. Disbursements are the amounts paid by federal agencies to liquidate government obligations. (A stylized example illustrating how these terms relate appears at the end of this background discussion.)

The Arms Export Control Act of 1976 authorizes the President to control the export of defense articles and services. The act authorizes the sale of defense articles and services to foreign countries through Foreign Military Sales and authorizes commercial exports of U.S. defense articles and services to foreign countries through direct commercial sales. State makes policy determinations for Foreign Military Sales, including which countries are eligible to participate, and DOD’s Defense Security Cooperation Agency administers the Foreign Military Sales program. State’s Directorate of Defense Trade Controls administers direct commercial sales by licensing exports of U.S. defense articles and services from U.S. companies to foreign entities.

In 1996, Congress amended the Arms Export Control Act to require the President to establish a program for monitoring the end use of defense articles and services sold, leased, or exported under the Arms Export Control Act or the Foreign Assistance Act of 1961, including through Foreign Military Sales and direct commercial sales. The law required that, to the extent practicable, the program be designed to provide reasonable assurances that recipients comply with restrictions imposed by the U.S. government on the use, transfer, and security of defense articles and defense services and that such articles and services are being used for the purposes for which they are provided. DOD’s Defense Security Cooperation Agency administers the Golden Sentry program, which was established to monitor the end use of defense articles and defense services transferred through Foreign Military Sales, and officials at the Office of Military Cooperation–Egypt (OMC-E) implement monitoring in Egypt.
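Returning briefly to the budget terms defined above: the following stylized example (our illustration; the amounts are hypothetical, in millions of dollars, and are not drawn from the report) shows how the terms relate. Suppose $100 million is allocated, $80 million is obligated, and $50 million is disbursed:

\[
\begin{aligned}
\text{unobligated balance} &= 100 - 80 = 20 &&\text{(budget authority not yet obligated)} \\
\text{unliquidated obligations} &= 80 - 50 = 30 &&\text{(obligated but not yet paid)} \\
\text{disbursements} &= 50 &&\text{(amounts paid to liquidate obligations)}
\end{aligned}
\]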
Under the Golden Sentry program, DOD implements two levels of end-use monitoring—enhanced and routine—and conducts periodic Compliance Assessment Visits. DOD requires enhanced end-use monitoring for sensitive defense articles, services, or technologies specifically designated by the military departments’ export policy, by the interagency release process, or by DOD policy as a result of consultation with Congress. DOD requires routine end-use monitoring for all defense articles and services provided through government-to-government programs. Routine end-use monitoring is conducted in conjunction with other security cooperation functions and uses any readily available source of information.

State’s Directorate of Defense Trade Controls administers the Blue Lantern program, which was established to monitor the end use of defense articles and services exported through direct commercial sales. State officials at the U.S. embassy in Cairo, Egypt—in this report, “Embassy Cairo”—are primarily responsible for conducting Blue Lantern checks in Egypt. Under its Blue Lantern program, State is required to conduct end-use monitoring checks on the basis of a case-by-case review of export license applications against established criteria for determining potential risks. To determine whether to conduct a Blue Lantern check, State considers 20 indicators, such as unfamiliar end users, foreign intermediate consignees with no apparent connection to the end user, and requests for sensitive commodities whose diversion or illicit retransfer could have a negative impact on U.S. national security. See appendix II for an overview of the Blue Lantern and Golden Sentry end-use monitoring programs and for more details on DOD and State accountability efforts.

To help ensure that U.S. assistance is not used to support human rights violators, Congress prohibits the provision of certain types of assistance to foreign security forces implicated in human rights abuses. Section 620M of the Foreign Assistance Act of 1961, known colloquially as the State Leahy law, prohibits the United States from providing assistance under the Foreign Assistance Act or the Arms Export Control Act to any unit of the security forces of a foreign country if the Secretary of State has credible information that such unit has committed a gross violation of human rights. Section 1204(a)(1) of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015, known colloquially as the DOD Leahy law, prohibits the use of DOD funds for any training, equipment, or other assistance for a unit of a foreign security force if the Secretary of Defense has credible information that the unit committed a gross violation of human rights. According to State, the Leahy laws and the corresponding policies developed to enforce and supplement these laws (see text box) are intended to leverage U.S. assistance to encourage foreign governments to prevent their security forces from committing human rights violations and to hold their forces accountable when violations occur.

Key Terms of the Leahy Laws as Defined in State and DOD Policy

The State and DOD Leahy laws (22 U.S.C. § 2378d, 10 U.S.C. § 2249e) do not define several key terms used in the laws. State and DOD have sought to define these terms in policy documents.

Security forces of a foreign country. State guidance defines a “security force” as any division or entity (including an individual) authorized by a state or political subdivision to use force (including but not limited to the power to search, detain, and arrest) to accomplish its mission. Therefore, the guidance states that “security forces” could be units of law enforcement or the military. According to DOD’s Office of General Counsel, DOD also adheres to this definition. However, DOD may sometimes request vetting for individuals or groups that would not constitute foreign security forces, such as a government bureaucrat.

Credible information. State guidance notes that the legislative history indicates that credible information is not intended to mean only evidence that would be admissible in a court of law and that the standard should generally be regarded as low. The guidance provides latitude in evaluating the credibility of information and advises personnel conducting human rights vetting to exercise good judgment and common sense. It notes that major international nongovernmental organizations and most independent newspapers are considered relatively credible, whereas credibility among opposition groups and smaller nongovernmental organizations varies. According to DOD’s Office of General Counsel, while DOD retains legal authority for final decisions regarding specific cases funded with DOD appropriations, it relies on State’s judgment in assessing the credibility of available information.

Gross violation of human rights. State guidance notes that the Leahy laws do not contain a definition of “gross violations of human rights.” State therefore uses the definition included in Section 502B(d) of the Foreign Assistance Act of 1961 as its working standard: “Gross violations of internationally recognized human rights include torture or cruel, inhuman, or degrading treatment or punishment; prolonged detention without charges and trial; causing the disappearance of persons by the abduction and clandestine detention of those persons; and other flagrant denial of the right to life, liberty, or the security of person.” State guidance further clarifies that this definition includes extrajudicial killing and politically motivated rape.

For a comparison of the provisions in the State and DOD Leahy laws, see appendix III.

To determine whether there is credible information of a gross violation of human rights in accordance with both the State and DOD Leahy laws, State has established a human rights vetting process. As illustrated in figure 1, State’s human rights vetting process for Egypt involves vetting by personnel representing selected agencies and State offices at Embassy Cairo and, at State headquarters in Washington, D.C., by State’s Bureau of Democracy, Human Rights, and Labor (DRL) and Bureau of Near Eastern Affairs (NEA). According to State officials, the State offices and other U.S. government agencies at Embassy Cairo that participate in the vetting process for Egypt are State’s Consular Section, Political Section, and Regional Security Office; the Department of Justice’s Office of the Legal Attaché and Drug Enforcement Administration; and the Department of Homeland Security. The embassy and headquarters personnel screen prospective recipients of assistance by searching relevant files, databases, and other sources of information for credible information about gross violations of human rights.
State processes, documents, and tracks human rights vetting requests and results through its International Vetting and Security Tracking (INVEST) system, a web-based database. DRL is responsible for overseeing the vetting process and for developing human rights vetting policies, among other duties.

U.S. agencies have committed or disbursed almost all of the approximately $6.5 billion allocated for security-related assistance for Egypt in fiscal years 2011 through 2015. The U.S. government allocated security-related assistance for Egypt from a number of accounts during this period; however, almost all of this funding—99.5 percent—was from the FMF account. As of September 30, 2015, State had committed 100 percent of the FMF funds allocated for Egypt in fiscal years 2011 through 2015. State had also disbursed about 40 percent of the more than $32 million allocated for Egypt from four other security-related assistance accounts during this period, as of the same date. (A rough arithmetic check of these figures appears at the end of this discussion.)

Of the almost $6.5 billion in security-related assistance funds allocated for Egypt in fiscal years 2011 through 2015, U.S. agencies had committed or disbursed more than $6.4 billion, or almost 100 percent, as of September 30, 2015. Table 2 shows the status of U.S. security-related assistance funds allocated for Egypt over the 5 fiscal years from 2011 through 2015, as well as totals for the period, as of September 30, 2015. Some of the unobligated balances shown in table 2 are no longer available to incur new obligations. However, many of these unobligated balances remain available for obligation for an additional 4 years beyond their initial period of availability, by operation of law. For the full disposition of these unobligated balances for each account, see appendix IV.

In fiscal years 2011 through 2015, the U.S. government funded bilateral security-related assistance to Egypt from a number of accounts; however, almost all of this allocated funding (99.5 percent) was from the FMF account. As shown in table 3, State committed all of the approximately $6.4 billion in FMF funding allocated for Egypt in fiscal years 2011 through 2015, as of September 30, 2015. State was able to commit all of this funding in part because of unique authorities associated with FMF funding for Egypt. Annual appropriations acts for fiscal years 2011 through 2015 contain language stating that FMF funds shall be obligated upon apportionment. In addition, the U.S. government has historically provided Egypt with FMF assistance through a statutory cash flow financing arrangement that allows Egypt to agree to the purchase of defense goods and services in a given year and then pay for them over time, using FMF funds allocated from future appropriations. Cash flow financing gives Egypt the flexibility to commit to major acquisitions in one year that will be paid for over time, similar to installment payments. Because of Egypt’s payment schedules on existing contracts, much of its FMF funding for fiscal years 2011 through 2015 was committed shortly after being obligated, according to State and DOD officials.

Egypt generally uses the majority of its allocated FMF funds to purchase defense goods and services through the Foreign Military Sales program. However, it also uses FMF funds to make some direct commercial sales purchases. Egypt is one of only 10 countries that Congress has made eligible to use FMF funds to make direct commercial sales purchases.
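As a rough arithmetic check (our calculation; the report’s figures are rounded), the four non-FMF accounts correspond to the roughly 0.5 percent of allocations not provided through FMF, and about 40 percent of that amount had been disbursed:

\[
\begin{aligned}
0.005 \times 6{,}500 &\approx 32.5 &&\text{(the non-FMF share of the \$6.5 billion total, in millions; the ``more than \$32 million'')} \\
0.40 \times 32 &\approx 13 &&\text{(millions disbursed from the four accounts as of September 30, 2015)}
\end{aligned}
\]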
Egypt has used FMF funding to purchase and sustain a wide array of military systems, including major systems such as F-16 aircraft, Apache helicopters, and M1A1 tanks (see fig. 2).

As of September 30, 2015, State had disbursed less than half of the funding allocated for Egypt in fiscal years 2011 through 2015 from the IMET, INCLE, and NADR Antiterrorism Assistance (ATA) accounts and had disbursed about 56 percent of NADR Export Control and Related Border Security (EXBS) account funding (see table 4). In total, State had disbursed about 40 percent of the more than $32 million allocated for Egypt from these four accounts during this period. Table 4 provides detailed information on the amount of funds allocated, unobligated, and disbursed for each of these accounts for fiscal years 2011 through 2015. The majority of the funding that State had not disbursed as of September 30, 2015, was appropriated in fiscal years 2014 and 2015. However, each of the accounts also had funding dating back to fiscal years 2011 through 2013 that had not been disbursed. For example, as of September 30, 2015, State had not disbursed 22 percent of IMET funding, 34 percent of INCLE funding, 58 percent of ATA funding, and 27 percent of EXBS funding appropriated for Egypt in fiscal years 2011 through 2013. Of the amounts not disbursed from the four accounts, 95 percent remained available for obligation as of September 30, 2015; the majority of the remaining 5 percent had expired and was no longer available for disbursement.

State officials noted various challenges since the beginning of fiscal year 2011 that affected State’s ability to obligate and disburse funds from these accounts for Egypt, including Egypt’s political transitions, the security situation in Egypt, and various legal and policy restrictions on assistance for Egypt. For example, Embassy Cairo officials noted that concerns about the ability to clear key Egyptian interlocutors through the Leahy vetting process affected their ability to obligate and disburse funds from some accounts, such as NADR ATA. Appendix V provides more details on the status of funds for these four accounts. The U.S. government has used funding from these accounts for a range of security assistance activities. For example, it has used IMET funding to provide training to Egyptian military personnel on U.S. military doctrine and values, INCLE funding to train the Egyptian police on forensic investigative techniques and community policing models, and NADR funding to expand cooperation with the Egyptian government on efforts to target and disrupt international terrorism and weapons smuggling groups.

DOD and State implemented end-use monitoring for equipment transferred to Egyptian security forces; however, challenges in obtaining Egyptian government cooperation sometimes hampered these monitoring efforts, and a lack of agency documentation limited accountability for some of them. Under its Golden Sentry program, DOD conducted required serial number inventories and physical security inspections for sensitive equipment in 2015, as well as routine end-use monitoring for less sensitive items, but the department lacked documentation of some prior-year monitoring efforts. DOD officials also faced challenges in gaining access to Egyptian government storage facilities to conduct physical security inspections.
Under its Blue Lantern program, State conducted 12 end-use monitoring checks in fiscal years 2011 through 2015, but slow and incomplete responses from the Egyptian government and periods of limited staffing at Embassy Cairo limited the effectiveness of some checks.

Under the Golden Sentry program in Egypt, DOD implements two levels of end-use monitoring and conducts compliance visits. In fiscal years 2011 through 2015, DOD conducted annual serial number inventories for sensitive equipment provided to Egypt, including Harpoon Block II missiles, night vision devices (NVD), and Stinger missile systems, as required by its enhanced end-use monitoring policy. In fiscal year 2015, DOD also completed physical inspections of storage sites for these items, as required by its enhanced end-use monitoring policy, but lacked evidence of having completed these required inspections in prior years. DOD officials in Cairo also noted challenges in gaining access to an Egyptian government storage facility for NVDs prior to 2015 to verify the physical security of these items. For less sensitive items, DOD documented 49 routine end-use monitoring observations since 2012, including observations of M1A1 tanks and Apache helicopters.

Under its Golden Sentry program, DOD has implemented two levels of end-use monitoring—enhanced and routine—and is to conduct periodic Compliance Assessment Visits.

Enhanced end-use monitoring. DOD requires enhanced end-use monitoring for sensitive defense articles, services, or technologies specifically designated by the military departments’ export policy, by the interagency release process, or by DOD policy as a result of consultation with Congress. As of November 2015, Egypt had three types of sensitive military equipment that require enhanced end-use monitoring—Harpoon Block II missiles, certain types of NVDs, and Stinger missile systems (see fig. 3). DOD’s policy in the Security Assistance Management Manual and the corresponding standard operating procedures for end-use monitoring at the Office of Military Cooperation-Egypt (OMC-E) require DOD officials annually to physically inventory designated equipment by serial number and to conduct physical security checks of the storage sites where designated equipment is kept. DOD policy requires DOD officials to conduct enhanced end-use monitoring using physical security and accountability checklists. Inventory results must be recorded in DOD’s Security Cooperation Information Portal database—a web-based database that DOD designed to manage various security assistance activities, including Golden Sentry end-use monitoring. Completed checklists must be attached to inventory records and maintained for 5 years.

Routine end-use monitoring. DOD requires routine end-use monitoring for all defense articles and services provided through government-to-government programs. OMC-E personnel are required to observe and report any potential misuse or unapproved transfer of U.S. defense articles. Routine end-use monitoring is to be conducted in conjunction with other security cooperation functions and uses any readily available source of information. For example, when visiting a military installation on other business, U.S. officials might observe how a host country’s military is using U.S. equipment. DOD policy states that routine end-use monitoring must be documented at least quarterly and the records maintained for 5 years. Inventories and physical security checks are not required as part of routine end-use monitoring.

Compliance Assessment Visits.
In addition to enhanced and routine end-use monitoring, DOD is required to conduct periodic Compliance Assessment Visits to review and evaluate OMC-E’s compliance with Golden Sentry end-use monitoring policy and the Egyptian government’s compliance with specific physical security and accountability requirements and other terms of sale. Compliance Assessment Visits may include facility visits, records inspections, reviews of routine and enhanced end-use monitoring policies and procedures, and inventories of U.S.-origin defense articles. DOD may consider various factors when determining which countries to schedule for visits, including the types and quantities of defense articles requiring enhanced end-use monitoring, the host nation’s history of compliance with transfer agreements, and the region’s political or military stability. DOD conducted a Compliance Assessment Visit in Egypt in February 2012. DOD’s overall assessment based on this visit was “needs improvement.” According to the manager of the Golden Sentry program, DOD planned to conduct another Compliance Assessment Visit in Egypt in February 2016.

Under the Golden Sentry program in Egypt, DOD policy requires that its personnel conduct annual serial number inventories for certain sensitive equipment, including Harpoon Block II missiles, certain types of NVDs, and Stinger missile systems. As of June 1, 2015, DOD data indicate that OMC-E was compliant with annual inventory requirements for fiscal year 2015 and was able to account for 100 percent of the items subject to enhanced end-use monitoring, including Harpoon missiles, Stinger missiles, and NVDs. Furthermore, DOD data indicate that OMC-E was at least 98 percent compliant with annual inventory requirements at the end of each of fiscal years 2011 through 2014.

To further assess the quality of DOD’s annual inventories, during our June 2015 audit work in Cairo, Egypt, we conducted an inventory check of serial numbers for a random sample of Stinger missiles and were able to account for all of the missiles in our sample. We physically verified serial numbers for 95 percent of the missiles in our sample. For the remaining 5 percent, Egyptian officials provided documentation showing that, since DOD’s previous inventory, the Egyptian Armed Forces had either fired the missiles in testing or deployed them, rendering them unavailable for observation.

In advance of our trip, we also requested to inventory a sample of NVDs subject to enhanced end-use monitoring inventory requirements in Egypt. According to the OMC-E official responsible for implementing the Golden Sentry program in Egypt, OMC-E communicated our request to the Egyptian government along with a list of serial numbers for NVDs subject to annual inventory requirements. However, the NVD storage facility that the Egyptian government arranged for us to visit did not house these NVDs but rather NVDs that did not require annual inventories. As a result, we were unable to complete the planned inventory of NVDs subject to enhanced end-use monitoring requirements. According to the OMC-E official responsible for Golden Sentry in Egypt, Egyptian government officials may have been confused by our request and arranged for us to visit this facility because the NVDs stored there had been subject to annual inventory requirements until December 2014.
Under Golden Sentry enhanced end-use monitoring, DOD personnel also are required to conduct annual physical security inspections of the storage sites where designated sensitive equipment is housed. OMC-E officials said that they generally conduct physical security inspections when they make facility visits to complete required serial number inventories. DOD policy requires that a checklist be completed, attached to inventory records, and maintained for 5 years.

OMC-E provided us completed checklists showing that it had conducted all required physical security checks in fiscal year 2015. Completed checklists showed that DOD personnel verified the physical security of the two facilities housing Harpoon Block II missiles, in March 2015; the two facilities housing NVDs, in February and March 2015; and the facility housing Stinger missiles, in March and June 2015. During our June 2015 fieldwork in Egypt, we visited the storage facility for Stinger missiles and independently verified the physical security of this facility. For example, we verified that the facility had clear zones and fences and had established procedures for accessing the bunkers where the missiles are housed, including three-key entry and sign-in, sign-out procedures. We also noted certain deficiencies. For example, the alarm system and closed-circuit TVs for the two bunkers were installed but not operational. OMC-E officials said that they were aware of these issues, which were due to FMF funding shortfalls. As of October 2015, an OMC-E official stated that the Egyptian Ministry of Defense had allocated funds to resolve these issues and was working on a contract with the U.S. Army Corps of Engineers to complete the necessary work.

However, OMC-E lacked the evidence required for equipment subject to enhanced end-use monitoring (i.e., completed checklists) documenting any physical security inspections conducted during facility visits in fiscal years 2013 and 2014. The OMC-E official responsible for Golden Sentry end-use monitoring in Egypt was unsure why checklists that may have been used to conduct physical security checks during these years had not been maintained along with the serial number inventory records, as required by DOD’s Security Assistance Management Manual. OMC-E updated its operating instruction in June 2015 to note, among other things, the requirement to maintain records of completed physical security checks.

According to OMC-E officials and documentation, on at least two occasions before fiscal year 2015, Egyptian officials prevented U.S. personnel from conducting required physical security inspections at a storage site housing many of Egypt’s U.S.-origin NVDs that at the time were subject to Golden Sentry enhanced end-use monitoring. In each instance, Egyptian officials brought the NVDs to a central location, enabling DOD personnel to conduct serial number inventories. While Egypt, as a sovereign nation, is not subject to U.S. government requirements unless it has agreed to them with the U.S. government, the Egyptian government has committed in writing to permit inspections of NVD storage facilities. The Security Assistance Management Manual includes standard terms and conditions that must be included in a Letter of Offer and Acceptance, including a provision in which the purchaser agrees to permit scheduled inspections or physical inventories upon the U.S. government’s request, except when other means of end-use monitoring verification shall have been mutually agreed.
Five of the six Letters of Offer and Acceptance covering NVDs subject to enhanced end-use monitoring physical security inspections as of December 1, 2014, included such a provision. According to the manager of the Golden Sentry program at the Defense Security Cooperation Agency, one Letter of Offer and Acceptance did not contain this provision because the NVDs were provided on a grant basis and the Egyptian government agreed to permit observation of the items under a separate exchange of letters. In addition to the terms and conditions in the Letters of Offer and Acceptance, a June 2012 control plan for the physical security and accountability of NVDs signed by the Egyptian Ministry of Defense notes that NVD storage facilities will be subject to compliance assessments, audits, and inventories by U.S. representatives.

Nonetheless, during a February 2012 Compliance Assessment Visit, Egyptian officials prohibited DOD inspectors from accessing an NVD storage facility, which prevented the inspectors from assessing whether the proper physical security measures were in place, according to DOD officials. This contributed to DOD’s assessment that Egypt’s procedures to comply with the conditions of the transfer agreements for U.S.-provided defense articles needed improvement. In October 2014, Egyptian officials again prohibited OMC-E personnel from accessing the same NVD storage facility to verify physical security, according to DOD documentation. A senior OMC-E official stated that he asked Egyptian officials to comply with the requirement to permit physical inspections of NVD storage facilities, but the officials did not comply. According to another OMC-E official responsible for Golden Sentry end-use monitoring in Egypt, the Egyptian Armed Forces told OMC-E officials that they brought the NVDs to a central location to be inventoried for OMC-E’s convenience and security because the NVDs were deployed with units located around the country, including some in unsafe locations. The same official noted that OMC-E explained to its Egyptian counterparts that DOD’s policy required them to either verify the physical security of the facilities where the NVDs were housed or review the Egyptian Armed Forces’ log books to confirm that the NVDs were deployed with military units in the field and were no longer in storage. However, Egyptian officials were not responsive to either of these requests, for reasons that are unclear, according to the OMC-E official.

DOD personnel are required to conduct routine end-use monitoring in conjunction with other security cooperation functions. According to DOD policy, routine end-use monitoring must be documented on a quarterly basis and the records must be maintained for 5 years. From July 2012 through June 2015, OMC-E documented 49 routine end-use monitoring observations for a variety of military equipment, including M1A1 tanks, Apache helicopters, and various types of fixed-wing aircraft. As shown in table 5, OMC-E documented these observations during 9 of the 12 quarters of this period, with most occurring in fiscal years 2014 and 2015. According to the manager of the Golden Sentry End-Use Monitoring program at the Defense Security Cooperation Agency, there may be circumstances when DOD personnel are not able to perform routine end-use monitoring, as was the case when Embassy Cairo was under ordered departure status from July to November 2013.
During our fieldwork in Egypt in June 2015, we accompanied OMC-E personnel on a routine end-use monitoring visit to observe F-16 aircraft at an airbase outside of Cairo. During this visit, we observed six aircraft parked on the tarmac and an emergency shelter constructed to house a new aircraft upon delivery from the United States. OMC-E officials conducted this visit in accordance with DOD policy, and we did not observe any end-use monitoring violations.

Under the Blue Lantern program, in fiscal years 2011 through 2015, State conducted 12 end-use monitoring checks of Egyptian government entities that purchased U.S. equipment through direct commercial sales. However, State completed only 2 of these checks within its self-imposed time frames, in part because of the Egyptian government’s slow responses to Blue Lantern inquiries. In addition, for some Blue Lantern checks, the Egyptian government provided no response, or only partial responses, to State’s inquiries about the end use of equipment transferred through direct commercial sales. Although State has outreach programs to foster cooperation and compliance with Blue Lantern checks, it did not conduct any such outreach in Egypt in fiscal years 2011 through 2015.

Under its Blue Lantern program, State is required to conduct end-use monitoring checks based on a case-by-case review of export license applications against established criteria for determining potential risks. To determine whether to conduct a Blue Lantern check, State considers 20 indicators that may trigger a check, such as unfamiliar end users, foreign intermediate consignees with no apparent connection to the end user, and requests for sensitive commodities whose diversion or illicit retransfer could have a negative impact on U.S. national security. However, State is not required to conduct a particular number of Blue Lantern checks in a given fiscal year.

State conducted two types of end-use monitoring checks in Egypt during fiscal years 2011 through 2015—prelicense checks and postshipment verifications (hereafter, postshipment checks). State conducts prelicense checks prior to issuance of a license and conducts postshipment checks after an export has been approved and shipped. According to State’s Blue Lantern Guidebook, prelicense checks are generally used to verify the security of facilities where items may be temporarily or permanently housed and to ensure that the details of a proposed transaction match those on a license application, among other things. Postshipment checks are used to inquire with the end user about the specific use and handling of exported articles or about other follow-up matters related to the transaction and compliance with U.S. regulations and laws, among other things.

As shown in figure 4, State’s Directorate of Defense Trade Controls requests a Blue Lantern check in Egypt by sending a cable to U.S. Embassy Cairo. The cable may request that embassy personnel make inquiries to confirm the bona fides of the end user or other party to the transaction and may include specific questions for the embassy to ask the subject of the check. State officials at Embassy Cairo conduct the check by sending letters to the Egyptian government or another entity. When embassy personnel receive a response to their inquiries, they send a return cable with their findings. Directorate of Defense Trade Controls officials then determine whether to close the case favorably or unfavorably.
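To make the flow of a check concrete, the following minimal Python sketch models a single Blue Lantern case as it moves from the request cable to a closure decision. The field names, dates, and one-line closure rule are simplified, hypothetical illustrations, not State’s actual systems or decision criteria.

    # Minimal sketch of the Blue Lantern check flow described above.
    # All field names and data are hypothetical illustrations.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class BlueLanternCheck:
        check_type: str                 # "prelicense" or "postshipment"
        requested: date                 # date the request cable is sent to post
        inquiries_sent: list = field(default_factory=list)  # letters to the end user
        response: Optional[str] = None  # substance of any reply received
        closed: Optional[date] = None
        result: Optional[str] = None    # "favorable" or "unfavorable"

        def close(self, closed_on: date, bona_fides_confirmed: bool):
            # Simplified stand-in for the closure judgment: confirming the end
            # user's bona fides closes the case favorably; otherwise unfavorably.
            self.closed = closed_on
            self.result = "favorable" if bona_fides_confirmed else "unfavorable"

    # Example: a prelicense check requested by cable, answered, and closed.
    check = BlueLanternCheck("prelicense", date(2013, 6, 1))
    check.inquiries_sent.append("letter to the purchasing ministry")
    check.response = "end user and transaction details confirmed"
    check.close(date(2013, 9, 15), bona_fides_confirmed=True)
    print(check.result, "after", (check.closed - check.requested).days, "days")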
The results of Blue Lantern checks inform decisions on whether to approve licenses for the export of U.S. defense articles. State officials at Embassy Cairo responsible for Blue Lantern checks in Egypt during fiscal years 2013 through 2015 reported never having conducted a site visit to physically verify the security and end use of the items. According to State officials, the Egyptian government has resisted such visits, and Blue Lantern guidance does not require them.

From October 2010 to April 2015, 1,280 license applications were submitted for the permanent export of defense articles to Egypt through direct commercial sales. Of those 1,280 applications, State approved 937 licenses. In fiscal years 2011 through 2015, State conducted 12 Blue Lantern checks on Egyptian government entities that purchased a variety of security-related equipment through direct commercial sales, including missile equipment, explosives, satellite components, riot control items, and NVDs. Of these 12 checks, 8 were prelicense checks and 4 were postshipment checks, as shown in table 6.

State reported favorable results for 8 of the 12 checks and unfavorable results for 4. According to State guidance, if the critical questions have been answered satisfactorily, the transaction appears legitimate, and the bona fides of the end users or other parties are confirmed, the case will likely be closed as “favorable.” If the transaction’s legitimacy cannot be confirmed, the consignees or end user appear untrustworthy, or there are other troubling discrepancies, the case will likely be closed as “unfavorable.” The reasons for checks in Egypt closed as unfavorable include a lack of response from the Egyptian government and the government’s denial that it had ordered the equipment subject to the check, according to our analysis of State data.

While State completed 12 Blue Lantern checks in fiscal years 2011 through 2015, it faced challenges due to slow and incomplete responses from the Egyptian government and other factors such as periods of limited staffing at Embassy Cairo.

Slow responses to Blue Lantern checks and limited embassy staffing. Embassy Cairo completed 2 of the 12 Blue Lantern checks in Egypt in fiscal years 2011 through 2015 within State’s recommended time frames. According to the Blue Lantern Guidebook, prelicense checks should be completed within 30 days and postshipment checks within 45 days. In fiscal years 2011 through 2015, Embassy Cairo completed 1 of its 8 prelicense checks within State’s 30-day goal and 1 of its 4 postshipment checks within State’s 45-day goal. On average, during this period, State completed its Blue Lantern checks on Egyptian entities in about 134 days, with prelicense checks averaging about 105 days and postshipment checks averaging 191 days. During fiscal years 2011 through 2015, 7 of the 12 Blue Lantern checks on Egyptian entities took 100 or more days to complete, and 1 of these 7 took over 300 days (see table 7). According to State guidance, State should defer making a decision on a license application until the results of prelicense checks are received. We found that State generally complied with this guidance for prelicense checks in Egypt. Lengthy delays in completing prelicense checks can be costly to U.S. exporters and foreign end users and ultimately harm U.S. competitiveness, according to State guidance.
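The timeliness analysis above reduces to comparing each check’s completion time against the applicable goal. A minimal Python sketch, using hypothetical request and completion dates rather than actual Blue Lantern case data:

    # Minimal sketch of the timeliness comparison; all dates are hypothetical.

    from datetime import date

    GOALS = {"prelicense": 30, "postshipment": 45}  # State's goals, in days

    checks = [
        ("prelicense", date(2013, 5, 1), date(2013, 8, 20)),
        ("postshipment", date(2013, 6, 1), date(2014, 2, 10)),
        ("prelicense", date(2014, 1, 5), date(2014, 2, 1)),
    ]

    for check_type, requested, completed in checks:
        days = (completed - requested).days
        verdict = "within goal" if days <= GOALS[check_type] else "exceeded goal"
        print(check_type, days, "days:", verdict)

    average = sum((c - r).days for _, r, c in checks) / len(checks)
    print("Average completion time:", round(average), "days")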
According to State officials, some of the delays in completing Blue Lantern checks in Egypt were due to the Egyptian government’s slow responses to State’s inquiries. For example, in one postshipment check requested in June 2013, the Egyptian government did not respond to State’s questions for over 8 months, according to our analysis of State cables. This check involved thermal imagers, which are considered to be sensitive night vision equipment. As a result, State experienced substantial delays verifying the location, security, and end use of these items. In another postshipment check, requested in October 2013 and involving satellite components, State did not receive information from the Egyptian government that it deemed sufficient to close the check for more than 6 months, according to our analysis of State data; according to State officials, the satellite was launched before State was able to complete the check. Two other Blue Lantern checks, requested in May and June 2013, took over 100 days to complete. According to State officials, political instability in Egypt and tensions in the U.S.-Egypt relationship at the time affected the timeliness of the Egyptian government’s response to these four checks. According to a State official, there were delays in obtaining responses in the two most recent Blue Lantern prelicense checks, both of which involved NVDs going to the Egyptian Ministry of Interior, because inquiries first had to be routed through the Ministry of Foreign Affairs. For one of these checks, State did not receive a response to two of its initial Blue Lantern inquiries, and as a result, completing the check took 4 months.

State officials also noted that Embassy Cairo was under ordered departure status from July to November 2013, which limited staffing at the embassy, and that a staffing transition in 2014 also contributed to delays in completing some checks. For instance, limited staffing at Embassy Cairo during the latter half of 2013 affected the timeliness of four Blue Lantern checks active during that period. According to State officials, the lack of available staff at Embassy Cairo during the ordered departure caused embassy officials to request additional time to complete Blue Lantern checks and to request that at least one Blue Lantern check be put on a temporary hold during that period. As a result of the hold and the Egyptian government’s slow response time, this check took 8 months to complete. A State official also noted that a staffing transition in the summer of 2014 affected another check. In this case, Embassy Cairo took 52 days to contact the Egyptian government after receiving the Blue Lantern cable requesting the prelicense check, and as a result, the check took more than 60 days to complete.

Incomplete responses to some Blue Lantern checks. During fiscal years 2011 through 2015, the Egyptian government provided complete responses to 4 of 12 Blue Lantern checks. For 6 of the 12 checks, it provided partial responses to the questions asked by State officials from Embassy Cairo. For 1 of the 12 checks, the Egyptian government did not provide any response. For another 1 of the 12 checks, State officials from the Directorate of Defense Trade Controls did not receive a response from Embassy Cairo officials, so we were unable to determine whether Embassy Cairo made inquiries with the Egyptian government or whether the Egyptian government responded.
Directorate of Defense Trade Controls officials closed as unfavorable the 2 cases for which they received no response, both prelicense checks, and recommended that the license applications not be approved. Of the 6 Blue Lantern cases in which the Egyptian government partially responded to the questions asked by State officials at Embassy Cairo, State closed 5 as favorable. In one prelicense check, the Egyptian government did not respond to questions about the physical security of the storage site where it planned to store NVDs, nor did it specify the branch of the Ministry of Interior—an organization identified by State as having security force units of concern for human rights violations—that would be the end user of the NVDs. In another prelicense check, State asked the Egyptian government to confirm the involvement of intermediaries from two other countries in the transaction as well as the specific type and quantity of the items it had ordered; however, the Egyptian government confirmed the involvement of only one of the intermediaries and did not provide any information about the type and quantity of items ordered. In a postshipment check involving the transfer of riot control items, such as rubber ball cartridges and smoke grenades, to the Egyptian Ministry of Interior, the Egyptian government did not respond to a question about the disposition and use of the items, according to a cable from Embassy Cairo. State closed each of these three checks as favorable. Table 8 shows a breakdown of the completeness of responses for the 12 Blue Lantern checks and the corresponding counts of favorable and unfavorable results.

State officials offered various reasons to explain why the Egyptian government provided partial responses to Blue Lantern inquiries and why State did not make greater efforts to obtain complete responses. According to State officials, the Egyptian government is sensitive to questions that it views as possibly infringing on its sovereignty, including questions about its purchases of U.S. military equipment. In addition, one State official who has conducted Blue Lantern checks in Egypt noted that the Egyptian government may not entirely understand Blue Lantern inquiries. According to State officials, a response to every question posed in a Blue Lantern check is not necessary for a licensing determination. These officials also noted that if their key concerns have been addressed, they are generally comfortable closing the check. However, as previously noted, because State officials at Embassy Cairo do not conduct optional site visits, written responses from the Egyptian government provide the only available information on the use and security of equipment purchased through direct commercial sales. Without timely and complete information, State may not be able to provide reasonable assurance within its own recommended time frames that some recipients are storing, handling, and using this sensitive U.S. equipment properly.

State has two optional programs that are designed to facilitate cooperation and compliance with Blue Lantern end-use monitoring requirements. However, State has not used either program in Egypt since 2008, despite the Egyptian government’s limited cooperation in providing complete and timely responses to Blue Lantern inquiries. One such program consists of outreach visits conducted by State officials from the Directorate of Defense Trade Controls, who travel to meet with U.S.
embassy, host government, and local business officials to educate them about the Blue Lantern program and to elicit cooperation with Blue Lantern end-use monitoring. State guidance outlines six criteria that officials use when deciding whether to conduct a Blue Lantern outreach visit. We determined that Egypt meets three of these criteria:

High percentage of unfavorable results. The average global rate of unfavorable checks for the Blue Lantern program over the last 4 years reported (fiscal years 2011-2014) was 21 percent. Over the same period, 36 percent of Blue Lantern checks in Egypt (4 of 11) were closed as unfavorable. State conducted 1 Blue Lantern check in Egypt in fiscal year 2015 and closed it as favorable, which lowered the unfavorable rate for Blue Lantern checks in Egypt during fiscal years 2011 through 2015 to 33 percent (4 of 12).

Need to provide education on U.S. export law and regulations. According to our analysis of State documents and data, the Egyptian government provided partial responses to 6 Blue Lantern inquiries, gave no response to at least 1 inquiry, and did not respond to at least 5 inquiries within requested time frames in fiscal years 2011 through 2015. As previously mentioned, this resulted partly from a lack of understanding about the program, according to a State official.

No prior outreach visit. According to State officials in Washington, D.C., and Cairo, Egypt, as of November 2015, no Blue Lantern outreach visit had been conducted in Egypt since 2008, prior to Egypt’s political transitions in 2011 and 2013.

In addition, in July 2015, State introduced the Blue Lantern Post Support Program, which provides funding for outreach and educational activities and events hosted by U.S. embassies or consulates to improve compliance with U.S. export control laws and regulations, among other things. State guidance for this program notes that when considering and prioritizing proposals to fund, State will consider various criteria, including the host government’s cooperation with Blue Lantern checks and the government’s level of understanding of U.S. commercial defense trade controls. According to a State official who conducted Blue Lantern checks in Egypt in 2013, outreach to the Egyptian government on the Blue Lantern program would be worthwhile. However, Embassy Cairo did not submit a proposal for funding under the Blue Lantern Post Support Program in 2015. State officials said they did not conduct any outreach in Egypt in fiscal years 2011 through 2015 because the number of Blue Lantern checks conducted in Egypt during this period was small and because they prioritized outreach to other countries. Without the cooperation of the Egyptian government, State may continue to face challenges in obtaining complete and timely responses to Blue Lantern inquiries.

The U.S. government completed human rights vetting for 5,581 Egyptian security forces before providing U.S.-funded training from fiscal year 2011 through March 31, 2015; however, our analysis of a sample of names from training rosters of Egyptian security forces who received U.S.-funded training shows that the U.S. government did not complete all required vetting prior to providing training, in violation of State’s and DOD’s policies. In contrast to State’s vetting requirements for training, State’s policies and procedures encourage, but do not specifically require, vetting for foreign security forces that receive U.S.-funded equipment, including those in Egypt.
The primary method State uses in Egypt to comply with Leahy law requirements when providing equipment is to attest in memos that it is in compliance with those requirements. Various factors have posed challenges to the U.S. government’s efforts to vet recipients of U.S. assistance. Gaps and uncertainties in information have made it challenging for U.S. officials to vet some cases before providing training. Additionally, State has not established procedures for clearing smaller units or individuals within a larger unit that has been deemed ineligible to receive assistance. Finally, Embassy Cairo has recorded little information on human rights abuses by Egyptian officials in INVEST since the beginning of fiscal year 2011, despite State requirements to do so.

As shown in table 9, the U.S. government completed human rights vetting for a total of 5,581 Egyptian individuals or units before providing training from fiscal year 2011 through March 31, 2015. State approved training for approximately 90 percent of the Egyptian security forces it vetted during this period.

State suspended about 9 percent of the vetting cases for Egyptian security forces, some for administrative reasons and others because of potentially derogatory information related to the individuals or units being vetted that could not be resolved before the start of the planned training. In some cases, this potentially derogatory information related to human rights abuses; in others, it related to other concerns, such as involvement in terrorism. For example, State suspended an individual in fiscal year 2015 because it was unable to clear the individual’s unit from involvement in torture at a military prison prior to the start of the planned training. According to State, Embassy Cairo was subsequently able to provide additional information to DRL that cleared this individual’s unit from involvement in the incident at the prison. Embassy Cairo then resubmitted the individual for vetting, and State approved him to participate in a different course later in fiscal year 2015. State also suspended an individual in fiscal year 2015 because it had identified potential terrorism links that could not be ruled out prior to the start of the planned training. According to State officials, Embassy Cairo found no information credibly linking this person to terrorism activity; however, before further checks could be conducted, he was dropped from the course at the request of OMC-E to avoid holding up other participants.

State also rejected training for Egyptian security forces in a limited number of cases due to credible information of gross violations of human rights. State rejected a total of 18 cases from fiscal year 2011 through March 31, 2015—less than 1 percent of the total cases vetted. It has not rejected any cases since fiscal year 2013, including none since the removal of President Morsi in July 2013. According to State officials, these rejections were related both to acts committed by specific individuals and to credible information about gross violations of human rights involving an individual’s unit. According to State officials, State has rejected a limited number of cases, and none since fiscal year 2013, due in part to problematic units and individuals being filtered out by the embassy before they are formally submitted for training.
Finally, State cancelled 34 cases (less than 1 percent of the total cases vetted) for administrative reasons from November 2014, when this new disposition option was created in INVEST, through March 2015. According to State officials, these cancellations were due to issues such as training courses being cancelled and data entry errors.

We determined that the U.S. government did not conduct required vetting before providing training for some of the Egyptian security forces that were trained with U.S. security-related assistance from seven accounts from fiscal year 2011 through March 31, 2015. To make this determination, we selected a generalizable stratified random sample of 166 names from training rosters of Egyptian security forces who received training funded through these seven accounts during this period. We then cross-checked the 166 names in our sample with human rights vetting data from the INVEST system to verify that the Egyptian security forces were vetted before receiving the training. State deemed our estimate of the percentage of Egyptian security forces that were not vetted and some aspects of the methodology we used to generate this estimate to be sensitive but unclassified information. We therefore omitted that information from this report. By not conducting all required human rights vetting prior to providing U.S. training to Egyptian security forces, State and DOD are not in compliance with their policies regarding human rights vetting.

In addition to examining rosters for training funded through the seven accounts, we requested training rosters from State for Egyptian security forces who had received training funded through the NADR account; however, State was unable to provide this information. We therefore did not include this account in our analysis. While INVEST data show that State vetted a number of Egyptian security forces that received NADR-funded training from fiscal year 2011 through March 31, 2015, without the NADR training roster we were not able to assess the extent to which State completed all required human rights vetting for Egyptian security forces that received NADR-funded training. State’s Foreign Affairs Manual notes, among other things, the importance of producing and maintaining adequate documentation of the agency’s activities. In addition, Standards for Internal Control in the Federal Government states that agencies should clearly document transactions and all significant events, that the documentation should be readily available for examination, and that all documentation should be properly managed and maintained. Without the training roster for the NADR account, State cannot provide assurance that all Egyptian security forces that received NADR-funded training were vetted as required.

The State and DOD Leahy laws’ prohibition against providing assistance to units of foreign security forces for whom there is credible information of a gross violation of human rights also applies to equipment. However, unlike its required process for vetting individuals and units nominated to receive U.S.-provided training (see fig. 1), State does not have policies or procedures specifically requiring vetting of Egyptian security forces slated to receive U.S.-funded equipment. State policy encourages the use of the INVEST system to conduct vetting for equipment recipients but allows posts the flexibility to use other methods to comply with the Leahy laws.
For Egypt, State uses memos to attest to its compliance with the Leahy laws for equipment provided to Egyptian security forces. While the memos declare State’s compliance with the Leahy laws, State officials acknowledged that there is no required process used to support the statements in the memos and that INVEST is not used to vet Egyptian recipients of U.S. equipment.

We reviewed the eight memos that State drafted for fiscal years 2011 through 2015 that covered all FMF assistance allocated for Egypt during this period. The purpose of these memos is to request the Office of Management and Budget’s approval for the apportionment of FMF funds allocated for Egypt. In each of the memos, State included a statement that it was not aware of any credible information of gross violations of human rights by any unit to which assistance would be provided. More recent memos also included a statement that State would ensure that FMF assistance for Egypt would be provided only to units the department had positively determined not to have been linked to human rights violations. None of the memos we reviewed specified particular Egyptian units that were authorized to receive the FMF assistance covered by the memo. However, two of the eight memos we reviewed identified particular Egyptian security forces that would not be receiving assistance covered by the memo. For example, a September 2013 memo requesting the apportionment of approximately $584 million in fiscal year 2013 FMF funds stated that no funds would be used to support the Cairo military police. State’s three memos requesting the apportionment of fiscal years 2014 and 2015 funds noted that violent incidents in Egypt in July and August 2013 and the Egyptian military’s operations in the Sinai remained under review.

State’s Bureau of Political-Military Affairs is responsible for drafting the FMF memos for Egypt, including the statements regarding compliance with the Leahy laws; however, bureau officials said that the bureau does not play a role in supporting the statements in the memos and that this is the responsibility of DRL. The DRL official responsible for reviewing these memos told us that he may check the unit names of the prospective equipment recipients, if that information is available at the time the memos are drafted, to see if he is aware of human rights concerns with any of the recipient units; however, State officials told us that the specific items to be financed and the specific units or individuals to receive the items are not generally known at the time the memos are circulated and may not be known until many months or even years later. According to State officials, this is due to the Foreign Military Sales process, which can involve lengthy negotiations with Egypt and other partner countries about their requirements and extended contracting processes for complex military systems. Also, according to State officials, the sale and transfer of equipment is often concluded in the United States, with the Egyptian government then responsible for freighting the equipment to Egypt. This can delay the final Egyptian recipients’ receipt of the equipment, according to State officials. In addition, State does not currently have policies or procedures in place to require vetting after the equipment has been furnished to the Egyptian government and the ultimate end-user unit or individual is known, according to State officials.
State officials noted that in some cases, Egyptian security forces receive training in association with equipment that they are provided and are thus vetted through INVEST before receiving the training. Finally, State officials said that in cases where there is no direct recipient for U.S. assistance, such as when bulk equipment and other forms of assistance (e.g., ammunition, uniforms, radios, spare parts) are provided to a country’s military services or the armed forces as a whole for general use, it is a challenge for State to verify the identity of the final recipients of this equipment.

Additionally, key officials with information about human rights violations in Egypt are not involved in drafting and reviewing these memos. Embassy Cairo officials who are responsible for managing Leahy vetting at the post stated that they do not play a role in the development of these memos. These Embassy Cairo officials stated that it was not clear what role, if any, the post should be playing in ensuring Leahy law compliance for Egyptian equipment recipients and that more guidance from State headquarters on this issue would be beneficial. NEA and DRL officials in State headquarters who were responsible for conducting human rights vetting for Egyptian training recipients also stated that they do not play a role in ensuring Leahy law compliance for equipment provided to Egyptian security forces.

We previously reported on State’s use of memos to comply with Leahy law requirements for equipment in a 2011 report examining human rights vetting in the Persian Gulf countries. In that report, we found that State did not conduct human rights vetting for recipients of equipment comparable to the vetting it conducted for recipients of training. We recommended that State implement individual- and unit-level human rights vetting for recipients of U.S.-funded equipment to reduce the risk that U.S.-funded equipment might be used by violators of human rights in the Persian Gulf countries. State concurred with our recommendation, but as of November 2015, State had not implemented it.

DRL officials we interviewed acknowledged that the current approach to complying with the Leahy laws for equipment needs to be strengthened to ensure that equipment will not be provided to security forces that have committed gross violations of human rights. According to DRL officials, State is continuing to work to develop and implement a comprehensive policy on equipment vetting that is different from the current, memo-based procedure, but it has not established a specific time frame for doing so. In addition, a DRL official noted that DRL is working on a revised version of the INVEST system, to be completed in May 2016, that is expected to help facilitate equipment vetting. However, the DRL official stated that DRL has not made specific determinations about what functions related to equipment vetting will be included in the updated system or about time frames for developing agency policies and procedures that would require use of the updated system for equipment vetting. Standard practices in program management include, among other things, developing a plan to execute projects within a specific time frame. Because State has not developed policies or procedures specifically requiring vetting for Egyptian recipients of U.S. equipment, it is more difficult to reasonably ensure that U.S. equipment will not be provided to Egyptian units for which there is credible information of gross violations of human rights.
This increases the risk that State may violate the prohibition in the Leahy laws as well as its own policy that it should ensure that its programs are efficiently and effectively carried out in accordance with applicable laws.

Various factors affected State’s implementation of the Leahy laws. For example, gaps and uncertainties in information have challenged U.S. efforts to vet candidates for training. In addition, the Egyptian government has routinely been unwilling to provide information that would facilitate the vetting process, according to State officials. Moreover, State has not established procedures for clearing smaller units or individuals within a larger unit that has been deemed ineligible to receive assistance. Finally, Embassy Cairo has recorded little information about human rights abuses in Egypt in the INVEST system, despite State requirements to do so.

Embassy Cairo and State officials noted that the Egyptian government has routinely been unwilling to provide information that would facilitate the vetting process. According to U.S. officials, the Egyptian government sometimes does not provide the unit-specific information necessary to complete vetting. Embassy Cairo and State headquarters officials also stated that the Egyptian government was unwilling to provide organizational charts and other information for certain key ministries that would facilitate the vetting process. For example, U.S. officials said that more detailed organizational charts for the Egyptian Ministry of Interior (MOI) would help the U.S. government differentiate between, on one hand, MOI subunits that were of concern for gross violations of human rights and thus were not able to receive U.S. assistance and, on the other hand, those MOI subunits that were not likely involved in the incidents for which State had credible information that human rights violations had occurred.

The State Leahy law requires that State develop procedures to ensure that when an individual is designated to receive U.S. training or other assistance, the individual’s unit is vetted as well as the individual. State guidance notes that country security assistance teams should be well informed about the force structure and unit descriptions for the security forces with which they work and thus should be able to provide the appropriate unit-level identification for vetting purposes. State guidance directs embassies to work with host-country counterparts to identify units for the purposes of vetting and notes the importance of host-country cooperation.

Embassy Cairo officials noted that issues related to Leahy vetting have been a significant source of tensions with the Egyptian government. In some cases, Egyptian agencies refused to allow any of their members to attend training events if any individuals from their organizational unit failed to clear vetting. In addition, embassy officials acknowledged that they stopped proposing training to the Egyptian government under certain programs, such as NADR ATA, because they did not want to risk causing further strain in the bilateral relationship if Egyptian officials were not approved through the vetting process. U.S. officials noted that this cessation in submitting candidates for training was one of the reasons that, since fiscal year 2013, no Egyptian security forces had been rejected through the Leahy vetting process due to credible information of a gross violation of human rights.
During our fieldwork in Egypt, we requested to meet with several Egyptian government ministries to obtain their perspective on the Leahy vetting program, but the Egyptian government did not respond to our request, according to State officials. Additionally, although the Egyptian government initially approved our request to meet with officials of the Egyptian Training Authority to discuss Leahy vetting for Egyptian military students, the government subsequently decided not to hold the meeting.

When units have been rejected through the vetting process and deemed ineligible to receive assistance under the Leahy laws, the U.S. government cannot provide those units further assistance unless the requirements for an exception have been met. In February 2015, State and DOD issued a joint remediation policy that outlined standards for exercising these exceptions in the Leahy laws and allowing assistance to resume to units previously deemed ineligible. However, State officials we interviewed said that if a larger unit is rejected in INVEST, it is possible for smaller units within that larger unit to be subsequently approved for training without having to meet the standards in the February 2015 State-DOD guidance, if it can be demonstrated that these smaller units were not implicated in the gross violation of human rights. This is consistent with State’s 2012 Leahy vetting guide, which states that the relevant unit for vetting purposes is the lowest deployable organizational element of a security force capable of exercising command and discipline over its members. State officials said the policy allows State to approve training for smaller units or individuals within a larger unit that is ineligible to receive assistance, if it can be demonstrated that the smaller units or individuals, by the nature of their duties, geographic location, or other circumstances, would not have been involved in the gross violation of human rights. According to U.S. officials, this approach has been used in Egypt on certain occasions. State has also used this approach in cases where units have not been officially rejected in INVEST but have been suspended because State has identified human rights concerns with the unit and there is a lack of sufficient information to complete vetting for the nominated individuals or units.

While this approach has been used in Egypt, and although State’s policy allows smaller units to be identified as discrete units for purposes of Leahy vetting, State has not established specific procedures for clearing smaller units within a larger security force organization that has been rejected due to credible information of a gross violation of human rights. For example, State’s Leahy vetting guide, Embassy Cairo’s standard operating procedures, and State and DOD’s joint remediation guidance do not specifically discuss the ability to clear such units and do not establish procedures for doing so. State’s Foreign Affairs Manual highlights the importance of ensuring that key policies are documented. Without established procedures for clearing smaller units within larger organizations that have been deemed ineligible to receive assistance due to a gross violation of human rights, Embassy Cairo and other embassies do not have clear guidance on the extent to which they are able to use this option and in what situations it is or is not appropriate to do so.
The State Leahy law requires State to establish procedures to ensure that information on gross violations of human rights by security force units is evaluated and preserved. In addition, State’s 2012 Leahy vetting guide states that embassies in particular are required to populate the INVEST system—in conjunction with vetting or otherwise—with information on human rights abuses as these abuses come to light. However, Embassy Cairo has recorded limited information on human rights abuses by security forces in Egypt in INVEST since the beginning of fiscal year 2011, despite State’s findings of a range of human rights abuses by security forces in Egypt and despite State having vetted thousands of cases since then. As of October 2015, Embassy Cairo had uploaded only three documents to INVEST since fiscal year 2011 and no documents since fiscal year 2013, according to DRL officials. DRL officials stated that, despite the requirement in State’s Leahy guide, it is common for posts not to use the document library function in INVEST and, instead, to maintain this information in other formats, such as the spreadsheets that Embassy Cairo officials told us they use to track individuals and units of concern. However, by not uploading relevant information into INVEST, Embassy Cairo is not maintaining a centralized repository of information on human rights abusers in Egypt that can be used by others in the agency. Without a centralized repository of this information, State cannot be assured that all current and future officials vetting cases in INVEST will have the information needed to make informed and timely decisions regarding whether to approve Egyptian security officials for U.S.-funded training.

The United States provides about $1.3 billion in security-related assistance to Egypt annually. DOD and State established the Golden Sentry and Blue Lantern programs, respectively, to provide reasonable assurance that military equipment transferred or exported to foreign governments is used for its legitimate intended purposes and does not come into the possession of individuals or groups who pose a threat to the United States or its allies. However, gaps in the implementation of these end-use monitoring programs—in part due to limited cooperation from the Egyptian government—hamper DOD’s and State’s ability to provide such assurances. For instance, the Egyptian government’s incomplete and slow responses to U.S. inquiries hindered State’s efforts to ensure that equipment sold through direct commercial sales is used as intended. State has recently made funding available for activities to foster greater host government cooperation with Blue Lantern requirements in some countries. However, such activities have not been used in Egypt to help improve the completeness and timeliness of these end-use monitoring checks targeting U.S. arms and other military items sold through direct commercial sales.

The United States has a policy interest in leveraging U.S. assistance to encourage Egypt and other foreign governments to prevent their security forces from committing human rights violations and to hold their forces accountable when violations occur. However, the U.S. government has not consistently vetted all individuals and units in the Egyptian security forces for human rights concerns before providing training, as required by its policies.
State also does not have policies or procedures for vetting specific individuals and units before it provides equipment, even though military equipment constitutes the vast majority of U.S. assistance to Egypt. Without such vetting, the U.S. government risks providing U.S. equipment, in violation of the Leahy laws, to Egyptian security forces that have committed human rights abuses. Additionally, gaps in documentation and procedures may limit the effectiveness of State’s process for vetting prospective recipients of training. The absence of certain training rosters for Egyptian security forces that received U.S. training limits the ability of U.S. agencies and third parties to verify whether these forces were properly vetted in accordance with State’s policies. This also limits accountability over U.S. efforts to train and equip Egyptian security forces. State’s lack of procedures for determining when individuals or subunits may be eligible to receive training, despite being part of larger entities prohibited from receiving assistance under the Leahy laws, increases the likelihood that these determinations may be applied inconsistently. Finally, Embassy Cairo’s minimal use of the INVEST system as a centralized repository for information on human rights abuses in Egypt limits the availability of relevant information to other U.S. officials conducting human rights vetting of candidates for U.S.-funded training in Egypt.

To strengthen assurances that military equipment sold through direct commercial sales is used as intended, we recommend that the Secretary of State take the following action:

Utilize available Blue Lantern outreach programs to help improve the completeness and timeliness of responses from the Egyptian government.

To strengthen compliance with the Leahy laws and implementation of State’s human rights vetting process and to help ensure that U.S.-funded assistance is not provided to Egyptian security forces that have committed gross violations of human rights, we recommend that the Secretary of State take the following two actions:

Determine, in consultation with the Secretary of Defense, the factors that resulted in some Egyptian security forces not being vetted before receiving U.S. training, and take steps to address these factors, to ensure full compliance with human rights vetting requirements for future training.

As State works to implement a revised version of the INVEST system that is expected to help facilitate equipment vetting, develop time frames for establishing corresponding policies and procedures to implement a vetting process to help enable the U.S. government to provide a more reasonable level of assurance that equipment is not transferred to foreign security forces, including those in Egypt, when there is credible information that a unit has committed a gross violation of human rights.

To strengthen State’s documentation and procedures related to its human rights vetting process, we recommend that the Secretary of State take the following three actions:

Take steps to ensure that State maintains training rosters or similar records of Egyptian security forces that have received U.S.-funded training to allow verification that required human rights vetting was completed before the individuals or units received the training.

Issue guidance establishing procedures for determining when subunits—and individuals within those subunits—are eligible to receive U.S. assistance when they are part of a larger unit that has been deemed ineligible to receive assistance under the Leahy laws.

Direct Embassy Cairo to comply with the State requirement to record relevant information it obtains regarding gross violations of human rights in INVEST.

We provided a draft of the sensitive but unclassified version of this report to the Departments of State, Defense, Homeland Security, and Justice for review and comment. State and the Departments of Homeland Security and Justice provided technical comments, which we incorporated as appropriate. State also provided written comments, which are reproduced in appendix VI. State generally concurred with our recommendations. DOD did not provide comments.

State agreed with our recommendation to utilize available Blue Lantern outreach programs to help improve the completeness and timeliness of responses from the Egyptian government and noted that it would do so, subject to restrictions on travel to Egypt and any limitations inherent in the United States’ current political relations with the Egyptian government. State also agreed with our recommendation to determine and address the factors that led to some Egyptian security forces not being vetted before receiving training and asserted that the department remains committed to ensuring that perpetrators of gross violations of human rights do not receive U.S. training or assistance. Additionally, State agreed with our recommendation to develop time frames for establishing policies and procedures to provide a more reasonable level of assurance that the department is complying with the Leahy laws for recipients of equipment. Although State acknowledged challenges identifying recipients of equipment across the range of assistance activities, it noted that it would continue to update its systems—including a new version of the INVEST system—and procedures to facilitate human rights vetting for recipients of equipment.

State partially agreed with our recommendation to maintain training rosters or other records of Egyptian security forces that have received U.S.-funded training. State indicated that it would attempt to implement this recommendation but noted that resource constraints at Embassy Cairo may hinder its ability to do so. State also partially agreed with our recommendation to develop policies and procedures for determining when individuals and subunits may receive U.S. assistance while part of larger units that have been deemed ineligible to receive assistance. While State acknowledged that criteria for making these determinations are not covered in its guidance, it noted that it already takes such considerations into account on a case-by-case basis during internal policy deliberations to restrict or deny assistance and is currently discussing revisions to its guidance regarding this issue. State agreed with our recommendation that Embassy Cairo comply with the State requirement to record relevant information it obtains regarding gross violations of human rights in INVEST. Accordingly, State noted that it would maintain in INVEST, and periodically update, a version of the spreadsheet it uses to track Egyptian security force units of concern and other allegations of human rights abuses.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date.
At that time, we will send copies of this report to the appropriate congressional committees; the Secretaries of State, Defense, and Homeland Security; and the Attorney General of the United States. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7331 or [email protected]. GAO staff who made contributions to this report are listed in appendix VII.

The objectives of this review were to examine, for fiscal years 2011 through 2015, the extent to which the U.S. government (1) committed or disbursed funds allocated for security-related assistance for Egypt, (2) implemented end-use monitoring for equipment transferred to Egyptian security forces, and (3) vetted Egyptian recipients of U.S. security-related assistance for human rights concerns.

To determine the extent to which the U.S. government committed or disbursed funds allocated for security-related assistance to Egypt in fiscal years 2011 through 2015, we collected and analyzed data from the Department of State’s (State) Office of U.S. Foreign Assistance Resources, by appropriation account, on allocations, unobligated balances, unliquidated obligations, and commitments or disbursements. Recognizing that different agencies and bureaus may use slightly different accounting terms, we provided State with definitions from GAO’s A Glossary of Terms Used in the Federal Budget Process and requested that it provide the relevant data according to those definitions. The data State provided were as of the end of fiscal year 2015. State provided data on bilateral security assistance from the Foreign Military Financing (FMF); International Military Education and Training (IMET); International Narcotics Control and Law Enforcement (INCLE); and Nonproliferation, Anti-terrorism, Demining, and Related Programs (NADR) accounts. Because FMF funds are budgeted and tracked differently than funds in other foreign assistance accounts, State provided data on FMF funding that was uncommitted or committed rather than data on unliquidated obligations and disbursements. To assess the reliability of the data provided, we requested and reviewed information from State regarding the agency’s underlying financial data systems and the checks, controls, and reviews used to ensure the accuracy and reliability of the data. We determined that the data State provided were sufficiently reliable for the purposes of this report. To gather additional information on the status of assistance to Egypt, we interviewed State and Department of Defense (DOD) officials and reviewed agency documents to identify factors that contributed to any unobligated balances and unliquidated obligations. Finally, we identified any relevant legal authorities related to these accounts, including the periods of availability for funds to be obligated from each of these accounts.

To determine the extent to which the U.S. government implemented end-use monitoring for equipment transferred to Egyptian security forces, we reviewed agency guidance, analyzed end-use monitoring data and documentation, interviewed U.S. and Egyptian officials, and conducted on-site inspections of military equipment during fieldwork in Egypt in June 2015.
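The funds-status terms in the preceding discussion relate to one another in a simple way: the unobligated balance is the portion of an allocation not yet obligated, and unliquidated obligations are the portion of obligations not yet disbursed. The following minimal Python sketch illustrates those relationships with hypothetical amounts, not actual Egypt assistance figures:

    # Simplified illustration of the budget relationships underlying the
    # funds-status analysis; all amounts are hypothetical.

    allocation = 1_300_000_000      # funds allocated to an account
    obligations = 1_100_000_000     # binding commitments against the allocation
    disbursements = 900_000_000     # payments actually made

    unobligated_balance = allocation - obligations          # not yet obligated
    unliquidated_obligations = obligations - disbursements  # obligated, not yet paid

    print("Unobligated balance:", unobligated_balance)
    print("Unliquidated obligations:", unliquidated_obligations)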
To determine the extent to which DOD implemented Golden Sentry end-use monitoring for equipment transferred to Egyptian security forces through government-to-government programs, we reviewed relevant program guidance in the Security Assistance Management Manual and standard operating procedures used by the Office of Military Cooperation–Egypt (OMC-E). We also reviewed the terms and conditions of the Letters of Offer and Acceptance for transfers of U.S.-origin night vision devices (NVD) to Egyptian security forces, the Egyptian Ministry of Defense’s June 2012 control plan for the physical security and accountability of NVDs, and DOD’s April 2013 criteria for end-use monitoring of man-portable NVDs. We reviewed a report summarizing the findings from DOD’s February 2012 Compliance Assessment Visit in Egypt, two U.S. Central Command Inspector General reports for OMC-E, and correspondence from OMC-E to the Egyptian Ministry of Defense communicating end-use monitoring findings. We interviewed or obtained written information from DOD officials in the Defense Security Cooperation Agency, Defense Technology Security Administration, and U.S. Central Command. During fieldwork in Cairo, Egypt, from June 7 through June 11, 2015, we interviewed DOD officials from OMC-E and Egyptian officials from the Egyptian Armament Authority, a unit within the Egyptian Armed Forces responsible for overseeing the procurement of U.S. military equipment and communicating end-use monitoring requirements to units that use this equipment, according to Egyptian officials. We reviewed and analyzed data and management reports from DOD’s Security Cooperation Information Portal database to identify defense articles provided to Egypt and determine compliance with enhanced end-use monitoring inventory requirements. We compared the data with management reports and other documents and determined that the data were sufficiently reliable for the purposes of our analysis. Using data provided by DOD, we drew a random sample of Stinger missiles out of the total number that the U.S. government had transferred to Egypt through government-to-government programs as of April 2015, and we inventoried the missiles in our sample by serial number during fieldwork in Egypt. Our sample was generalizable to the population of Stinger missiles available for observation. We also requested to inventory a sample of NVDs subject to enhanced end-use monitoring during our fieldwork in Egypt, but we were unable to complete this inventory because the NVD storage facility we visited housed NVDs that were not subject to enhanced end-use monitoring. During our fieldwork in Egypt, we also observed DOD officials conducting routine end-use monitoring for F-16 aircraft. To assess evidence of enhanced and routine end-use monitoring in Egypt, we reviewed enhanced end-use monitoring physical security and accountability checklists and routine end-use monitoring reports. To determine the extent to which State implemented Blue Lantern end-use monitoring for equipment transferred to Egyptian security forces through direct commercial sales, we reviewed State guidance on the Blue Lantern program, including the Blue Lantern Guidebook and the Standard Operating Procedures for the Blue Lantern program. We also reviewed relevant cables on the Blue Lantern program. To determine the timeliness and completeness of responses to Blue Lantern checks, we reviewed the cables associated with each Blue Lantern check conducted in Egypt in fiscal years 2011 through 2015.
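The serial-number inventory described above for the Stinger missile sample amounts to a simple random sample of serial numbers checked against what is physically sighted. The sketch below is a minimal illustration of that idea, not GAO's actual sampling tool; the serial numbers, counts, and the one "missing" item are all hypothetical, and the real population and sample sizes reside in DOD's records.

```python
# Minimal sketch of a serial-number inventory based on a simple random
# sample, as described above. Serial numbers, counts, and the unlocated
# item are hypothetical.
import random

transferred = [f"SN-{i:05d}" for i in range(1, 601)]  # hypothetical transfer records
rng = random.Random(2015)                             # fixed seed for a repeatable draw
sample = rng.sample(transferred, 50)                  # simple random sample of serials

# Serial numbers actually sighted during the on-site inventory (hypothetical:
# assume one sampled item could not be located).
sighted = set(sample) - {sample[0]}

unaccounted = sorted(set(sample) - sighted)
print(f"{len(sample)} items sampled; {len(unaccounted)} not located: {unaccounted}")
```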
We also reviewed correspondence between the U.S. embassy in Cairo—in this report, “Embassy Cairo”—and the Egyptian government on Blue Lantern checks conducted from July 2014 to September 2015. To determine the number, type, and results of Blue Lantern checks conducted on Egyptian entities in fiscal years 2011 through 2015, we obtained and analyzed Blue Lantern data for Egypt. We also used these data in our analysis of the length of time it took to complete Blue Lantern checks, the commodities subject to the checks, and the reasons for unfavorable Blue Lantern checks in Egypt in fiscal years 2011 through 2015. We reviewed the information in the Blue Lantern cables for consistency with corresponding data in the Blue Lantern database and determined that the Blue Lantern data were sufficiently reliable for the purposes of our analysis. In addition, we analyzed State’s direct commercial sales licensing data on defense articles exported to Egypt to identify the number of licenses going to Egyptian end users and intermediaries from fiscal year 2011 to April 2015, as well as State’s determinations on such licenses subject to Blue Lantern checks during that period. We interviewed State officials in the Directorate of Defense Trade Controls in Washington, D.C., who are responsible for managing the Blue Lantern program as well as the State official at Embassy Cairo who is responsible for conducting Blue Lantern checks in Egypt. In addition, we interviewed a State official who conducted Blue Lantern checks in Egypt from December 2012 to April 2014 to obtain information on the extent to which Egypt’s 2013 political transition affected Blue Lantern checks conducted during that time. To assess the extent to which the U.S. government vetted Egyptian security forces for human rights concerns, we reviewed both the State and DOD Leahy laws. In addition, we analyzed State documents establishing its policies and procedures for complying with the Leahy laws and conducting human rights vetting. For example, we analyzed State’s 2012 Leahy vetting guide, State’s 2010 International Vetting and Security Tracking (INVEST) user guide, and a number of other relevant State cables and policy documents issued since the beginning of fiscal year 2011 that establish further requirements or provide additional guidance on various aspects of the human rights vetting process. We also analyzed DOD’s 2014 implementation guidance for the DOD Leahy law. In addition, we assessed Embassy Cairo’s 2014 Guide for Leahy Law Human Rights Vetting, which establishes the embassy’s standard operating procedures for complying with the Leahy laws. To gather additional information on human rights vetting in Egypt, we conducted interviews with State officials from the Bureau of Democracy, Human Rights, and Labor (DRL) and the Bureau of Near Eastern Affairs (NEA) who are responsible for conducting or overseeing human rights vetting in Washington, D.C. DRL officials also provided us a demonstration of the INVEST system. At Embassy Cairo, we interviewed State officials from the Political Section who oversee human rights vetting at the post. To gather further information on the human rights vetting process at Embassy Cairo, we interviewed State officials from the International Narcotics and Law Enforcement Section and the Regional Security Office, DOD officials from OMC-E, Department of Homeland Security officials from Immigration and Customs Enforcement and U.S.
Customs and Border Protection, and Department of Justice officials from the Federal Bureau of Investigation and the Drug Enforcement Administration. These officials were responsible for vetting Egyptian security officials for training, for sponsoring training that required Egyptian participants to be vetted, or for both. During our fieldwork in Egypt, we also requested to meet with officials of several Egyptian government ministries to obtain their perspective on the Leahy laws and U.S. government human rights vetting efforts; however, according to an Embassy Cairo official, the Egyptian government did not respond to our request. The Egyptian government initially approved our request to meet with officials of the Egyptian Training Authority to discuss human rights vetting for Egyptian military students; however, the Egyptian Training Authority later declined to participate in the meeting. We also analyzed State data on human rights vetting results in Egypt from the INVEST system for fiscal year 2011 through March 31, 2015, to determine the extent to which State approved, rejected, suspended, or cancelled vetting cases for Egyptian officials nominated for U.S.-funded training. To assess the reliability of the INVEST data, we reviewed documentation on the INVEST system and conducted interviews with State officials knowledgeable of the system. We determined that the INVEST data State provided were sufficiently reliable for the purposes of this report. To assess the extent to which the U.S. government conducted required vetting of Egyptian security officials before they received U.S.-funded training, we collected rosters of Egyptian security forces that received U.S.-funded training from State and DOD in fiscal year 2011 through March 31, 2015. In total, we received rosters for training funded through seven appropriations accounts. These seven accounts included four State accounts—FMF, IMET, INCLE, and Peacekeeping Operations—and three DOD accounts—the Countering Terrorism Fellowship Program, the DOD Regional Centers, and Joint Combined Exchange Training. Using these training rosters, we drew a generalizable random sample of 166 names from a population of 3,743 Egyptian security forces that received training funded through these seven accounts during this period. The sample included names from the roster for each of the seven accounts. We then cross-checked the names in our sample with human rights vetting data from the INVEST system to verify that the Egyptian security forces were vetted before receiving the training. In addition to receiving rosters for training funded through the seven accounts, we also requested training rosters from State on Egyptian security forces that had received training funded through the NADR account. However, State told us that it was unable to provide this information. As a result, we could not include the NADR account in our sample or assess the extent to which State had completed required human rights vetting for Egyptian security forces that received training funded through this account. To gather additional information on how State ensures compliance with the Leahy laws for equipment that it provided to Egyptian security forces, we also reviewed eight State apportionment memos covering all FMF assistance for Egypt in fiscal years 2011 through 2015. We assessed the extent to which each of these memos addressed Leahy vetting compliance in Egypt.
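The roster cross-check described above is, in essence, a set-membership test over a random sample. The sketch below illustrates the mechanics under stated assumptions: the names, vetting records, and results are hypothetical, and a plain Python set stands in for INVEST records; only the population size (3,743) and sample size (166) mirror the figures reported above.

```python
# Illustrative cross-check of sampled trainees against vetting records, as
# described above. Names, vetting records, and results are hypothetical.
import random

rosters = [f"trainee_{i:04d}" for i in range(1, 3744)]  # combined training rosters
vetted = set(random.Random(1).sample(rosters, 3600))    # stand-in for INVEST vetting records

sample = random.Random(7).sample(rosters, 166)          # generalizable random sample
not_vetted = [name for name in sample if name not in vetted]

share = len(not_vetted) / len(sample)
print(f"{len(not_vetted)} of {len(sample)} sampled trainees lack a vetting record "
      f"(estimated unvetted share: {share:.1%})")
```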
Finally, we assessed State’s actions to ensure compliance with the Leahy laws against standards in its Foreign Affairs Manual related to creating records and management controls. We also assessed State’s actions against established internal control standards in the federal government and against standards established by the Project Management Institute. We conducted this performance audit from February 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. DOD’s Defense Security Cooperation Agency administers the Golden Sentry program to monitor the end use of defense articles and defense services transferred through Foreign Military Sales. Under this program, DOD implements two levels of end-use monitoring—enhanced and routine—and conducts periodic Compliance Assessment Visits. DOD requires enhanced end-use monitoring for sensitive defense articles, services, or technologies specifically designated by the military departments’ export policy, the interagency release process, or by DOD policy as a result of consultation with Congress. DOD requires routine end-use monitoring for all defense articles and services provided through government-to-government programs. Routine end-use monitoring is conducted in conjunction with other security cooperation functions and uses any readily available source of information. State’s Directorate of Defense Trade Controls administers the Blue Lantern program to monitor the end use of defense articles and services exported through direct commercial sales. Under its Blue Lantern program, State is required to conduct end-use monitoring checks based on a case-by-case review of export license applications against established criteria for determining potential risks. To determine whether to conduct a Blue Lantern check, State considers 20 indicators that may trigger a check, such as unfamiliar end users, foreign intermediate consignees with no apparent connection to the end user, and requests for sensitive commodities whose diversion or illicit retransfer could have a negative impact on U.S. national security. Table 10 provides an overview of the Golden Sentry and Blue Lantern end-use monitoring programs.

Department of State Leahy law: No assistance shall be furnished under the Foreign Assistance Act of 1961 or the Arms Export Control Act to any unit of the security forces of a foreign country if the Secretary of State has credible information that such unit has committed a gross violation of human rights.

Department of Defense Leahy law: Of the amounts made available to the Department of Defense, none may be used for any training, equipment, or other assistance for a unit of a foreign security force if the Secretary of Defense has credible information that the unit has committed a gross violation of human rights. The Secretary of Defense shall, in consultation with the Secretary of State, ensure that prior to a decision to provide any training, equipment, or other assistance to a unit of a foreign security force full consideration is given to any credible information available to the Department of State relating to human rights violations by such unit.
Department of State Leahy law: The prohibition does not apply if the Secretary of State determines and reports to specified congressional committees that “the government of such country is taking effective steps to bring the responsible members of the security forces unit to justice.”

Department of Defense Leahy law: The prohibition does not apply if the Secretary of Defense, after consultation with the Secretary of State, determines that the government of such country has taken all necessary corrective steps, or if the equipment or other assistance is necessary to assist in disaster relief operations or other humanitarian or national security emergencies. Not later than 15 days after the use of the exception, the Secretary of Defense shall submit to the appropriate congressional committees a report providing notice of the use of the exception and stating the grounds for the exception. The Secretary of Defense, after consultation with the Secretary of State, may waive the prohibition if he determines that such a waiver is required by extraordinary circumstances. Not later than 15 days after the exercise of any waiver, the Secretary of Defense shall submit a report to the appropriate congressional committees describing the information related to the gross violation of human rights; the extraordinary circumstances that necessitate the waiver; the purpose and duration of the training, equipment, or other assistance; and the U.S. forces and the foreign security force unit involved.

Department of State Leahy law: In the event that funds are withheld from any unit pursuant to the law, the Secretary of State shall promptly inform the foreign government of the basis for such action and shall, to the maximum extent practicable, assist the foreign government in taking effective measures to bring the responsible members of the security forces to justice. The Secretary of State shall also establish, and periodically update, procedures to: ensure that for each country the Department of State has a current list of all security force units receiving U.S. training, equipment, or other types of assistance; facilitate receipt by the Department of State and U.S. embassies of information from individuals and organizations outside the U.S. government on gross violations of human rights by security force units; routinely request and obtain such information from the Department of Defense, the Central Intelligence Agency, and other U.S. government sources; ensure that such information is evaluated and preserved; ensure that when an individual is designated to receive United States training, equipment, or other types of assistance the individual’s unit is vetted as well as the individual; seek to identify the unit involved when credible information of a gross violation exists but the identity of the unit is lacking; and make publicly available, to the maximum extent practicable, the identity of those units for which no assistance shall be furnished pursuant to the law.

Department of Defense Leahy law: The Secretary of Defense shall establish, and periodically update, procedures to ensure that any information in the possession of the Department of Defense about gross violations of human rights by units of foreign security forces is shared on a timely basis with the Department of State.
Department of Defense Leahy law: The Secretary of Defense shall submit a report to congressional appropriations committees not later than March 31, 2015, and annually thereafter through 2024, providing information on the total number of cases submitted for vetting, and the total number of such cases approved, suspended, or rejected for human rights reasons, non-human rights reasons, or administrative reasons; in the case of units rejected for non-human rights reasons, a detailed description of the reasons relating to the rejection; a description of the interagency processes used to evaluate compliance with vetting requirements; and any comments from commanders of the combatant commands about how the Department of Defense Leahy law affects their theater security cooperation plans, among other things.

Table 11 provides a summary of the status of all bilateral security-related assistance allocated for Egypt in fiscal years 2011 through 2015, as of September 30, 2015. The U.S. government provides bilateral security-related assistance to Egypt through a number of accounts, including the Foreign Military Financing; International Narcotics Control and Law Enforcement; International Military Education and Training; and Nonproliferation, Anti-terrorism, Demining, and Related Programs accounts. Tables 12 through 15 provide information on the status of funds allocated for assistance for Egypt from these accounts for fiscal years 2011 through 2015, as of the end of fiscal year 2015. In addition to the contact named above, Jeff Phillips (Assistant Director), Drew Lindsey (Analyst-in-Charge), Ryan Vaughan, Rachel Dunsmoor, Ashley Alley, Tina Cheng, David Dayton, Justin Fisher, Jeff Isaacs, and Oziel Trevino made key contributions to this report.
The U.S. government has allocated an average of about $1.3 billion annually in security assistance for Egypt in fiscal years 2011 through 2015. DOD and State have established end-use monitoring programs to ensure that military equipment transferred to foreign countries is safeguarded and used for its intended purposes. In addition, legal requirements, known as the Leahy laws, prohibit DOD- and State-funded assistance to units of foreign security forces if there is credible information that these forces have committed a gross violation of human rights. This report examines, for fiscal years 2011 through 2015, the extent to which the U.S. government (1) committed or disbursed funds allocated for security-related assistance for Egypt, (2) implemented end-use monitoring for equipment transferred to Egyptian security forces, and (3) vetted Egyptian recipients of security-related assistance for human rights concerns. GAO analyzed U.S. agency data and documentation; conducted fieldwork in Egypt; and interviewed U.S. officials in Washington, D.C., and Cairo, Egypt. This is the public version of a sensitive but unclassified report issued in February 2016. U.S. agencies allocated approximately $6.5 billion for security-related assistance to Egypt in fiscal years 2011 through 2015. As of September 30, 2015, over $6.4 billion of the $6.5 billion total had been committed or disbursed. The majority of the funding (99.5 percent) was provided to Egypt through the Department of State’s (State) Foreign Military Financing (FMF) account. The funds from this account were used to purchase and sustain a wide variety of military systems, including F-16 aircraft, Apache helicopters, and M1A1 tanks. The Departments of Defense (DOD) and State implemented end-use monitoring for equipment transferred to Egyptian security forces, but challenges, including difficulty obtaining the Egyptian government’s cooperation, hindered some efforts. DOD completed all required end-use monitoring inventories and physical security inspections of storage sites for missiles and night vision devices (NVD) in fiscal year 2015, but DOD lacked documentation showing that it completed physical security inspections for these sensitive items in prior years. Despite agreeing to give access, the Egyptian government prevented DOD officials from accessing a storage site to verify the physical security of some NVDs prior to 2015, according to DOD officials and documents. State conducted 12 end-use checks of U.S. equipment exported to Egypt in fiscal years 2011 to 2015, but State data indicate that the Egyptian government’s incomplete and slow responses to some inquiries limited U.S. efforts to verify the use and security of certain equipment, including NVDs and riot-control items. Despite this lack of cooperation, since 2008, State has not used outreach programs in Egypt that are intended to facilitate host country cooperation and compliance with State’s monitoring program. According to State officials, this was due to the small number of end-use checks conducted in Egypt and the lower priority assigned to Egypt relative to other countries. The U.S. government completed some, but not all, human rights vetting required by State policy before providing training or equipment to Egyptian security forces. State deemed GAO’s estimate of the percentage of Egyptian security forces that were not vetted to be sensitive but unclassified information, which is excluded from this public report.
Moreover, State has not established specific policies and procedures for vetting Egyptian security forces receiving equipment. Although State concurred with a 2011 GAO recommendation to implement equipment vetting, it has not established a time frame for such action. State currently attests in memos that it is in compliance with the Leahy law. However, without vetting policies and procedures, the U.S. government risks providing U.S. equipment to recipients in Egypt in violation of the Leahy laws. GAO is making six recommendations to strengthen State's implementation of end-use monitoring and human rights vetting, including utilizing its end-use monitoring outreach programs and developing time frames for establishing policies and procedures for equipment vetting. State generally agreed with these recommendations.
Nearly two-thirds of the world’s coca crop is grown in Peru. Most of that coca is processed into cocaine base, which is flown to Colombia to make cocaine for shipment to the United States and Europe. Since the 1980s, the primary coca-growing and drug-trafficking activities in Peru have been in its Upper Huallaga Valley. During the early 1980s, the United States provided support for Peru to conduct manual eradication of mature coca leaf. However, because of security concerns for personnel conducting manual eradication, these activities ceased in 1987. Gradually, the United States began to (1) support Peruvian efforts to eradicate coca seedbeds and (2) conduct law enforcement operations against drug-trafficking activities. Before 1989, both operations were conducted by helicopter from Tingo Maria, about 150 miles southeast of Santa Lucia, the center of illegal drug activities. In 1989, the United States and Peru moved their operations to a base located near the town of Santa Lucia. The base, which became the center of U.S. and Peruvian eradication and law enforcement operations, supported between 430 and 492 personnel, including 32 U.S. personnel. The United States continued to provide support to the base until late 1993. The map on page 3 shows the locations of these bases. The Departments of State and Defense and the Drug Enforcement Administration (DEA) coordinate antidrug activities with Peruvian law enforcement and military organizations. At the U.S. embassy in Peru, these functions are carried out by the Narcotics Affairs Section (NAS), the U.S. Military Assistance Advisory Group, and the DEA Country Attache’s Office. Other U.S. agencies also provide support to Peruvian antidrug programs and operations. According to U.S. officials, the rationale for building the Santa Lucia base was to place U.S. personnel in the safest possible environment from which to conduct antidrug activities. U.S. personnel flying into the heart of the drug-trafficking area were increasingly at risk because, in the mid-1980s, the Sendero Luminoso—a Maoist organization attempting to overthrow the Peruvian government—took control of the area. This group protected those trafficking in drugs in return for monetary support. In 1988, the United States began to build the Santa Lucia base, which included an airfield, a maintenance facility for 6 to 10 U.S. UH-1H helicopters used for eradication and law enforcement missions, and housing. Because the base was in a dense tropical area with no safe, accessible roads, fixed-wing aircraft (C-123s and C-130s) were supplied by the State Department’s Bureau of International Narcotics Matters (INM) to transport personnel, equipment, and supplies to the base from Lima several times each week. In addition, DEA and Peruvian aircraft used the base for law enforcement operations. According to INM and U.S. embassy records, about $49.2 million was provided to construct, maintain, and operate the Santa Lucia base during fiscal years 1988-93. These funds were included as part of the State Department’s International Narcotics Control Program, which is authorized under the Foreign Assistance Act of 1961, section 481, as amended. INM provided these funds for five projects in Peru (see table 1).
Included in the $9.8 million construction project are the costs of daily laborers to construct the base and its related infrastructure; installation of prefabricated housing; equipment and commodities needed to construct and operate helicopter and fixed-wing aircraft facilities and an airstrip; recreational equipment, food, and clothing; and miscellaneous items such as payments to the Peruvian engineer in charge of constructing the airstrip and the rental of heavy equipment. The $16.9 million for CORAH was provided for the direct and indirect costs of supporting between 200 and 250 Peruvian workers to help construct, maintain, and operate Santa Lucia base; provide support services; and perform limited antidrug duties. Direct costs were for activities on the base, including operating equipment such as electrical generators, providing food service, cutting the grass, procuring supplies, and monitoring U.S.-provided equipment to ensure that it was used for counternarcotics purposes. Indirect costs were for support provided from the CORAH headquarters at Tingo Maria, including the purchase of food, construction equipment such as trucks and a bulldozer, supplies and materials, and general supplies for Santa Lucia; and administrative support functions for CORAH personnel at the base. Between March 1990 and November 1993, 10 CORAH workers eradicated coca seedbeds about 3 to 4 hours a day, 3 to 4 days a week. This work was suspended in November 1993 because of budgetary constraints. In addition, up to 50 CORAH workers installed concrete obstacles to block 10 illegal airstrips in the Upper Huallaga Valley. The costs for blocking the airstrips could not be readily determined, but the embassy’s Narcotics Affairs Section said they were included in the direct and indirect costs discussed above. About $6.4 million was spent for police support at Santa Lucia, including (1) per diem for Peruvian police officers stationed at the base to provide security for the base and for workers on eradication missions and (2) commodities used by the police to support the base. About $4.6 million was provided for salaries and expenses of the NAS staff and activities related to administrative support of the base as well as other antidrug programs in Peru. Such support included processing procurement vouchers for goods and services. About $11.5 million was spent on operating and maintaining the fixed-wing aircraft that transported personnel, supplies, and other items to and from Santa Lucia. Generally, two C-123 and two C-130 aircraft were used for these missions. This was funded by INM’s airwing account and not included in Peru’s antidrug program budget. The Congress reduced the State Department’s annual International Narcotics Control Program request for fiscal year 1994 from $148 million to $100 million. After coordinating with various U.S. agencies involved in antidrug activities regarding program options, the State Department decided that it could not adequately support maintenance and operations at the Santa Lucia base while supporting its antidrug programs in Peru and other countries. Thus, in December 1993, the United States stopped supporting the base and the Peruvian government assumed responsibility for the base’s administrative and operational control. State Department officials reported that although trafficking activities had moved outside of the immediate range of U.S.-provided helicopters, they would have maintained the Santa Lucia base had the budget not been reduced.
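As a quick arithmetic check, the five project amounts itemized above sum to the roughly $49.2 million total cited for fiscal years 1988-93. The snippet below simply performs that addition; the dictionary labels are shorthand for the projects described above, not official account names.

```python
# Quick check: the five INM-funded project costs described above (in
# millions of dollars) sum to the roughly $49.2 million total cited.
projects = {
    "base construction": 9.8,
    "CORAH support": 16.9,
    "police support": 6.4,
    "NAS administrative support": 4.6,
    "fixed-wing aircraft operations": 11.5,
}
total = sum(projects.values())
print(f"Total: ${total:.1f} million")  # Total: $49.2 million
```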
U.S. embassy officials stated that they had already begun to cover an expanded area by conducting helicopter operations from forward operating locations outside of the Santa Lucia area. Some aspects of the restructured programs have been completed and are in place. Specifically, NAS has moved its helicopter maintenance facilities from Santa Lucia to Pucallpa, and the government of Peru is now responsible for operating the Santa Lucia base. In addition, INM-owned fixed-wing aircraft used to support U.S. antidrug operations have been returned to the United States, and fixed-wing aircraft are now being rented from Peru. However, the U.S. embassy has faced a number of obstacles to fully implementing a mobile basing concept for conducting antidrug missions, including problems with helicopter maintenance, internal conflicts over the responsibility for planning and coordinating antidrug operations, and a U.S. decision not to share with Peru real-time information and assistance that could lead to the shoot down of civilian aircraft suspected of drug trafficking. To stay within the budget, the State Department moved helicopter maintenance facilities and associated U.S. and Peruvian police personnel from Santa Lucia to several different sites in Pucallpa, where an international airport is located. The airport serves as the center from which operations are conducted and from which maintenance operations are resupplied and supported. The airport also houses U.S. fixed-wing aircraft used by DEA and other agencies to support law enforcement operations. The maintenance facilities, aviation ground support equipment, and spare parts for 10 UH-1H helicopters are located at a Peruvian naval base in Pucallpa. About 15 U.S. contractor personnel assist the Peruvian police in maintaining and operating the helicopters. A total of about 30 U.S. and 30 to 35 Peruvian personnel live in a hotel about 10 minutes from the naval base. The United States spent about $450,000 for (1) security improvements to the hotel, (2) improvements to three warehouses behind the hotel, (3) improvements to a refueling area, and (4) refurbishment of a warehouse and extension of the perimeter wall at the naval base. Planned projects include improvements to the hotel and naval base and construction of a small hangar and ramp area at the airport for fixed-wing aircraft. Cost estimates for these projects were not available. According to U.S. embassy personnel, because of security concerns, U.S. personnel are not allowed to leave the hotel unless they are transported in official vehicles to and from work. According to NAS personnel, morale at Santa Lucia was much better because personnel were free to move around the base after work and tended to interact more readily. With the loss of U.S. support to Santa Lucia, the government of Peru agreed to administer the base and maintain it as a location from which antidrug operations could be conducted. According to the U.S. Embassy’s NAS Director, U.S. officials were concerned that the government of Peru might be unable to provide the resources needed to maintain and operate the base adequately. At the time of our visit, the airstrip needed repair because of holes in the runway and other maintenance problems. The NAS Director estimated that about $1.5 million is needed to repair the airstrip. Nevertheless, DEA is continuing to conduct antidrug operations from the base. Because the State Department stopped supporting the Santa Lucia base, INM-owned fixed-wing support was terminated.
In December 1993, INM’s fixed-wing aircraft—two C-123s and two C-130s—and almost $10.1 million in aircraft spare parts were shipped for storage to Davis Monthan Air Force Base in Tucson, Arizona. The 28 personnel responsible for maintaining the aircraft were returned to the United States. To support antidrug missions at Pucallpa, the embassy now rents fixed-wing aircraft from the Peruvian air force and civilian companies. According to the embassy, the monthly rental costs should be less than $20,000. The embassy’s implementation of the mobile basing concept has been complicated by several problems. The concept included the following assumptions: 8 to 10 helicopters and fixed-wing aircraft would be available for antidrug operations, and an operational planning group would be established in the embassy to plan law enforcement operations. In early 1994, the Defense Department notified State that it had to ground UH-1H helicopters with certain engine serial numbers because those engines had to be overhauled for mechanical problems. This created maintenance problems with the UH-1H helicopters that limited the embassy’s ability to fully support the mobile basing concept. Five of the 10 helicopters used in Peru were grounded for 6 weeks because their engines required overhaul. A total of 11 engines had to be overhauled at a cost of $1.65 million. The five remaining helicopters were used extensively during the 6-week period, and thus all subsequently needed maintenance at about the same time. In June 1994, 3 of the 10 helicopters were in for scheduled maintenance, leaving the embassy with only 7 to conduct operations. Recent embassy reports say that high levels of metal particles are being found in gear boxes and engines, indicating excessive wear and use. An embassy official indicated that this situation may cause future maintenance problems and affect mobility of operations. According to DEA officials, helicopter maintenance problems have limited their ability to plan and conduct operations. For example, a DEA official stated that of the 13 missions requiring helicopter support during a recent 3-month period, the helicopters experienced mechanical problems during 6 of them. In two cases, DEA teams were delayed in the jungle because of the problems. In addition, DEA’s CASA-212 fixed-wing transport aircraft was grounded for several months because of maintenance problems. Finally, an embassy official stated that Peruvian aircraft are frequently grounded because their mechanics have not been properly trained. To further complicate matters, the operational planning group was not formally established by the U.S. Ambassador until July 1994, 7 months after the mobile basing concept was approved, and has not yet been staffed. The delay was caused by internal differences within the U.S. embassy about the structure and staffing of the group. The NAS Director believed that the group should be responsible for more than law enforcement operations, including eradication operations, administrative and training support to the Peruvian police, and other operations that may be needed. He also believed that, since the group would be under the DEA attache, it should be composed primarily of persons with law enforcement backgrounds to ensure that DEA conducts operations meeting U.S. antidrug objectives in Peru. DEA and U.S.
military personnel, on the other hand, believed that the group should be primarily responsible for planning law enforcement operations and be staffed with military personnel, who would be more experienced in planning specific operations, identifying logistics support requirements for law enforcement operations in the jungle, and interacting with Peruvian military forces in planning military-type operations. According to U.S. officials, the group will be staffed with DEA agents as well as military personnel providing operational, communications, and logistical expertise. The group will be under the control of the DEA country attache. According to a U.S. embassy official, no specific assignments of military personnel have been made to date. Another factor affecting the mobile basing concept’s implementation is the May 1, 1994, decision to stop sharing certain drug-trafficking information with the governments of Colombia and Peru, which we reported on in August 1994. This step was taken because of legal concerns about the probable criminal liability of U.S. personnel who provide information that could lead to the shooting down of a civilian aircraft suspected of transporting illegal drugs. According to U.S. officials, the sharing of real-time information is critical to ensuring that they can take timely action against drug-trafficking activities to increase the risks associated with these activities. The officials stated that the policy decision had affected their ability to conduct antidrug operations. Although the policy’s impact on the flow of drugs being shipped from Peru to Colombia is unclear, it is clear that pilots flying between Peru and Colombia have changed their operations, since there is little fear of interception by U.S. and Peruvian forces as long as detection capabilities remain negligible and there is no sharing of information. Various U.S. reports and officials have stated that, before the May decision, drug traffickers wanted to minimize their exposure to the air interdiction threat. Thus, they (1) used fewer flights with larger drug loads, (2) flew mainly in the early evening hours, and (3) spent on average only about 10 to 12 minutes loading and unloading their cargoes. U.S. officials in Peru said that since the policy change, drug traffickers have changed their operations: they (1) have begun multiple flights with smaller drug loads and (2) have begun flying during the day, and some traffickers have doubled their time on the ground. In addition, U.S. officials stated that an analysis of flight patterns indicates that traffickers are reverting to more direct air routes from Peru into Colombia instead of the indirect and more time-consuming routes they were taking before the cutoff of information. DEA officials advised us that the policy of not sharing real-time information has caused them to forgo law enforcement operations against illegal drug activities. Finally, a recent Defense Department report states that the policy of not sharing real-time information has reduced the risks associated with drug-trafficking activities in Peru. On October 5, 1994, the President signed legislation that provides official immunity for authorized U.S. personnel from liability, notwithstanding any other provision of law, if information they provide is used to shoot down civilian aircraft suspected of drug trafficking.
However, before sharing of information can resume, the President must determine that (1) illicit drug trafficking poses a national security threat to Peru and (2) Peru has appropriate procedures in place to protect against the innocent loss of life. The executive branch is discussing this issue with the Peruvian government. As of November 30, 1994, the sharing of information had not yet resumed. To obtain information for this report, we interviewed officials and reviewed pertinent documents at the Departments of State and Defense and the Drug Enforcement Administration in Washington, D.C.; the U.S. Southern Command in Panama; and the U.S. Embassy in Lima, Peru. We also interviewed Peruvian police officials responsible for counternarcotics programs. We did our review between April and July 1994 in accordance with generally accepted government auditing standards. As requested, we did not obtain written agency comments on a draft of this report. However, we discussed the information in this report with agency officials and included their comments where appropriate. Unless you release its contents earlier, we plan no further distribution of this report until 10 days after its issuance. At that time, we will send copies of the report to the Secretaries of Defense and State, the Administrator of the Drug Enforcement Administration, and the Director of the Office of National Drug Control Policy. We will also provide copies to others on request. This report was prepared under the direction of Mr. Benjamin Nelson, Associate Director, who may be reached on (202) 512-4128. Other major contributors are Mr. Andres Ramirez, Assistant Director, and Mr. Ronald D. Hughes, Evaluator-in-Charge. Joseph E. Kelley, Director-in-Charge, International Affairs Issues.
Pursuant to a congressional request, GAO provided information on U.S. antidrug efforts in Peru, focusing on: (1) the rationale for, and costs associated with, the construction, maintenance, and operation of the Santa Lucia antidrug base; (2) the rationale for discontinuing support of the Santa Lucia base; and (3) the current status of U.S. efforts to restructure antidrug programs in Peru. GAO found that: (1) the Santa Lucia base was constructed to place U.S. personnel in the safest possible environment from which to conduct antidrug activities; (2) between fiscal years 1988 and 1993, the State Department spent about $49.2 million to construct, maintain, and operate the Santa Lucia base; (3) in December 1993, the U.S. embassy restructured its antidrug programs in Peru because it could not continue to support the base while also supporting other U.S. antidrug efforts; (4) although the executive branch has approved a formal mobile basing concept to implement antidrug efforts, the U.S. embassy has been slow in implementing the concept because of maintenance problems with helicopters, internal differences within the U.S. embassy over how the operational planning group would function to coordinate law enforcement operations, and the decision to stop sharing information with the government of Peru that could be used to shoot down civilian aircraft suspected of drug trafficking; and (5) although legislation has been passed to allow information sharing on drug activities, the Administration has not reached agreement with Peru on certain required preconditions.
FHWA is the DOT agency responsible for federal highway programs—including distributing billions of dollars in federal highway funds to the states—and developing federal policy regarding the nation’s highways. The agency provides technical assistance to improve the quality of the transportation network, conducts transportation research, and disseminates research results throughout the country. FHWA’s business units conduct these activities through its research and technology program, which includes “research” (conducting research activities), “development” (developing practical applications or prototypes of research findings), and “technology” (communicating research and development knowledge and products to users). FHWA maintains a highway research facility in McLean, Virginia. This facility, known as the Turner-Fairbank Highway Research Center, has over 24 indoor and outdoor laboratories and support facilities. Approximately 300 federal employees, on-site contract employees, and students are currently engaged in transportation research at the center. According to FHWA officials, the agency’s research and technology program is oriented to supporting the agency’s and DOT’s strategic goals for the nation’s transportation system, including to promote public health and safety by working toward the elimination of transportation-related deaths and injuries; to provide an accessible, affordable, and reliable transportation system for all people, goods, and regions; to support a transportation system that sustains the nation’s economic growth; to protect and enhance communities and the natural environment affected by transportation; and to ensure the security of the transportation system for the movement of people and goods, and to support the national security strategy. The research and technology program is generally a component of broader agency programs directed toward the achievement of these strategic goals. For example, in a recent report the Transportation Research Board’s Research and Technology Coordinating Committee (RTCC) stated that most of FHWA’s research and technology program’s projects are aimed at incremental improvements to lower highway construction and maintenance costs, improve highway system performance, increase highway capacity, reduce highway fatalities and injuries, reduce adverse environmental impacts, and provide a variety of benefits such as improved travel times and fewer hazards for highway users. Concerned about the strategic focus of surface transportation research and technology activities, Congress required DOT to establish a strategic planning process to identify national priorities related to research and technology for surface transportation when it passed the Transportation Equity Act for the 21st Century in 1998. This process was to result in a strategic plan that included, among other things, performance goals, resources needed to achieve those goals, and performance indicators for the succeeding 5 years for each area of research and technology deployment. The plan was also to be developed with comments from external stakeholders. In response to this requirement, FHWA contributed to the development of a research, development, and technology strategic plan for all of DOT. DOT’s plan identifies formal research, development, and technology strategies to support each of DOT’s strategic goals.
The plan is not focused solely on surface transportation research but applies to all modes, and it includes examples of research activities undertaken by FHWA in support of the agency’s strategic goals. Congress also required that a group established by the National Research Council review DOT’s plan, and this has taken place for several years. Separately, in 1998 FHWA developed a 10-year strategic plan for the agency as a whole, stating that research is a strategy for achieving the plan’s objectives. The Research, Development, and Technology business unit has developed performance plans that support some of FHWA’s research efforts. Funding mechanisms for this program’s activities have varied in recent years. Prior to fiscal year 1992, the program’s activities were wholly funded from FHWA’s administrative and operating funds. From fiscal years 1992 through 1997, the program was supported by a mix of operating funds and funds made available for specific types of research. For fiscal years 1998 through 2003, the Transportation Equity Act for the 21st Century authorized funding for the following seven research activities: surface transportation research, technology deployment, training and education, intelligent transportation systems, intelligent transportation systems deployment, university transportation centers, and the Bureau of Transportation Statistics. Since 1998, FHWA has generally not used administrative funds for research activities. A portion of the funds for the research and technology program is designated for or directed to particular research programs and recipients, either in the authorization or appropriations legislation or in committee reports. Although FHWA technical staff set priorities for the research and technology program, its activities are carried out through a combination of federal employees, private contractors and grantees, and university researchers. During the past decade, the use of contract employees instead of federal employees to conduct research has increased. Because the program’s authorizing legislation is scheduled to expire in fiscal year 2003, Congress will have to reauthorize the program and determine how it will be funded if the program is to continue. Since 1998, individual business units within FHWA have directed and carried out the activities of FHWA’s research and technology program that fall under the surface transportation research and technology deployment areas. (See app. II for agency organization charts.) Under the current organization, directors of these business units (Federal Lands Highway; Infrastructure; Operations; Planning and Environment; Policy; Research, Development, and Technology; and Safety) work collaboratively to provide leadership for the program’s activities (see table 1). The program’s management is complex because these business units are individually responsible, among other things, for identifying research needs, developing strategies to address transportation problems, and managing research and technology activities that support the agency’s strategic goals. In some cases, the business units conduct their own research. However, the Research, Development, and Technology business unit, located at the Turner-Fairbank Highway Research Center, conducts research for the Infrastructure, Operations, and Safety business units.
The Research, Development, and Technology business unit also works with the other business units to prepare materials to support the program’s overall budget, and it serves as FHWA’s liaison to other organizations that advise FHWA on research or conduct highway-related research. The agency’s leadership team, composed of the business unit directors, field service directors, a division administrator, the FHWA administrator, and the FHWA executive director, meets periodically to advise the business units on research and technology program priorities, budgets, and milestones. FHWA’s leadership team advises the business units on how funds should be distributed by considering designations in statutes and committee reports and the stated needs of individual business units. The Office of the Administrator approves final budgets for the business units. In fiscal year 2002, the business unit responsible for the largest percentage of surface transportation research and technology deployment funds was the Infrastructure business unit (see fig. 1). Prior to the agencywide restructuring in 1998, research activities were managed throughout the organization, including at the Office of the Associate Administrator for Research and Development and the Office of Technology Applications. Decisions related to developing research and technology projects, budgets, and acquisition plans were made by the Research and Technology Executive Board. Chaired by the executive director, the board included all agency associate administrators, the director of the Intelligent Transportation Systems Joint Program Office, and one regional administrator. The board met periodically to obtain information from working groups composed of representatives from across the agency, the National Highway Institute, and other DOT agencies. FHWA has recently assessed the effects of its 1998 agencywide restructuring and has drafted 13 recommendations to address the limitations of the new organization. Two of these recommendations specifically address the agency’s research and technology program, identifying the need to raise its stature in FHWA. The agency has created and filled the position of assistant director for Research, Technology, and Innovation Deployment in response to these recommendations. This new position will also be responsible for implementing recent recommendations made by the RTCC for improving FHWA’s program. In addition to its own research projects, FHWA collaborates with other DOT agencies to conduct research. For example, FHWA works with DOT’s Research and Special Programs Administration to coordinate efforts to support key research identified in the department’s strategic plan. In fiscal year 2001, FHWA and the Research and Special Programs Administration contributed an estimated $15.2 million and $3.5 million, respectively, for these collaborative, “intermodal” research and technology efforts. Examples of FHWA’s research with other transportation modes include: an ongoing study with DOT’s National Highway Traffic Safety Administration, through the Georgia Institute of Technology, to investigate the relationship between vehicle speed and crash risk under various demographic, environmental, and physical conditions.
Funds from FHWA were spent to compare the speeds of drivers involved in crashes with the prevailing speeds of other drivers at the time and location of the crashes; and a study at the Center for Climate Change and Environmental Forecasting, conducted with the collaboration of several other agencies, including DOT’s Maritime Administration, Federal Railroad Administration, and National Highway Traffic Safety Administration. This study examined the potential effects on transportation infrastructure of such climate change phenomena as rising sea levels, increasing frequency of severe weather events, and changing precipitation levels. Several other entities and organizations, detailed below, conduct surface transportation research that can be related to FHWA’s research and technology program. FHWA officials told us that the agency has both formal and informal means for coordination with some of these other organizations. Each of the 50 states, Washington, D.C., and the Commonwealth of Puerto Rico has an independent highway research program. In general, state programs address technical questions associated with the planning, design, construction, rehabilitation, and maintenance of highways. State highway research projects usually reflect local concerns. According to an official at the Transportation Research Board, 47 states indicated that they spent approximately $322 million in 1999 on such research. State research programs are generally funded through federal funds set aside from the federal highway aid apportioned to the states. FHWA division administrators in each state approve the state’s annual or biennial research program, funded by a subset of these federal funds. The national association that represents state departments of transportation, the American Association of State Highway and Transportation Officials, also plays a key role in highway research. This association has a standing committee on research that develops voluntary standards and guidelines. The National Cooperative Highway Research Program conducts research on acute problems related to highway planning, design, construction, operation, and maintenance that are common to most states. Typically, its research projects are problem-oriented and designed to produce results that have an immediate application. As voluntary program members, state departments of transportation approve research projects and agree to provide financial support. Each member state provides an amount equal to 5.5 percent of its state planning and research funds. Program funding for fiscal year 2001 was $30.6 million. FHWA formally coordinates with members of this program and the American Association of State Highway and Transportation Officials to review proposed projects. FHWA also participates in selecting projects that complement the agency’s defined program, reducing duplication and leveraging limited funding. The private sector conducts or sponsors individual programs. Private organizations include companies that design and construct highways and supply highway-related products, national associations of industry components, and engineering associations active in construction and highway transportation. Funding information for private-sector highway research is generally proprietary in nature, although an official of the Transportation Research Board estimated that the total funding for this research ranged from $75 million to $150 million annually. Universities receive funding for research on surface transportation from FHWA, the states, and the private sector.
For example, since 1988 DOT has awarded grants under its University Transportation Center program to universities throughout the nation to support education, research, and technology deployment. Each grantee is called a University Transportation Center, whether working alone or as the lead of a consortium of universities. Some have formed centers for research, education, and training in specialty areas related to highway transportation. Thirty-three centers currently exist; they were either selected competitively or specified in legislation. The Office of Innovation, Research, and Education within the department's Research and Special Programs Administration manages the program; funding provided for the 33 centers in fiscal year 2001 from FHWA's research and technology program amounted to $23.9 million. Leading organizations that conduct scientific and engineering research, other federal agencies with research programs, and experts in research and technology have identified and use best practices for developing research agendas and evaluating research outcomes. Although the uncertain nature of research outcomes over time makes it difficult to set specific, measurable program goals and evaluate results, the best practices we identified are designed to ensure that the research objectives are related to the areas of greatest interest and concern to research users and that research is evaluated according to these objectives. These practices include the following:

Developing research agendas through the involvement of external stakeholders: External stakeholder involvement and merit review are particularly important for FHWA because its research is expected to improve the construction, safety, and operation of transportation systems that are primarily managed by others, such as state departments of transportation. According to RTCC, research has to be closely connected to its stakeholders to help ensure relevance and program support, and stakeholders are more likely to promote the use of research results if they are involved in the research process from the start. The committee also identified merit review of research proposals based on technical criteria by independent technical experts as being necessary to help ensure the most effective use of federal research funds. In 1999, we reported that other federal science agencies—such as the Environmental Protection Agency and the National Science Foundation—used such reviews to varying degrees to assess the merits of competitive and noncompetitive research proposals.

Evaluation of research using expert review of the quality of research outcomes or other best practices: A form of expert review called peer review is a process that includes an independent assessment of the technical and scientific merit or quality of research by peers with essential subject area expertise and perspective equal to that of the researchers. Peer review does not require that the final impact of the research be known. In 1999, we reported that federal agencies, such as the Department of Agriculture, the National Institutes of Health, and the Department of Energy, use peer review to help them (1) determine whether to continue or renew research projects, (2) evaluate the results of research prior to publication of those results, and (3) evaluate the performance of programs and scientists.
In its 1999 report, the Committee on Science, Engineering, and Public Policy also stated that expert review is widely used under the Government Performance and Results Act to evaluate three aspects of research: (1) the quality of current research as compared with other work being conducted in the field, (2) the relevance of the research to the agency's goals and mission, and (3) whether the research is at the "cutting edge." Although FHWA engages external stakeholders in elements of its research and technology program, the agency currently does not follow the best practice of engaging external stakeholders on a sustained basis. The agency expects each business unit to determine how, or whether, to involve external stakeholders in the research process; as a result, this approach is applied inconsistently. Prior to its 1998 restructuring, FHWA worked with some external stakeholders to initiate "roadmapping" activities for each of its key research areas that would have resulted in research agendas for these areas. To prepare individual roadmaps, the agency's working groups collaborated across agency office boundaries and with members of the RTCC. However, before roadmapping had been completed for all research areas, FHWA changed its approach to managing research because of the agency's reorganization, and RTCC's involvement with roadmapping ceased. FHWA acknowledges that its approach to preparing research agendas is inconsistent and that the directors of FHWA's business units rely primarily on input from the agency's business units, resource centers, and division offices. Although agency officials told us that resource center and division office staff provide the business unit directors with input based on their interactions with external stakeholders, external stakeholder input into developing research agendas is usually ad hoc, provided through technical committees and professional societies. For example, the agency's agenda for environmental research was developed with input from both internal sources (including DOT's and FHWA's strategic plans and staff) and external sources (including the Transportation Research Board's reports on environmental research needs and clean air, environmental justice leaders, planners, civil rights advocates, and legal experts). Similarly, the agency uses external stakeholders to provide merit review of research projects only on an ad hoc basis. For example, to prepare its "Conditions and Performance Report," the Policy business unit used a peer review group to provide input into the Highway Economic Requirements System (an economic model that uses marginal cost-benefit analysis to optimize highway investment). FHWA acknowledges that the agency lacks a consistent, transparent, and systematic approach for engaging stakeholders in setting research agendas. However, FHWA has recently taken several steps to increase the involvement of external stakeholders in developing research agendas. First, FHWA's work with RTCC has given the agency occasional external guidance for its overall program since 1991. The committee points out, however, that it cannot provide broad-based stakeholder input on the full range of potential highway research topics or specific projects on a continuing basis because its membership is not representative of all the disciplines included in FHWA's research and technology program.
In its 2001 report, the committee recommended that decisions about FHWA research topics balance stakeholders' concerns against external experts' reviews and recommendations as to which research areas hold promise for significant breakthroughs. According to FHWA's draft response to the recommendation, the agency plans to develop such a process by June 30, 2002. In addition, in 1998, FHWA helped organize a National Highway Research and Technology Partnership Forum to identify national highway research and technology needs using input from external stakeholders. Although the forum identified research needs and priorities for FHWA's consideration in its draft report of August 2001, the forum's long-term role remains uncertain. FHWA officials told us that their ability to develop research agendas using best practices is also affected by funding designations contained in statutes and committee reports. These designations take a variety of forms, including requiring FHWA to initiate or maintain specific research efforts and specifying dollar amounts for particular recipients. According to agency officials, the designations made by the Transportation Equity Act for the 21st Century and the conference reports accompanying recent appropriations acts have represented significant proportions of the agency's research budget. Using agency data, we calculated that 44 percent of authorized surface transportation research and technology deployment funds in fiscal year 2000, 48 percent in fiscal year 2001, and 44 percent in fiscal year 2002 were designated (see app. I, tables 4, 5, and 6). Agency officials acknowledged that these funding designations reflect congressional interests and priorities but stated that without them, FHWA would have a greater opportunity to consistently plan its research agendas and select researchers for its projects according to accepted best practices. In 1999, the Committee on Science, Engineering, and Public Policy reported that federal agencies supporting research in science and engineering have been challenged to find the most useful and effective ways to evaluate the performance and results of the research programs they support. However, the committee found that research programs, no matter what their character and goals, can be evaluated meaningfully on a regular basis and in accordance with the Government Performance and Results Act. The committee emphasized that evaluation methods must match the type of research and its objectives, and it concluded that expert or peer review is a particularly effective means of evaluating federally funded research. The peer review process includes an independent assessment of the technical and scientific merits of research by those with knowledge and expertise equal to that of the researchers whose work they review. According to FHWA officials, the agency does not have an agencywide, systematic process to evaluate whether its research projects are achieving intended results and does not generally use a peer review approach. Although the agency's business units may use various methods, such as obtaining feedback from customers and evaluating outputs or outcomes against milestones, they all use success stories as the primary method of evaluating research outcomes. According to agency officials, success stories are examples of research results adopted or implemented by such stakeholders as state departments of transportation.
Agency officials told us that although peer reviews are useful for assessing research quality, relevance, and technical breakthroughs, success stories can document the financial returns on investment and the nonmonetary benefits of research and technology efforts. FHWA officials provided us with the following examples of success stories:

Research conducted by the Infrastructure business unit produced a specification guide on how to mitigate earthquake damage to structures. The guide was adopted by the American Association of State Highway and Transportation Officials for inclusion in its guidance to state departments of transportation.

The operations research and technology group developed the 511 traveler telephone number, which replaced 300 different traveler information telephone numbers nationwide. This single, three-digit number is currently being used in the states of Utah and Nebraska and in parts of Virginia, Kentucky, and Ohio to provide motorists with timely local travel information to help relieve traffic congestion.

To respond to one of FHWA's priority safety emphases, the safety research and technology group developed rumble strips to alert drivers whose vehicles are drifting off the road. According to agency officials, in the eight states surveyed that have used rumble strips, crash reductions have ranged from 18 to 72 percent, and cost-benefit ratios have ranged from 30:1 to as high as 60:1.

Research on long-term pavement performance is significantly improving the pavement-engineering process nationwide. Engineers are using a long-term pavement performance binder selection tool to more accurately determine the asphalt binder grade needed for specific environmental conditions. This software tool has helped highway agencies save at least $50 million each year by reducing the application of unnecessary materials that increase the costs of highway construction.

In 2001, RTCC also concluded that peer or expert review is an appropriate way to evaluate FHWA's surface transportation research and technology program. The committee therefore recommended a variety of actions, including a systematic evaluation of outcomes by panels of external stakeholders and technical experts, to help ensure the maximum return on investment in research. Agency officials told us that increased stakeholder involvement and peer review would require significant additional expenditures for the program. However, a Transportation Research Board official told us that the cost of obtaining expert assistance could be relatively low because the time needed to provide input would be minimal and the input could be provided by such inexpensive methods as electronic mail. As a partial response to RTCC's recommendation, FHWA has established a laboratory assessment process that will be used to conduct regular reviews of the Turner-Fairbank Highway Research Center. These reviews will be conducted by panels of external technical experts and will address such issues as the technical excellence and quality of laboratory activities. FHWA's draft response to this recommendation indicates that it plans to initiate an evaluation process by June 30, 2002. With hundreds of millions of dollars devoted to its research, FHWA's research and technology program has the potential to significantly improve the nation's highway system.
FHWA has described several success stories to us, but because its decisions about selecting research and identifying priorities make uneven use of best practices, such as seeking external input, it is unclear whether the agency is selecting the most important and relevant research. In addition, because FHWA does not systematically evaluate its research and technology program, it is unclear whether the research is having the intended results or whether some refocusing of the research would be justified. We therefore agree with several of the recent recommendations from the Transportation Research Board's Research and Technology Coordinating Committee, which were designed to remedy these limitations of FHWA's program. In its draft response to these recommendations, FHWA indicated that it will take action on most of them. The cost of making such improvements in FHWA's research and technology program is unknown and will influence the extent to which FHWA can adopt certain best practices. Congress has been concerned about the strategic focus of FHWA's research and technology program and will soon have to make decisions about the nature of the program and the level of resources to devote to it. Information generated by FHWA's potentially improved processes for developing research agendas and evaluating research outcomes, as well as information about the cost of such changes, will therefore also be useful to Congress. To help ensure that FHWA's research agenda and approach to evaluation identify research with the highest value to the surface transportation community and monitor the outcomes of that research, we recommend that the Secretary of Transportation direct the FHWA administrator to develop a systematic approach for obtaining input from external stakeholders in determining the research and technology program's agendas; develop a systematic process for evaluating significant ongoing and completed research that incorporates peer review or other best practices in use at federal agencies that conduct research; and develop specific plans for implementing these recommendations, including time frames and cost estimates. We obtained oral comments on a draft of this report from FHWA officials, including the director of Research, Development, and Technology and the director of the Office of Program Development and Evaluation. These officials indicated that they were pleased that the draft report recognized some of the FHWA research and technology program's accomplishments to date, along with its potential to significantly improve the nation's highway system. They also indicated general agreement with the draft report's overall assessment of the program and with its recommendations. The FHWA officials told us that they have been working with both internal and external groups to assess the processes used to plan the research and technology program and to evaluate its results. These officials maintained that the program is essentially sound and pursues worthy research effectively with key program stakeholders. Nonetheless, the agency officials agreed that improvements are possible in the methods used to select research and technology projects and to evaluate program results.
They told us that FHWA had recently taken steps to make research a higher priority for the agency by investing in research to meet stakeholders' needs, improving delivery of innovations to potential users, and improving business processes in the research and technology program. FHWA officials also told us that, as a result of a major restructuring assessment, the agency has committed to making research and technology more prominent as a strategy for achieving FHWA's mission. With regard to project planning and selection, FHWA officials explained that they are examining ways to improve existing methods for incorporating stakeholder input and seeking means to further ensure that stakeholder perspectives are fully and effectively considered. Finally, with regard to evaluating program results, FHWA officials told us that although current methods have merit, more extensive and consistent use of best practices such as peer review could benefit the program. We acknowledge that FHWA has recently planned or put into place several initiatives designed to improve its research and technology program, and we describe these actions in this report. Nevertheless, we continue to believe that additional actions in response to our recommendations are warranted to improve FHWA's processes for setting research agendas and evaluating research efforts. We are sending copies of this report to congressional committees and subcommittees with responsibilities for transportation, the Secretary of Transportation, the administrator of the Federal Highway Administration, and the director of the Office of Management and Budget. We will make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-2834. Key contributors to this report were Sharon Dyer, Sally Gilley, Octavia Parks, Deena Richart, and Kate Siggerud. In fiscal year 1992 (the first year in which FHWA's research and technology program was authorized under the Intermodal Surface Transportation Efficiency Act of 1991), authorized funding for the entire program increased almost fivefold, from approximately $88.6 million in fiscal year 1991 to $442.4 million. Since that time, authorized funding for FHWA's research and technology program has remained relatively flat; from fiscal year 1992 through fiscal year 2001, authorized funding for the program went from $442.4 million to $437.3 million. However, since fiscal year 1998 these authorized funds have been subject to an obligation limitation that has reduced the amounts available for research purposes by an average of about 11 percent a year below authorized funding levels (see fig. 2). The areas of research funded from fiscal years 1992 through 2001 have varied with the authorizing legislation. From fiscal year 1992 through fiscal year 1997, the majority of FHWA's entire surface transportation research and technology funding went to support the Intelligent Vehicle Highway Systems program. The remainder of the funds primarily supported the agency's highway research, development, and technology program and its applied research and technology program. Since fiscal year 1998, the majority of the agency's research and technology program funds have continued to support the intelligent transportation systems program as well as the surface transportation research program. (See tables 2 and 3 for funding allocations by program area for fiscal years 1992 through 2001.)
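To illustrate the obligation-limitation arithmetic, the following sketch (in Python) computes an average annual reduction below authorized levels. Only the fiscal year 2001 authorized total of $437.3 million comes from the text; all other amounts are hypothetical placeholders standing in for the actual figures behind figure 2.

```python
# Sketch: averaging yearly obligation-limitation reductions below authorized
# funding levels. FY 2001's authorized total ($437.3M) is from the text; all
# other figures are hypothetical placeholders.

authorized = {1998: 440.0, 1999: 441.0, 2000: 442.0, 2001: 437.3}  # $ millions
available = {1998: 392.0, 1999: 393.0, 2000: 393.0, 2001: 389.0}   # $ millions after limitation (hypothetical)

reductions = [1 - available[fy] / authorized[fy] for fy in authorized]
average = sum(reductions) / len(reductions)
print(f"Average reduction below authorized levels: {average:.1%}")  # roughly 11%
```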
These funds were subject to designations in statutes and committee reports, with the Infrastructure business unit being the most affected (see tables 4, 5, and 6 for designations by business unit for fiscal years 2000 through 2002). In fiscal year 2002, approximately 80 percent of the surface transportation research and technology deployment funds provided to the Infrastructure business unit were designated.
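The designated-share percentages reported above rest on similar arithmetic. The sketch below illustrates the computation; every dollar amount is hypothetical, chosen only so that the Infrastructure share comes out near 80 percent and the overall share near 44 percent, and the actual amounts appear in tables 4 through 6.

```python
# Sketch: computing designated shares of research funds by business unit.
# All dollar amounts are hypothetical stand-ins for the figures in
# tables 4 through 6.

# (authorized, designated) in $ millions, fiscal year 2002
funds = {
    "Infrastructure": (120.0, 96.0),
    "Operations": (60.0, 8.0),
    "Safety": (40.0, 4.0),
    "Policy": (30.0, 2.0),
}

for unit, (auth, desig) in funds.items():
    print(f"{unit}: {desig / auth:.0%} designated")  # Infrastructure: 80%

total_auth = sum(a for a, _ in funds.values())
total_desig = sum(d for _, d in funds.values())
print(f"Overall: {total_desig / total_auth:.0%} designated")  # 44%
```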
The Federal Highway Administration (FHWA) has received hundreds of millions of dollars for its surface transportation research and technology program during the past decade. For example, the Transportation Equity Act for the 21st Century, enacted in 1998, funded FHWA's transportation research and technology efforts for the 6-year period of 1998 through 2003, including over $447 million for fiscal year 2002 alone. FHWA's research and technology program is complex because each of the program offices within the agency is responsible for identifying research needs, formulating strategies to address transportation problems, and setting goals that support the agency's strategic goals. One business unit at FHWA's research laboratory provides support for administering the overall program and conducts some of the research, and the agency's leadership team provides periodic oversight of the overall program. FHWA's processes for managing the research and technology program, and in particular for developing research agendas and evaluating research outcomes against intended results, do not always align with the best practices of similar federal research and technology programs. FHWA acknowledges that it lacks a consistent, transparent, and systematic process for developing research agendas and involving external stakeholders in determining the direction of the program's research; instead, most external stakeholder involvement is ad hoc, occurring through technical committees and professional societies. The agency primarily uses a "success story" approach to evaluate its research outcomes. While this approach illustrates some of the program's benefits, it cannot serve as the primary method of evaluating research outcomes because these stories represent only a fraction of the program's completed research projects. As a result of these relatively varied processes, it is unclear whether the agency is selecting the research projects with the highest potential value or the extent to which these projects have achieved their objectives.
National securities exchanges and registered securities associations, along with registered clearing agencies and the Municipal Securities Rulemaking Board, are collectively termed self-regulatory organizations (SROs) under Section 3(a)(26) of the Securities Exchange Act of 1934 (Exchange Act). NASD is the SRO of the securities industry responsible for regulating the over-the-counter (OTC) securities market and the products traded in it. NASD's responsibilities are set out in Section 15A of the Exchange Act, and it operates subject to SEC oversight. NASD is responsible for ensuring that its members comply with federal securities laws and NASD rules. It is the largest SRO in the United States, with a membership that includes virtually every broker/dealer in the nation that does a securities business with the public. Through its subsidiaries, NASD Regulation, Inc. (NASDR) and the Nasdaq Stock Market, Inc. (Nasdaq), NASD develops rules and regulations, conducts regulatory reviews of members' business activities, and designs and operates marketplace services and facilities. NASD helps establish and coordinate the policy agendas of its two subsidiaries and oversees their effectiveness. It has delegated to Nasdaq the obligation to develop, operate, and maintain systems, services, and products for the securities markets that NASD operates, as well as responsibility for formulating regulatory policies and listing criteria applicable to those markets. The Nasdaq Stock Market began operation in 1971 as the first electronic, screen-based stock market for non-exchange-listed securities. Nasdaq enables securities firms to execute transactions for investors and for themselves in an environment of real-time trade reporting and automated market surveillance. As of December 1997, more than 6,200 securities were traded on Nasdaq, representing approximately 5,500 companies. In addition to its screen-based operations, Nasdaq is distinguished from stock exchanges by its use of multiple market makers—independent dealers who openly compete with one another for investors' orders in each Nasdaq security. Nasdaq has two tiers: the Nasdaq National Market, where approximately 4,200 of Nasdaq's larger companies are listed and traded, and the Nasdaq SmallCap Market, where approximately 1,300 smaller, emerging growth companies are traded. Before a company's stock can be traded on the Nasdaq Stock Market, the company must be admitted to Nasdaq. Upon request, the company receives written notice of the applicable Nasdaq qualification requirements. The company must then submit a listing application (together with supporting financial statements) in which it states that it (1) will abide by all applicable marketplace rules, (2) currently meets the applicable requirements for inclusion of its stock in Nasdaq, (3) will file with NASD copies of all reports or other information filed with SEC or other regulatory authorities, and (4) will pay the fees associated with inclusion in Nasdaq. In addition to the listing application, all companies listing on the Nasdaq Stock Market are required to complete and sign a listing agreement. Nasdaq has authority over the initial and continued inclusion of securities in its markets in order to maintain the quality of, and public confidence in, those markets. Nasdaq may deny initial inclusion or delist securities even though the securities meet all criteria for initial or continued inclusion. ". . .
the NASD's role in Nasdaq is the same as that of the organized exchanges with respect to the lists of securities traded on them. . . . primary emphasis must be placed on the interests of prospective future investors. The latter group is entitled to assume that the securities in the system meet the system's standards. Hence, the presence in Nasdaq of non-complying securities could have a serious deceptive effect." SEC's statutory oversight responsibilities regarding Nasdaq's listing requirements include its authority to (1) review and approve or deny SRO-proposed rule changes, (2) inspect SROs, and (3) review listing decisions either on appeal or on its own initiative. SRO rules and proposed rule changes may cover such activities as organization and administration, financial products traded, business conduct, and discipline. SRO rules also include listing requirements for traded companies; Nasdaq's listing requirements are embodied in its marketplace rules. SEC reviews SRO-proposed rules to ensure that they are consistent with the requirements of the Exchange Act and subsequent regulations; if SEC cannot make such a finding, it must disapprove the proposed rule change. On February 28, 1997, NASD filed a proposed rule change with SEC to make listing requirements for issuers listed on Nasdaq more stringent. SEC approved the rule change on August 22, 1997. In addition to its authority to approve SRO-proposed rules, the Exchange Act authorizes the Commission to conduct "reasonable periodic, special, or other examinations" of "[a]ll records" maintained by SROs. These examinations, or inspections, may be conducted "at any time, or from time to time," as the Commission "deems necessary and appropriate in the public interest, for the protection of investors, or otherwise in furtherance of the purposes of this title." The SEC office responsible for conducting these inspections is the Office of Compliance Inspections and Examinations (OCIE). The Commission created OCIE in 1995 to streamline and improve the inspection process; previously, responsibility for inspections was divided between the Divisions of Market Regulation and Investment Management. OCIE's stated mission is to protect investors, foster compliance with the securities laws, and deter violative conduct through effective inspections of regulated entities. Among the types of inspections OCIE conducts are routine oversight inspections of programs administered by securities industry SROs to monitor the effectiveness with which these organizations fulfill their statutory responsibilities under the federal securities laws. These inspections test SROs' compliance with their regulatory and other duties and are to be conducted routinely on a cyclical basis. OCIE does not inspect an entire SRO, focusing instead on particular program areas, and it has inspected several programs in each SRO annually. Its inspection goals are based on such criteria as an established inspection cycle, the length of time since the last visit, known problems, or recent program developments. Inspection reports are to be reviewed internally by senior management within OCIE as well as by the Commissioners, where appropriate. Our review focuses on Nasdaq's Listing Department and on SEC's Office of Compliance Inspections and Examinations, the office responsible for oversight of SROs. To determine how SEC met its oversight responsibilities regarding SRO listing programs, we reviewed SEC inspection reports, inspection workpapers, annual reports, and other SEC internal documents.
We also interviewed SEC officials. To determine whether Nasdaq followed its listing and maintenance requirements with regard to Comparator, we reviewed NASD and Nasdaq manuals; Nasdaq, NASDR, and SEC documents and court papers; and SEC filings. We also interviewed Nasdaq and SEC officials. To identify the actions Nasdaq has taken since May 1996, we reviewed Nasdaq documents and SEC filings. To determine how Nasdaq monitors exceptions to its listing and maintenance requirements, we interviewed Nasdaq officials and reviewed NASD and Nasdaq manuals. We obtained written comments on a draft of this report from SEC (see app. I) and Nasdaq (see app. II); their comments are discussed at the end of this letter. We did our work in Washington, D.C., between February and September 1997 in accordance with generally accepted government auditing standards. SEC's oversight actions related to Nasdaq's Listing Department included approving two rule changes NASD proposed to make its SmallCap listing requirements more restrictive. In 1997, SEC also inspected Nasdaq's Listing Department for the first time since 1986. In 1991, and again in 1997, NASD proposed, and SEC approved, rule changes that made listing and maintenance requirements more restrictive for the SmallCap Market. For example, the 1991 change doubled, from $2 million to $4 million, the assets NASD required of companies applying for listing. The 1997 change tightened this requirement from total assets of $4 million to net tangible assets of $4 million; net tangible assets are total assets minus total liabilities and goodwill. The 1997 rule change retained the $1 minimum bid price for common and preferred stock for continued listing. However, the 1997 rule change removed an alternative available under the 1991 rule that allowed a company to maintain its listing when its bid price fell below $1, as long as its capital and surplus exceeded $2 million and the market value of its public float exceeded $1 million. NASD's rationale for the minimum bid price requirement was that it provides a safeguard against certain market activity associated with low-priced securities. Nasdaq officials said the new quantitative listing and maintenance requirements would further protect investors and enhance the quality and credibility of the Nasdaq SmallCap Market. In addition to the quantitative requirements, NASD's 1997 rule change included a peer review requirement for the independent auditors of Nasdaq SmallCap listed companies. To meet this requirement, these companies must be audited by an independent auditor that has received or is enrolled in a peer review program that meets acceptable guidelines and is subject to oversight by an independent body. To qualify, such a peer review program must provide for an external peer review of an accounting firm's quality control system every 3 years. Nasdaq officials believe that this requirement will improve the quality and stability of Nasdaq companies. When the 1997 rule change was adopted, Nasdaq officials estimated that about 30 percent of companies listed on the SmallCap Market would no longer be eligible for continued listing. Companies already listed have 6 months (until February 22, 1998) to meet the new maintenance requirements; this period is intended to give them adequate time to complete appropriate corporate action to achieve full compliance. Tables 1 and 2 summarize and compare Nasdaq's quantitative SmallCap listing and maintenance requirements.
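These quantitative tests can be expressed compactly. The following Python sketch applies the net tangible assets definition and the bid-price alternative described above; the $4 million, $1, $2 million, and $1 million thresholds come from the rule changes, while the issuer figures and field names are hypothetical, and the actual rules include additional criteria, summarized in tables 1 and 2.

```python
# Sketch of the quantitative tests described above. Thresholds come from the
# report; the issuer data and field names are illustrative only.

from dataclasses import dataclass

@dataclass
class Issuer:
    total_assets: float         # $ millions
    total_liabilities: float    # $ millions
    goodwill: float             # $ millions
    bid_price: float            # $ per share
    capital_and_surplus: float  # $ millions
    public_float_value: float   # $ millions (market value of public float)

def net_tangible_assets(i: Issuer) -> float:
    # Per the 1997 rule: total assets minus total liabilities and goodwill.
    return i.total_assets - i.total_liabilities - i.goodwill

def meets_1997_asset_test(i: Issuer) -> bool:
    return net_tangible_assets(i) >= 4.0

def meets_bid_price_test(i: Issuer, rule_year: int) -> bool:
    if i.bid_price >= 1.0:
        return True
    # The 1991 rule allowed a sub-$1 bid price if capital and surplus
    # exceeded $2M and the public float's market value exceeded $1M;
    # the 1997 rule removed this alternative.
    if rule_year == 1991:
        return i.capital_and_surplus > 2.0 and i.public_float_value > 1.0
    return False

issuer = Issuer(total_assets=6.5, total_liabilities=1.5, goodwill=1.2,
                bid_price=0.75, capital_and_surplus=2.4, public_float_value=1.8)
print(f"{net_tangible_assets(issuer):.1f}")  # 3.8 -> fails the $4M 1997 test
print(meets_bid_price_test(issuer, 1991))    # True under the old alternative
print(meets_bid_price_test(issuer, 1997))    # False once the alternative is removed
```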
When SEC identifies deficiencies in the operations of SROs, improvements in these operations can occur only if the deficiencies are effectively resolved. Without ongoing, systematic follow-up, SEC cannot ensure that its recommendations to correct these deficiencies are implemented properly and in a timely manner. Before OCIE established new procedures, SEC depended primarily on subsequent inspections to follow up on its inspection recommendations. SEC provided us with information on the listing department inspections it had performed since 1986, and this information is presented in table 3. In its 1983 and 1986 inspections of Nasdaq's Listing Department, SEC had followed up on recommendations it had made in prior inspections, finding that the deficiencies had been corrected. The 1983 report refers to an inspection conducted in 1979 and concludes that Nasdaq complied with the recommendations made in the 1979 report; similarly, the 1986 report refers to the 1983 inspection and concludes that Nasdaq complied with the recommendations made in the 1983 report. By contrast, the 1997 report refers to the 1986 report and concludes that Nasdaq ignored the recommendations SEC had made 11 years earlier. SEC stated that failure by Nasdaq to enforce its listing and maintenance standards could have the effect of misleading investors, who are entitled to assume that Nasdaq-listed securities meet its published requirements. However, in 1986, Nasdaq's written response to SEC's inspection report disagreed with SEC's findings. Nasdaq cited alternative means to address one recommendation it declined to implement and stated that its practices regarding issuers making delinquent filings met the intent of SEC's second recommendation. Because SEC did not follow up on its 1986 recommendations until 1997, this disagreement went unresolved for 11 years, during which Nasdaq believed it had addressed the issues SEC raised. Such disagreements could be avoided if follow-up on recommendations were systematic rather than dependent solely on subsequent inspections. OCIE officials told us that the reason the Nasdaq Listing Department had not been inspected since 1986 was that SEC must inspect a wide range of exchange programs with limited resources, and SEC had no inspection cycle for listing departments until 1996. OCIE officials told us that they began to reevaluate the cycles and coverage of SEC's inspection program when OCIE was created in 1995. They said they made a number of changes to the program, including establishing inspection cycles for listing departments, and as additional resources became available, OCIE shortened its inspection cycles. In November 1996, the OCIE Director placed regional SRO listing departments on a regular 3-year cycle of inspections, and the American Stock Exchange, Nasdaq, and the New York Stock Exchange were to be inspected, at least in part, on a 2-year cycle. OCIE inspected equity listing programs at all the exchanges in 1997. An SEC document states that the length of time since the last visit and known problems are criteria to be considered when OCIE sets its inspection goals. OCIE officials also told us they have instituted a number of procedures in addition to subsequent inspections to ensure that SEC's recommendations are addressed. First, the recommendations are to be included in a report sent to the SRO, and the SRO is requested to respond in writing within 60 days, outlining the remedial actions it intends to take.
SEC is to ask the SRO to provide a specific timetable for the actions, and the SRO must send SEC written confirmation at the completion of each action. Second, in cases where the findings and recommendations are more significant, the SRO may be required to report the findings and intended remedial actions to its board of directors. Senior officials of the SRO may be required to meet with OCIE to discuss the report, and in the most egregious cases, OCIE may refer the matter to SEC's Enforcement Division. Third, OCIE is to analyze the written SRO responses to ensure that (1) each recommendation has been adequately addressed, (2) the results have been reported to senior management, and (3) any outstanding issues are being monitored. When OCIE makes a large number of recommendations, it is to prepare spreadsheets to monitor the progress of remedial actions. Fourth, OCIE may conduct a follow-up inspection that focuses on the remedial actions taken to ensure that the SRO has properly implemented OCIE's recommendations. OCIE staff are also to review the remedial actions during the next cyclical inspection of the program. OCIE officials told us that these procedures are intended to ensure that problems found during an inspection do not persist and that immediate remedial action is properly implemented. These new OCIE procedures should significantly improve recommendation follow-up. However, the procedures do not involve the Commissioners, the agency's highest authorities. Involving the Commissioners in following up on recommendations would provide them with information on the status of corrective actions that SEC staff deem significant and would provide an additional incentive for SROs to comply. One way to accomplish this would be for SEC staff to periodically report all open, significant recommendations to the Commission. Involving the Commission would be analogous to OCIE's policy of involving an SRO's board of directors when OCIE deems its findings significant enough to merit board involvement. As SEC staff determined that SROs had complied with recommendations, those recommendations could be closed. Although Comparator occasionally had problems complying with Nasdaq's listing and maintenance requirements, Nasdaq never granted Comparator any exceptions. However, SEC found that Comparator's continued listing was inappropriate because, among other deficiencies, Nasdaq failed to investigate the value of Comparator's assets. Nasdaq records show that Comparator never received an exception to either listing or maintenance requirements. Nasdaq officials provided us with copies of excerpts from Comparator's SEC filings for the period from June 1989 through March 1996, along with selected trading and market information for the same period. This information showed that Comparator complied with all initial listing requirements and, except as discussed below, with all maintenance requirements. Comparator was most recently listed on the Nasdaq SmallCap Market from February 28, 1990, through June 12, 1996. Nasdaq cited the company for late filings in 1991 and again in 1992; in both instances, Comparator corrected the deficiencies before the conclusion of the compliance procedures initiated by Nasdaq staff and received no exceptions at any time. Although Comparator's bid price was typically less than $1, Comparator complied with Nasdaq's $1 minimum bid price requirement by meeting the capital and surplus alternative, and under that option it maintained its listing.
However, in 1993 Comparator failed to meet the alternative's $2 million capital and surplus requirement; the company corrected this deficiency in its next public filing. In 1995, Comparator was not current in its annual listing fees but corrected this deficiency when notified by Nasdaq staff. In May 1996, Nasdaq staff notified the company that it was not current in its filings and asked Comparator for updated filings and payment of fees. Nasdaq officials informed us that at all other times Comparator's public filings demonstrated compliance with Nasdaq maintenance requirements and that the company received no exceptions at any time. After its 1997 inspection of Nasdaq's Listing Department, SEC criticized Nasdaq's handling of Comparator. SEC staff said Nasdaq's Listing Department should have looked more closely at Comparator's balance sheet, particularly its assets. SEC staff noted that in Comparator's 1994 annual report, more than 50 percent of its assets consisted of patents and licenses related to obscure technologies, which made it relatively easy for Comparator to inflate their stated values. Because Nasdaq failed to verify the value of Comparator's assets, SEC concluded that Comparator's continued listing was inappropriate. SEC recommended that when asset valuation is an issue, Nasdaq staff obtain additional information that would allow a Nasdaq analyst to verify, to the extent reasonably necessary, the validity and value of the assets. SEC also reported that Comparator had numerous problems that should have been tracked on a watch list system. In addition to the questionable assets just mentioned, SEC concluded that Comparator's termination of its corporate secretary for improperly issuing stock and stealing from the company, along with the 27 unsatisfied final judgments against the company, foreshadowed the noncompliance that ultimately led to Comparator's removal from the SmallCap Market on June 12, 1996. SEC recommended that Nasdaq institute a watch list tracking system to identify and monitor companies experiencing difficulties that might indicate future noncompliance. SEC stated that Comparator had issued press releases announcing (1) the acquisition of a company engaged in real estate development in China, (2) its entry into a contract to produce the world's first biometrically protected national identification card, and (3) the introduction of its new identification verification system. SEC found that none of the claims made in these press releases were true and recommended that Nasdaq require analysts to review Nasdaq companies' press releases. SEC also noted that on Comparator's 1993, 1994, and 1995 financial statements, the independent auditor's opinions expressed doubts about whether Comparator could continue as a "going concern," yet SEC found no indication of concern by Nasdaq. SEC recommended that Nasdaq revise its procedures to require companies receiving a going-concern opinion on their financial statements to file a business plan with Nasdaq demonstrating their ability to continue operating in compliance with Nasdaq's maintenance requirements. SEC's primary criticism was that Nasdaq did not adequately review companies for initial and continued listing, a condition SEC reported existed mainly because Nasdaq failed to devote sufficient resources to the Listing Department. SEC also noted other deficiencies in Nasdaq's Listing Department.
In one case, SEC noted that Nasdaq failed to follow up on, or refer for further investigation, possible securities law violations it discovered in its review process, and SEC recommended that this be corrected. SEC expressed concern that investors did not fully appreciate the difference between the National Market and the SmallCap Market and recommended that Nasdaq highlight the differences between companies trading in these two markets and the attendant risks of investing in either market. SEC criticized the organizational structure of the Listing Department, noting that the senior official in charge of the Department also had marketing responsibilities. SEC observed that Nasdaq had generally failed to enforce filing deadlines and recommended that such deadlines be enforced. SEC also observed that Nasdaq had difficulty producing files in a timely manner and recommended that Nasdaq review and revise its filing system. Finally, SEC noted that Nasdaq's Review Committee is dominated by members of the securities industry and that about 70 percent of its pool of hearing panel members are employed by market makers; SEC recommended that the Review Committee include strong nonindustry representation. SEC recognized that Nasdaq has taken significant steps to address several of its recommendations to improve the Listing Department and the SmallCap Market, and SEC believes that these developments reflect a commitment by Nasdaq to improving the SmallCap Market. Nasdaq officials disagreed with some of SEC's findings, but they generally recognized the merits of SEC's recommendations and stated their commitment to respond and to continue improving the quality of the SmallCap Market. They disagreed with SEC's findings that, as a general matter, Nasdaq staff reviews of company filings were cursory and that Nasdaq had failed to satisfy its regulatory responsibilities to preserve and strengthen the quality of, and public confidence in, the SmallCap Market. As previously discussed, Nasdaq officials also disagreed that they had ignored recommendations SEC made in its 1986 inspection report. Nasdaq officials noted that although the SmallCap Market represented only about 3 percent of the Nasdaq Stock Market's total market value, they devoted significant resources to that market. Nasdaq statistics indicate that in 1996 the Department reviewed 374 applications for listing on the SmallCap Market and denied 132 of them, about 35 percent. During the same period, the Department identified 972 deficiencies in 640 SmallCap companies; of the 640 companies receiving deficiency notices, 548, about 86 percent, achieved compliance. Nasdaq took several actions to address OCIE's criticism that Nasdaq failed to verify Comparator's assets or track its problems on a watch list. These actions also responded to OCIE's general criticism of inadequate review of filings due to insufficient resources. To complement its review procedures for listed companies, Nasdaq increased the staffing of its Listing Department by 11 positions, to 44; in the SmallCap Market area, Nasdaq increased its review staff by 80 percent, from five to nine. Nasdaq's new requirement that independent auditors of Nasdaq companies be subject to peer review is intended to provide a firmer basis for the reliance Nasdaq places on audited financial statements, including asset valuations. To identify and track high-risk companies, Nasdaq has developed an automated risk scoring system. This system was designed to identify companies with profiles that suggest the need for additional scrutiny, including scrutiny of asset valuations. These profiles are to be based on quantitative, qualitative, and trading attributes.
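The report describes the risk scoring system only in these general terms, so the following sketch is a purely hypothetical illustration of how such scoring might work. The attributes echo the problems noted in the Comparator case, but the weights, threshold, and scoring scheme are invented.

```python
# Hypothetical sketch of a risk scoring approach like the one described above.
# The attributes, weights, and threshold are invented for illustration; the
# report says only that Nasdaq's profiles draw on quantitative, qualitative,
# and trading attributes.

WEIGHTS = {
    "going_concern_opinion": 3,   # qualitative
    "intangibles_over_half": 2,   # quantitative: patents/licenses > 50% of assets
    "late_filings": 2,            # qualitative
    "unsatisfied_judgments": 3,   # qualitative
    "unusual_trading": 2,         # trading attribute
}
WATCH_LIST_THRESHOLD = 5  # invented cutoff for additional scrutiny

def risk_score(flags: dict) -> int:
    # Sum the weights of all attributes flagged as present.
    return sum(w for attr, w in WEIGHTS.items() if flags.get(attr))

# A Comparator-like profile would score well above the invented threshold:
flags = {"going_concern_opinion": True, "intangibles_over_half": True,
         "late_filings": True, "unsatisfied_judgments": True}
score = risk_score(flags)
print(score, score >= WATCH_LIST_THRESHOLD)  # 10 True
```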
Nasdaq also created a new special investigative unit of five experienced staff with financial and accounting expertise and subsequently increased the unit's authorized staffing to a total of seven positions. This unit is intended to complement the listing qualifications program and to allow Nasdaq to watch and track high-risk issuers with a more specialized focus; such issuers might include those whose management, large shareholders, consultants, or underwriters have a disciplinary history. From its inception in December 1996 through April 1997, the unit delisted five SmallCap issuers and investigated and closed two other matters. In response to OCIE's criticism that it failed to review Comparator's press releases, Nasdaq stated that the review of Nasdaq companies' press releases is the primary responsibility of Nasdaq's Market Watch staff. Nasdaq-listed companies are required to notify Market Watch of any significant information before its public release. Market Watch is to assess the information and, when appropriate, may implement temporary trading halts. Market Watch is also to notify the Listing Department and NASDR when there appears to be a pattern of misleading press releases; upon such notification, the Listing Department is to evaluate the press releases and follow up on any concerns it may have with the company. Nasdaq stated that a separate review of all press releases by its Listing Department is not warranted and would not be an appropriate allocation of resources. In response to OCIE's criticism that Nasdaq was unconcerned about Comparator's going-concern audit opinions, Nasdaq stated that it does not believe companies with going-concern opinions should in every instance be required to file a business plan in order to maintain their listing. Nasdaq took this position because such plans focus on uncertain projections of future performance. Nasdaq agrees that a going-concern opinion is a factor the Listing Department should always consider but believes that other factors, such as the proceeds from the sale of stock, may counterbalance the opinion. In late 1996, Nasdaq added going-concern audit opinions as a separate data element in its database of information about Nasdaq-listed companies. When Nasdaq staff review companies' filings, they are to note the presence of going-concern opinions, and those companies are to be watched and tracked more closely. Nasdaq also took actions that responded to OCIE's general criticisms of the Listing Department. Nasdaq implemented a worksheet to be completed when staff review listed companies' SEC filings. To improve its referrals process, Nasdaq adopted a policy that referrals to NASDR Enforcement, SEC, and other law enforcement agencies be made in writing. Nasdaq officials met with SEC staff to establish the parameters of the referral program, and as of November 21, 1997, Nasdaq staff had made three written referrals under the program. Regarding OCIE's recommendation that Nasdaq highlight the differences between companies that trade in the National Market and companies that trade in the SmallCap Market, Nasdaq agrees with SEC's general policy that investors should be provided greater information about the securities they are buying.
Nasdaq stated that it continues to make substantial investments in its public Internet Web site (Nasdaq.com), which includes a broad range of information for individual investors, such as current company and market information. The NASDR Web site (NASDR.com) also provides investors with a basic primer on how securities regulation works and how investors can avoid problems before they occur, as well as information on the steps investors can take if they encounter difficulty. In August 1996, NASD established the Office of Individual Investor Services to enhance investor education and outreach efforts and to establish a strong advocate for the individual investor within NASD. This office offers training on investment basics, provides guidance on working with a broker, publishes an investor newsletter, makes presentations, and provides information at investor forums. To separate the compliance responsibilities of the Listing Department from its marketing responsibilities for obtaining new listings, Nasdaq restructured its reporting lines so that the head of the Listing Department no longer reports to a Senior Vice President with direct marketing responsibilities; the Listing Department now reports to the Executive Vice President for Issuer, Investor, and International Services. To enhance its filing delinquency program, Nasdaq now provides its analysts with real-time access to periodic reports filed electronically with SEC. Nasdaq anticipates that this access will significantly reduce its delinquency discovery times and allow it to monitor listed companies' filing status on a daily basis. To produce files in a timely manner when they are needed or requested, Nasdaq converted its issuer files from paper copies to electronic media for public filings and to an optical storage and retrieval system for issuers' proprietary material. To diversify its Listing and Hearing Review Council, Nasdaq has agreed to change the council's makeup; in 1998, the council is to comprise 11 members, the majority of them nonindustry representatives. Nasdaq began designing and implementing these changes at different times after May 1996. Although SEC acknowledged that many of the changes Nasdaq made met the intent of SEC's recommendations, not all of the changes have been completely implemented, and others have not been in effect long enough to adequately assess their effectiveness. Further, SEC noted that the changes in Nasdaq's listing and maintenance requirements that it approved in August 1997 do not affect the need for Nasdaq to implement SEC's recommendations. When Nasdaq staff decide to deny listing or to delist a company, the company can request a hearing before a Nasdaq listing qualifications panel; on the basis of its review, the panel may determine that an exception is warranted. Nasdaq staff maintain a database to monitor information about all companies that go through its hearings process and use this information to gauge the day-to-day operations of that process. However, the Listing Department does not aggregate or analyze the information over time to assess the ultimate disposition of companies that request exceptions. As a result, Nasdaq is missing opportunities to measure the overall effectiveness of its operations. As described earlier, issuers must apply to be listed on the Nasdaq SmallCap Market.
If an application is denied, or if a listed company has fallen out of compliance with maintenance requirements, the company can request an exception to the denial or delisting decision made by Nasdaq staff. The exception must be requested in writing, and a fee must be paid. The Nasdaq Listing Qualifications Panel (NLQP), a two-member panel composed of securities industry and nonindustry professionals approved by the NASD Board of Governors, reviews denial and delisting decisions made by Nasdaq staff. NLQP makes a decision that is immediately actionable, but the decision is subject to review at the request of the company or of a member of the Nasdaq Listing and Hearing Review Committee (NLHRC). NLHRC is an 11-member standing committee appointed by the Nasdaq Board of Directors; it receives all decisions made by NLQP and can affirm, reverse, modify, or remand any decision it receives. Furthermore, all NLHRC decisions are provided to, and may be called for review by, the Nasdaq Board of Directors or the NASD Board of Governors. In addition to these levels of review, Nasdaq officials pointed out that NLHRC decisions, after Board consideration, can be appealed to SEC, and SEC may call any NLHRC decision for review. Nasdaq maintains a database that includes information about a company's deficiencies as well as the outcomes of hearings (whether a company is approved or denied initial listing, granted an exception, or delisted from Nasdaq). Nasdaq staff use this database to document the terms of any exceptions granted and the company's final disposition with respect to those terms. Nasdaq officials told us the Listing Department uses the information in its databases to generate a daily delinquency report listing all companies that are delinquent in their filings, as well as a weekly list of companies that do not comply with other maintenance requirements. According to Nasdaq officials, the databases that produce these reports will shortly be replaced by a new system that will consolidate in one database all information about a listed company, including its compliance record and a record of any deficiencies and exceptions granted. Currently, however, Nasdaq does not routinely use overall program statistics to evaluate and guide its Listing Department activities; for example, the Department produces no routine reports for senior management that present overall program statistics. By not routinely aggregating and analyzing overall program statistics over time, Nasdaq cannot demonstrate the effectiveness of its exceptions-granting policies. Key indicators of effectiveness, such as the outcomes for companies granted exceptions compared with the outcomes for companies denied them, measured against program goals, can help demonstrate the effectiveness of these policies. For example, Nasdaq officials provided statistics showing that Nasdaq received 1,147 listing applications for the SmallCap Market between May 1994 and June 1997. Of that number, 66 companies, or 5.8 percent, were listed with exceptions to listing requirements. During the same period, Nasdaq granted exceptions to maintenance requirements to 168 companies. On an annualized basis, the average number of companies granted an exception was 53, or 3.8 percent of the average number of companies (1,381) listed on the SmallCap Market at any given time. These numbers have little meaning without some context.
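For reference, these figures follow from simple arithmetic, reproduced in the sketch below; reading the May 1994 through June 1997 period as roughly 38 months is our assumption.

```python
# Sketch reproducing the exception statistics above. The counts come from the
# text; treating May 1994 through June 1997 as ~38 months is our assumption,
# made so the annualized figure matches the reported 53.

applications = 1147
listed_with_exceptions = 66
maintenance_exceptions = 168
avg_listed = 1381
months = 38

print(f"Listed with exceptions: {listed_with_exceptions / applications:.1%}")  # ~5.8%
annualized = maintenance_exceptions / (months / 12)
print(f"Annualized exceptions: {annualized:.0f} "                              # ~53
      f"({annualized / avg_listed:.1%} of listed companies)")                  # ~3.8%
```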
Collecting and analyzing the data over time, especially the outcomes for these companies (whether they remain on the SmallCap Market or list on another market), could provide Nasdaq a key indicator of the effectiveness of its exceptions granting process. Nasdaq officials also provided statistics for us that showed 562 companies dropped off the SmallCap Market from May 1, 1994, to May 30, 1997. Of that number, 409 were delisted as noncompliant, and 153 delisted voluntarily. On an annualized basis, the 562 companies that were no longer listed represent a turnover rate of about 182 companies, or 13.2 percent of the average number of companies (1,381) listed at any given time. Collected and analyzed over time, data on this turnover rate of companies listed on the Nasdaq SmallCap Market, including information on what happened to those companies, would provide Nasdaq, SEC, and investors a key indicator of the effectiveness of its listing and maintenance standards. Such data, when compared to program goals, can help demonstrate Listing Department results; identify performance gaps; and align activities, core processes, and resources. The experiences of leading organizations that use such information show that it can become a driving force in improving the effectiveness and efficiency of program operations. Although SEC inspected all SRO listing departments in 1997, during the preceding 11 years it had inspected these departments infrequently or not at all. Before 1995, frequent and regular inspections were SEC’s primary method of following up to ensure its recommendations were implemented. Our work at Nasdaq’s Listing Department showed that infrequent inspections and the lack of an effective recommendation follow-up system allowed deficiencies that SEC identified to remain uncorrected for long periods. OCIE’s action in 1996 to establish regular inspection cycles for SRO listing departments, if properly implemented, should help ensure that deficiencies in these departments do not remain uncorrected for long periods. More importantly, OCIE’s new procedures provide a systematic process to follow up on the recommendations it makes in all of its SRO inspections. Including SEC Commissioners, who have the authority to require SROs to comply with OCIE’s recommendations, in the process would provide an additional incentive for SROs to comply with OCIE recommendations. We share SEC’s concern that the deficiencies identified in Nasdaq’s Listing Department operations could have had the effect of misleading investors who are entitled to assume that the stocks listed on the Nasdaq SmallCap Stock Market meet the listing and maintenance requirements of that marketplace. The Listing Department has made changes in its operations that, if implemented correctly, should improve the SmallCap Market and enhance investor protection. Not all of these changes have been completely implemented, and others have not been in effect long enough to adequately assess their effectiveness. Our work also showed that Nasdaq’s Listing Department does not routinely use overall program statistics to evaluate and guide its activities. Aggregating and analyzing such information could help Nasdaq ensure that its programs are results oriented, its goals are clearly established, and its strategies for achieving those goals are appropriate and reasonable. Such information could also help SEC conduct better regulatory oversight of SRO listing programs. 
We recommend that the Chairman, SEC, require OCIE to periodically report the status of all open, significant recommendations to the Commissioners; and require NASD to develop management reports based on overall program statistics that demonstrate its Listing Department's operating results, such as the number of companies granted exceptions to listing and maintenance requirements along with their ultimate disposition, and to submit this data periodically to the Commissioners for review.

We requested comments on a draft of this report from the Chairman, SEC. On December 19, 1997, the Director, Office of Compliance Inspections and Examinations for SEC, provided written comments. These comments are reprinted in appendix I. SEC also provided technical comments, which we incorporated where appropriate. SEC agreed with the facts as stated in our report. It also agreed with our recommendation that OCIE periodically apprise the Commission of the status of all open, significant recommendations. Further, SEC stated that it intends to take steps to inform the Commission whenever an SRO submits a response to an SEC inspection report that indicates the SRO does not intend to take adequate corrective actions in response to SEC's recommendations.

We requested comments on a draft of this report from the Chairman, NASD. On December 19, 1997, the President of the Nasdaq Stock Market, Inc., provided written comments. These comments are reprinted in appendix II. Nasdaq Stock Market officials agreed with the conclusion reached in our report regarding the need for Nasdaq to make greater use of statistics to evaluate and guide its activities. They accepted our recommendations and stated they will provide senior management with statistical reports on the Listing Department's operations on a quarterly basis.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from its issue date. At that time, we will send copies of this report to the Majority and Minority Members of the House Commerce Committee and to other interested parties. We will also make copies available to others on request. Major contributors to this review are listed in appendix III. Please contact me at (202) 512-8678 if you or your staff have any questions.

The following is GAO's comment on the Nasdaq Stock Market, Inc.'s, December 19, 1997, letter. 1. We added text on pages 11 and 16-19 that indicates that (1) Nasdaq officials disagreed with SEC that they had ignored recommendations SEC made in its 1986 inspection report; and (2) Nasdaq officials, until 1997, believed they had addressed the issues raised in SEC's 1986 inspection.
Pursuant to a congressional request, GAO reviewed the National Association of Securities Dealers' (NASD) automated quotation (NASDAQ) SmallCap Stock Market and the events surrounding the listing of Comparator Systems Corporation stock. GAO noted that: (1) the Securities and Exchange Commission (SEC) has taken actions to meet its oversight responsibilities with respect to the NASDAQ Stock Market Listing Qualifications Department by approving two NASDAQ requests for rule changes to tighten listing standards in 1991 and 1997 and by inspecting the Department's operations in 1979, 1983, 1986, and 1997; (2) it did not follow up on its 1986 recommendations to improve Listing Department operations until 1997, 11 years later; (3) when it did follow up in 1997, SEC reported that some of the same deficiencies it had found in 1986 still existed, and it found additional deficiencies as well; (4) NASDAQ disagreed and stated that it had responded to SEC's 1986 inspection report and that for 11 years it believed it had addressed the issues SEC raised; (5) before the Office of Compliance Inspections and Examinations (OCIE) established new procedures, SEC used subsequent and follow-up inspections as its primary method for ensuring that its recommendations were implemented; (6) this did not provide systematic recommendation followup when constraints such as limited resources or changing priorities caused long periods of time between inspections, as occurred for the NASDAQ Listing Department; (7) OCIE has instituted a number of procedures to provide more systematic recommendation followup, but these procedures do not involve SEC's Commissioners, who have the authority to require self-regulatory organizations to comply with OCIE's recommendations; (8) the Listing Department followed its listing and maintenance requirements for Comparator and had never granted the company any exceptions to those requirements; (9) SEC criticized NASDAQ's handling of Comparator because the Department had failed to investigate assets that appeared questionable on the company's financial statements; (10) SEC subsequently proved that Comparator officials had inflated those assets to continue the company's NASDAQ listing and facilitate the sale of its stock; (11) SEC made several recommendations to improve NASDAQ's Listing Department operations, which NASDAQ has begun to implement; (12) since the May 1996 run-up in trading of Comparator, NASDAQ has improved its Listing Department operations in response to its own inquiry as well as SEC's; and (13) NASDAQ monitors individual company requests for exceptions to its listing and maintenance requirements through reviews and approvals by the NASDAQ and NASD boards of directors and through information maintained by Listing Department staff.
The federal government plays a role in overseeing Internet pharmacy activity to the extent that these entities engage in interstate commerce or violate federal laws. However, states have traditionally regulated the practice of pharmacy and the practice of medicine. State boards of pharmacy license pharmacists and pharmacies, and state medical boards license physicians and set standards to ensure appropriate care, including standards for writing prescriptions. By violating federal and state laws, rogue Internet pharmacies threaten the public health. No one federal agency is designated as the lead in combating rogue Internet pharmacy activity. Instead, several federal agencies, including the Food and Drug Administration (FDA), U.S. Customs and Border Protection (CBP), and U.S. Immigration and Customs Enforcement (ICE), have separate and distinct roles and often work together.

Under the Federal Food, Drug, and Cosmetic Act (FDCA), FDA is responsible for ensuring the safety, effectiveness, and quality of domestic and imported prescription drugs that are marketed to U.S. consumers. The FDCA requires that certain drugs be dispensed pursuant to a prescription that is issued by a licensed practitioner. The act also requires drug manufacturers to obtain FDA's approval before marketing their drugs in the United States. To obtain FDA's approval, manufacturers must demonstrate to the agency that their drug is safe and effective for its intended use, and meet other statutory and regulatory standards relating to drug purity, labeling, manufacturing, and packaging. Drugs that are manufactured in foreign countries for the U.S. market, including those sold over the Internet, are subject to the same requirements as those manufactured domestically. That is, all prescription drugs offered for import must meet the requirements of the FDCA, including requirements for obtaining FDA approval. Drugs that are unapproved, or do not meet other provisions of the FDCA, such as those listed below, may be subject to enforcement action.

Misbranded drugs include those that are sold without a prescription that meets applicable requirements, as well as those whose labeling or container is misleading or does not include required information, such as the name of the drug, adequate directions for use, and cautionary statements.

Adulterated drugs include those that differ in strength, quality, or purity from approved products, as well as those that are not manufactured in conformity with good manufacturing practices.

Counterfeit drugs include those sold under a product name without proper authorization—where the drug is mislabeled in a way that suggests that it is the authentic and approved product—as well as unauthorized generic versions of FDA-approved drugs that mimic trademarked elements of such drugs.

Drugs that do not appear to be in compliance with these provisions may be denied entry into the United States. In addition, those—including Internet pharmacy operators—that cause drugs to be misbranded, adulterated, or counterfeited, as well as those that sell such drugs, violate the FDCA and are subject to enforcement action. Counterfeiting and trafficking or selling counterfeit drugs also violate laws that protect intellectual property rights. The Department of Justice's (DOJ) Drug Enforcement Administration (DEA) is responsible for enforcing the Controlled Substances Act (CSA), which regulates the possession, manufacture, distribution, and dispensing of controlled substances, such as narcotic pain relievers.
DEA is also responsible for enforcing provisions and investigating violations of the Ryan Haight Online Pharmacy Consumer Protection Act of 2008, which amended the CSA to regulate the distribution and dispensing of controlled substances on the Internet. The act requires all entities that sell, or facilitate the sale of, controlled substances online to register and be authorized by DEA to do so. Entities based in foreign countries are not eligible for registration, and it is illegal for consumers to import controlled substances. The act also defines what constitutes a valid prescription for controlled substances, and requires that such a prescription be issued for controlled substances dispensed over the Internet.

CBP is responsible for enforcing laws prohibiting the illegal importation of goods into the United States, including prescription drugs that have not been approved by FDA for the U.S. market, as well as those that are counterfeit or misbranded. Additionally, the importation of prescription drugs by individuals for personal use is illegal, but FDA may exercise its regulatory discretion in determining whether to take enforcement action against such importation. CBP coordinates with FDA to conduct inspections of products presented for import at the border. CBP interdicts suspicious prescription drug shipments and turns them over to FDA for examination, and may seize and destroy certain shipments that are deemed to be in violation of applicable laws.

ICE is responsible for, among other things, investigating violations of customs and trade laws, including those related to trafficking in counterfeit goods. ICE also operates the National Intellectual Property Rights Coordination Center, the mission of which is to share information across 17 federal government agencies and four foreign regulatory agencies, coordinate enforcement actions, and conduct investigations related to intellectual property theft—including those that occur through rogue Internet pharmacies.

The U.S. Postal Inspection Service (USPIS) helps prevent the illegal importation of prescription drugs by providing CBP with information about suspicious mail packages entering the United States. USPIS also investigates issues related to the misuse of mail.

Other federal agencies are also sometimes involved in investigating rogue Internet pharmacy activity, to the extent that their jurisdiction relates to illicit activities conducted by these entities. The Internal Revenue Service (IRS) investigates instances of money laundering, which is the act of disguising or concealing illicit funds to make them appear legitimate. The Federal Trade Commission (FTC) may investigate rogue Internet pharmacies to the extent that their websites make false or misleading statements about how they collect and use medical information about consumers, which constitute violations of the Federal Trade Commission Act. In addition, FTC may investigate potential violations of the CAN-SPAM Act of 2003, which imposes limitations and penalties on the transmission of certain unsolicited commercial e-mail, such as messages with misleading information in the line identifying the sender. DOJ's Federal Bureau of Investigation may investigate rogue Internet pharmacies if their activities defraud health care benefit programs or present a clear public health or safety threat. DOJ prosecutes rogue Internet pharmacies through U.S. Attorneys' Offices located in 94 federal judicial districts throughout the nation, and through DOJ's Civil and Criminal Divisions, located in Washington, D.C.
U.S. Attorneys are the chief federal law enforcement officers for each federal district, and they serve as the nation's principal litigators under the direction of the Attorney General, working with officials from appropriate federal, state, local, and foreign agencies to prosecute rogue Internet pharmacy cases in their districts. DOJ's Civil and Criminal Divisions also prosecute such cases, coordinating closely with U.S. Attorneys, particularly in cases spanning multiple districts or international borders. DOJ's Civil Division has expertise in prosecuting cases involving FDCA violations, and DOJ's Criminal Division has expertise in prosecuting cases related to trafficking in counterfeit goods and offenses such as money laundering and fraud that are often integral to these criminal operations, as well as expertise in working with foreign law enforcement to obtain evidence or secure the extradition of defendants from other countries.

In the United States, prescription drugs must be prescribed and dispensed by licensed health care professionals, who can help ensure proper dosing and administration and provide patients with important information on the drug's use. To legally dispense a prescription drug, a pharmacist licensed by the state and working in a pharmacy licensed by the state must be presented with a valid prescription from a licensed health care professional. In addition, most states require pharmacies located outside their state to obtain a nonresident pharmacy permit prior to dispensing prescription drugs to customers located in that state. Some states regulate Internet pharmacies according to the same standards that apply to nonresident pharmacies. Others require pharmacies to obtain a special license in order to dispense prescription drugs ordered online. The regulation of the practice of pharmacy is rooted in state pharmacy practice acts and regulations enforced by state boards of pharmacy. The state boards of pharmacy also are responsible for routinely inspecting pharmacies, ensuring that pharmacists and pharmacies comply with applicable state and federal laws, and investigating and disciplining those that fail to comply.

States also are responsible for regulating the practice of medicine. All states require that physicians practicing in the state be licensed to do so. State medical practice laws generally outline standards for the practice of medicine and delegate the responsibility of regulating physicians to state medical boards. Each state's medical board also defines the elements of a valid patient-provider relationship, and grants prescribing privileges to physicians and other health care professionals. In addition, state medical boards investigate complaints and impose sanctions for violations of state medical practice laws. Because regulation of the practices of pharmacy and medicine occurs at the state level, definitions and other requirements related to these practices differ from state to state. As a result, there is no uniform, national definition of the term "prescription" that applies to noncontrolled substances. Thus, certain activities, such as prescribing drugs without performing an in-person examination, may be explicitly illegal in one state, while another state may not specifically address their legality. Organizations such as the National Association of Boards of Pharmacy (NABP) and the Federation of State Medical Boards (FSMB) have established and promoted uniform national standards related to Internet pharmacies for the consideration of state pharmacy and medical boards, as well as for consumers.
NABP established the Verified Internet Pharmacy Practice Sites (VIPPS) program to provide a means for the public to identify legitimate Internet pharmacies. This accreditation program identifies those online pharmacies that are appropriately licensed, are legitimately operating via the Internet, and have successfully completed a review and inspection by NABP. FSMB has developed model guidelines regarding the appropriate use of the Internet in medical practice. According to these guidelines, electronic technology should supplement and enhance, but not replace, the crucial interpersonal interactions that are the basis of the physician-patient relationship. These professional standards, however, are not legally enforceable in the absence of state laws establishing such requirements. The Center for Telehealth and e-Health Law (CTeL), an organization that works to overcome legal and regulatory barriers to telemedicine, issued guidance in February 2013 regarding how telemedicine, using two-way audio-video communications, can be used to establish a bona fide physician-patient relationship when prescribing noncontrolled substances. Specifically, the guidance notes that an appropriate examination of the patient by the practitioner must occur prior to the issuance of a prescription, and that audio-only telephone conversations and e-mails cannot be used as a basis for establishing a bona fide practitioner-patient relationship.

Rogue Internet pharmacies often sell unapproved prescription drugs, including drugs that are substandard or counterfeit, have no therapeutic value, or are harmful to consumers. These drugs may be manufactured under conditions that do not meet FDA standards, including unsanitary and unsterile conditions. The drugs sold by rogue Internet pharmacies have been found to contain too much, too little, or no active pharmaceutical ingredient, or the wrong active ingredient. They have also been found to contain dangerous contaminants, such as toxic yellow highway paint, heavy metals, and rat poison. Consumers who have taken prescription drugs purchased from rogue Internet pharmacies have experienced health problems, required emergency treatments, and have died. Because rogue Internet pharmacies sell prescription drugs without legitimate medical oversight, consumers may also be harmed by ingesting drugs that are contraindicated for them or that interact with other medications they are taking.

However, adverse events caused by prescription drugs purchased from rogue Internet pharmacies are difficult to detect and quantify. Consumers may purchase drugs from rogue Internet pharmacies because of privacy concerns or to circumvent normal processes for obtaining prescription drugs. As a result, they may be reluctant to report health problems that they experience. Further, it can be difficult to determine whether adverse events are caused by substandard drugs. The role played by drugs from rogue Internet pharmacies may even go unnoticed. For example, when consumers take drugs that have no therapeutic value to treat their diseases, they may not experience adverse events from the drugs themselves, but they derive no benefit. Persistent symptoms may be attributed to their diseases, as opposed to ineffective treatments.

Rogue Internet pharmacies violate a variety of federal and state laws. Many ship unapproved drugs into the United States and sell drugs to consumers without a prescription that meets federal and state requirements.
Rogue Internet pharmacies also violate other federal and state laws, such as those related to fraud and money laundering, in addition to not complying with industry standards. Although the exact number of rogue Internet pharmacies is unknown, most operate from abroad. According to LegitScript, an online pharmacy verification service that applies NABP standards to assess the legitimacy of Internet pharmacies, there were over 34,000 active rogue Internet pharmacies as of April 2013. Federal officials and other stakeholders we interviewed consistently told us that most rogue Internet pharmacies operate from abroad, and many have shipped drugs into the United States that are not approved by FDA. In doing so, they violate FDCA provisions that require FDA approval prior to marketing prescription drugs to U.S. consumers, as well as customs laws that prohibit the unlawful importation of goods, including unapproved drugs. The prescription drugs that rogue Internet pharmacies sell have included counterfeit, misbranded, and adulterated drugs. Certain rogue Internet pharmacies have also sold dietary supplements that contain prescription drug ingredients, in violation of the FDCA. In addition, some, particularly those abroad, have sold controlled substances to customers located in the United States. As no Internet pharmacies have been approved by DEA to dispense controlled substances to customers in the United States as of May 3, 2013, doing so violates the CSA. Example of a Foreign Rogue Internet Pharmacy Indicted for Illegally Selling Drugs to U.S. Customers In 2012, two operators of a rogue Internet pharmacy based abroad were indicted for allegedly shipping unapproved prescription drugs into the United States, in violation of the FDCA. The shipments allegedly included controlled substances, and the pharmacy owners were also charged with importing controlled substances without authorization, in violation of the CSA. According to DOJ’s indictment, some of the imported drugs were misbranded because the packages did not include adequate directions for the drugs, some of the drugs offered for sale were listed under a different name, and the company fulfilled orders without ensuring that customers had a prescription. The indictment also noted that the operators packaged drug shipments to evade scrutiny by customs officials. For example, drugs were allegedly wrapped with carbon paper and black plastic bags, and packages included false return addresses. To sell drugs to their U.S. customers, foreign rogue Internet pharmacies use sophisticated methods to evade scrutiny by customs officials and smuggle their drugs into the country. For example, they have used intermediary shippers to help disguise the actual source of their shipments, which, according to CBP officials, may increase the likelihood that the shipments get through customs unnoticed. FDA and ICE officials told us that rogue Internet pharmacies have also misdeclared the contents of packages sent via express courier services or cargo shipments, in violation of customs laws. Federal agencies use importation declarations to identify potentially illicit shipments for further examination; as such, misdeclaring the contents of such packages can result in illicit shipments evading additional scrutiny at the border. Further, rogue Internet pharmacies have disguised or hidden their drugs in various types of packaging; for example, CBP has found drugs in bottles of lotion and in tubes of toothpaste. 
Example of a Licensed Brick-and-Mortar Pharmacy Selling Misbranded Drugs

In 2011 and 2012, the owners of a U.S.-based brick-and-mortar pharmacy were convicted of several charges related to selling misbranded prescription drugs for rogue Internet pharmacies. The pharmacy was paid by multiple foreign rogue Internet pharmacies to fill prescriptions that did not meet state medical board requirements for a valid prescription and were sold and distributed in violation of the CSA. The rogue Internet pharmacies paid doctors or, in some cases, lay persons to review brief online medical questionnaires and authorize the orders. The pharmacy filled orders for drugs, including controlled substances, and shipped them to customers who were usually located in a different state than the pharmacy. Because the drugs were sold without a valid prescription, they were considered misbranded, in violation of the FDCA.

Rogue Internet pharmacies also often sell drugs to consumers without a prescription, or with a prescription that does not satisfy FDCA and state requirements. According to federal officials, they have done this by advertising that no prescription is necessary or by allowing consumers to purchase drugs after completing a brief online questionnaire that does not meet their state's requirements for a valid prescription. In some cases, rogue Internet pharmacies have ignored information from these questionnaires and have allowed consumers to make a purchase regardless of the information disclosed. These actions violate the FDCA requirement that certain drugs be dispensed only with a prescription that is written by a licensed practitioner. In addition, some rogue Internet pharmacies operating from abroad have recruited doctors and pharmacies based and licensed in the United States to fulfill online prescription drug orders in exchange for payment, according to officials from federal agencies and stakeholders. Often, they have targeted doctors and pharmacies that are struggling financially, and have compensated them according to the number of prescriptions they authorize and fill, respectively. In these circumstances, the doctors violate state laws or medical board regulations as well as industry standards, such as those issued by FSMB and CTeL, which require valid patient-provider relationships prior to the issuance of a prescription. Likewise, the pharmacists violate state laws or pharmacy board regulations by selling drugs without ensuring that there is a prescription that meets state requirements. Drugs sold in this manner are considered misbranded, and are subject to enforcement under the FDCA.

Rogue Internet pharmacies violate a variety of federal laws, including those related to fraud, money laundering, and intellectual property rights, according to officials from several federal agencies and stakeholders we interviewed. For example, rogue Internet pharmacies have engaged in mail fraud by using the mail to facilitate their illegal transactions. In addition, some rogue Internet pharmacies have engaged in money laundering. Specifically, to use the proceeds generated from rogue Internet pharmacies, operators have created a shell, or fake, company to disguise the nature of their business, or have misstated the nature of their business to banks that process their credit card transactions, according to stakeholders we interviewed.
To appear more legitimate to their consumers, rogue Internet pharmacies have also violated intellectual property laws by fraudulently displaying trademarks on their websites. For example, rogue Internet pharmacies have fraudulently displayed the VIPPS accreditation logo as well as the logos for payment processors such as Visa, MasterCard, or PayPal, without having obtained permission. Rogue Internet pharmacies have violated a range of other federal laws, such as those prohibiting false or misleading statements and other deceptive or unfair acts or practices. For example, rogue Internet pharmacies have violated the CAN-SPAM Act by sending e-mails that list false information in the subject line or otherwise hide the message's origin.

Rogue Internet pharmacies also violate state laws, including those related to operating without an appropriate license. Rogue Internet pharmacies have violated state laws by not obtaining pharmacy licenses from the states where their customers reside. In addition, licensed brick-and-mortar pharmacies recruited to fulfill prescription drug orders for rogue Internet pharmacies have violated state laws when they perform activities not authorized under their license or when they ship drugs to out-of-state customers. According to officials from state boards of pharmacy we interviewed, brick-and-mortar pharmacies have fulfilled online prescription drug orders for residents of other states without obtaining a nonresident pharmacy license in the states where those customers reside or without ensuring that the prescriptions are valid. When fulfilling such orders for out-of-state customers, brick-and-mortar pharmacies have also violated valid prescription requirements of the state where their customers live. For example, the California Board of Pharmacy identified an Internet pharmacy based in Utah that was violating California pharmacy laws, which require that prescription drugs be dispensed through the Internet only with a prescription issued after a good-faith medical examination from a physician licensed in the state. According to the California Board of Pharmacy, the Utah pharmacy was selling prescription drugs to Californians based on prescriptions that it knew or should have known were not based on good-faith medical exams and were written by physicians who were not licensed in California, in violation of California law. However, according to Utah Board of Pharmacy officials, the pharmacy was complying with Utah's laws, which allow certain licensed Internet pharmacies to dispense specified types of prescription drugs—such as certain erectile dysfunction drugs and hormone-based contraception—solely on the basis of an online questionnaire.

Rogue Internet pharmacies do not comply with industry standards for legitimate Internet pharmacies. For example, officials from federal agencies and stakeholders told us that rogue Internet pharmacies have not provided accurate or complete information to domain name registrars when registering a website and have not adequately protected customer privacy. In addition, rogue Internet pharmacies have not displayed identifying information on their websites, such as a business address and telephone number. Rogue Internet pharmacies are often complex, global operations, and as a result, federal agencies face substantial challenges investigating and prosecuting their operators.
Officials from federal agencies and stakeholders we interviewed told us that piecing together these operations can be difficult because rogue Internet pharmacies can be composed of thousands of related websites. Although a small number of individuals own the majority of rogue Internet pharmacies operating across the world, they may contract with hundreds or thousands of individuals to set up, run, and advertise their websites—primarily by sending out unsolicited spam e-mails. The ease with which operators can set up and take down websites also makes it difficult for agencies to identify, track, and monitor rogue websites and their activities, as websites can be created, modified, and deleted in a matter of minutes. Additionally, rogue Internet pharmacies frequently locate different components of their operations in different countries, further complicating efforts to unravel the entirety of a rogue Internet pharmacy operation. For example, one rogue Internet pharmacy registered its domain name in Russia, used website servers located in China and Brazil, processed payments through a bank in Azerbaijan, and shipped its prescription drugs from India. (See fig. 1.)

Federal agencies, states, and stakeholders have investigated and prosecuted operators, prevented illicit shipments of pharmaceuticals from entering the United States, and blocked rogue Internet pharmacies' ability to market and sell their products. Despite facing substantial challenges, several federal agencies—including FDA, ICE, and USPIS—have investigated and prosecuted rogue Internet pharmacy operators that have violated federal laws. (See fig. 2 for a screenshot of a rogue Internet pharmacy that FDA recently investigated, which led to a conviction in 2011.) Agencies have investigated rogue Internet pharmacies independently and conducted collaborative investigations with other federal agencies through ICE's National Intellectual Property Rights Coordination Center. In certain instances, agencies have collaborated with international law enforcement agencies. Agency investigations have resulted in the conviction of operators, fines, and asset seizures. Specifically, according to agency officials, from fiscal years 2010 through 2012, FDA opened 227 rogue Internet pharmacy investigations, which led to the conviction of 219 individuals and more than $76 million in fines and restitution; ICE initiated 138 investigations, which led to 56 convictions and the seizure of nearly $7 million; USPIS worked on 392 investigations and arrested 560 individuals; IRS conducted 22 investigations, which led to the conviction of 5 individuals; and DEA conducted 49 investigations into rogue Internet pharmacies and seized more than $1 million.

In addition to investigating rogue Internet pharmacy operators, federal agencies have investigated companies for providing services to rogue Internet pharmacies. In 2011, a DOJ and FDA investigation led to a settlement under which Google agreed to forfeit $500 million for allowing certain rogue Internet pharmacies to place sponsored advertisements in its search engine results from calendar years 2003 through 2009. In March 2013, a DOJ investigation led to a settlement under which UPS agreed to forfeit $40 million for transporting and distributing prescription drugs, including controlled substances, from certain rogue Internet pharmacies to U.S. consumers from calendar years 2003 through 2010.
As part of their settlements with DOJ, both companies noted that they will stop serving rogue Internet pharmacies and create compliance programs to identify rogue actors, in exchange for not being prosecuted by the U.S. government for crimes related to this activity. In addition, according to FedEx documents, DOJ is investigating the company for potentially violating federal law by shipping prescription drugs from Internet pharmacies. Federal agencies have also taken steps to shut down rogue Internet pharmacy websites. For example, FDA and other federal agencies have participated in Operation Pangea, an annual worldwide, week-long initiative in which regulatory and law enforcement agencies from around the world work together to combat rogue Internet pharmacies. In 2013, FDA took action against 1,677 rogue Internet pharmacy websites during Operation Pangea. In 2012, as part of the Operation, FDA informed domain name registrars that over 4,100 rogue Internet pharmacy websites were illegally selling prescription drugs online, in violation of the registrars’ terms of service with their customers. The agency informed the registrars of these violations in order to encourage them to shut down these violative websites. FDA officials told us that the effect of such shutdowns is primarily disruptive since rogue Internet pharmacies often reopen after their websites get shut down; officials from federal agencies and stakeholders we spoke with likened shutting down websites to taking a “whack-a-mole” approach. One stakeholder noted that rogue Internet pharmacies own and keep domain names in reserve so that they can redirect traffic to new websites and maintain operations if any of their websites get shut down. Rogue Internet pharmacies may also find new registrars to host their websites—figure 3 provides an example of a rogue Internet pharmacy website that was shut down during Pangea but that continued operations by switching to another domain name registrar. FDA has also issued warning letters to rogue Internet pharmacies to notify them that they are engaged in potentially illegal activity and direct them to cease their illegal activity. From calendar years 2009 through 2012, FDA reported issuing about 30 warning letters to rogue Internet pharmacies. According to FDA officials, rogue Internet pharmacies often ignore the letters and continue with their illicit activity. However, in some cases, FDA’s warning letters have led to the removal of potentially dangerous products from certain websites. FDA officials told us that they remain committed to combating rogue Internet pharmacies and in April 2013 they formed a new Cyber Crimes Investigation Unit that is devoted to this cause. Federal agencies responsible for preventing illegal prescription drug imports have also interdicted rogue Internet pharmacy shipments. CBP coordinates with FDA to inspect and seize illicit mail, express courier, and cargo shipments of prescription drugs presented for import at the border on a daily basis. CBP also leads Operation Safeguard, a multiagency initiative to target illicit imports of prescription drugs. Once a month, CBP, along with FDA and ICE, targets a specific international mail or express courier facility and, according to agency officials, conducts extensive examinations and seizures of illicit prescription drug shipments for 3 days. 
In total, from fiscal years 2010 through 2012, FDA reported examining nearly 45,000 shipments and CBP reported seizing more than 14,000 illicit shipments of prescription drugs, with mail shipments constituting the majority of the shipments that were seized. In addition to seizures of shipments presented for import, according to USPIS, the agency seized more than 800 illicit shipments of controlled substances in the domestic mail system during fiscal year 2012. Despite these efforts, FDA officials told us that the sheer volume of inbound international mail shipments—which total nearly 1.2 million pieces every day, according to USPIS—makes it difficult to interdict all illicit prescription drug imports.

Other federal agencies have also taken steps to combat rogue Internet pharmacies by sponsoring research and engaging stakeholders. The National Science Foundation (NSF) has provided grants to researchers who have examined rogue Internet pharmacy operations and developed strategies for combating them. For example, researchers found that rogue Internet pharmacies may be vulnerable to efforts to limit their ability to process online payments, and these findings have been used to encourage companies to limit services to rogue Internet pharmacies.

According to DEA officials, illicit pain clinics—brick-and-mortar operations where customers can obtain prescriptions for controlled substances without a legitimate medical need—have since emerged as the primary source of controlled substance diversion, a shift the officials cited in support of the agency's conclusion that domestic and foreign rogue Internet pharmacies are generally not selling controlled substances. Furthermore, agency officials told us that they do not track data that could demonstrate a reduction in the sale of controlled substances online because, in their view, there is no reason to do so. DEA officials explained their rationale by saying that the agency does not collect data on threats that do not exist. However, DEA's 2011 assessment of Internet pharmacies that advertised the sale of controlled substances revealed that 40 percent were selling such substances. DEA has not gathered additional data to demonstrate the extent to which controlled substances are being diverted over the Internet.

States face challenges investigating rogue Internet pharmacies and have played a limited role in combating them. Given that most rogue Internet pharmacies operate from abroad, stakeholders including NABP, the National Alliance for Model State Drug Laws, and the National Association of Attorneys General, as well as officials from several state attorneys general offices, told us that states do not have the authority, ability, or resources to investigate and prosecute them. These stakeholders told us that, as a result, states generally have not investigated rogue Internet pharmacies for violating their laws. In addition, officials from the five state boards of pharmacy we interviewed also told us that they do not proactively investigate unlicensed pharmacy activity, and most of the boards view the enforcement of unlicensed pharmacy activity as the responsibility of state law enforcement agencies, rather than themselves. Accordingly, they have not actively sought to identify or investigate rogue Internet pharmacies—either in-state or out-of-state—that sell prescription drugs to customers within their state, though they may look into rogue Internet pharmacies if they receive complaints. Further, board officials told us that they face challenges enforcing laws outside of their own states.
When rogue Internet pharmacies located in other states violate their state laws, board officials contact officials of state boards where the pharmacies are located, and it is then the responsibility of the contacted boards to take any appropriate investigative or enforcement actions. The boards may also send cease-and-desist letters or attempt to fine out-of-state pharmacies that violate their states' laws. However, the boards have no ability to ensure compliance with enforcement actions against pharmacies outside of their state that are not licensed in their state. State boards of pharmacy focus on regulating licensed brick-and-mortar pharmacies located within their state. In regulating licensed pharmacies, officials from each of the five states told us that they have taken enforcement actions against licensed pharmacies in their states for fulfilling orders on behalf of rogue Internet pharmacies or illicitly selling prescription drugs over the Internet. For example, in 2010, the Nevada Board of Pharmacy revoked the license of a pharmacist for illegally shipping controlled substances to an out-of-state customer who placed an order through a rogue Internet pharmacy. In addition to actions taken by state pharmacy boards, state medical boards have also taken enforcement actions against physicians involved in illicitly writing prescriptions for rogue Internet pharmacies, according to FSMB officials.

Stakeholders that provide services to Internet-based businesses have blocked rogue Internet pharmacies' ability to market and sell their products. These stakeholders have taken such actions on the basis of information that they learn about and share through various associations. The Center for Safe Internet Pharmacies (CSIP) has helped member companies that provide services to Internet businesses—such as Internet registrars, search engines, and payment processors—share information about rogue Internet pharmacies, and encourages its members to block services to them. CSIP contracts with a third-party company that proactively searches the Internet to identify rogue Internet pharmacies and disseminates this information to its members. In addition, CSIP gathers information about rogue Internet pharmacies from member companies, as well as from other outside sources such as federal agencies. According to CSIP, from November 1, 2011, through December 1, 2012, its members took more than 3 million actions against rogue Internet pharmacies. For example, Internet registrars shut down rogue Internet pharmacy websites, search engines prevented them from placing advertisements, and credit card companies prevented payments from being processed.

The International AntiCounterfeiting Coalition (IACC) has also taken action to combat rogue Internet pharmacies by, among other things, working with credit card companies to discourage banks from processing payments for rogue Internet pharmacies. IACC officials told us that they collect information on websites that market counterfeit and otherwise illegal products from trademark and copyright holders, including four brand-name prescription drug manufacturers. IACC then provides this information to credit card companies so that they can take action against banks that process payments for these rogue Internet pharmacies. (See fig. 4.) Under their terms of service, credit card companies can fine or take other enforcement actions against banks that process payments for merchants involved in illegal activities.
In addition to private efforts to block services to rogue Internet pharmacies, drug manufacturers maintain surveillance programs to identify and investigate the marketing of counterfeit versions of their brand-name prescription drugs and share their findings with federal agencies. In doing so, they monitor web activity to identify rogue Internet pharmacies, and employ investigators to gather evidence against rogue operators. On the basis of their investigations, these manufacturers provide federal agencies, such as ICE and FDA, with investigative leads and information that may support existing investigations. Manufacturers have also provided CBP with information to better target illicit drug imports and with brochures to help CBP officials differentiate between legitimate and counterfeit prescription drugs. Several stakeholders also help facilitate information sharing between drug manufacturers and federal agencies on rogue Internet pharmacies. The Pharmaceutical Security Institute, an association of 26 drug manufacturers focused on sharing information related to counterfeit prescription drugs, collects and analyzes surveillance information from its members and, according to an official from the institute, helps them share information with federal agencies about the illicit marketing of counterfeit prescription drugs by rogue Internet pharmacies. The National Cyber-Forensics & Training Alliance—an organization that facilitates public-private information sharing on cybercrime—also works with drug manufacturers to share information with federal agencies. Officials said that the alliance collects information from the manufacturers and performs additional intelligence gathering to provide agencies with actionable investigative leads, such as the identities and locations of operators.

FDA and stakeholders have taken steps to educate consumers about the dangers of buying prescription drugs from rogue Internet pharmacies and how to identify legitimate ones; however, these efforts face challenges. In September 2012, FDA launched a national campaign called "BeSafeRx: Know Your Online Pharmacy" to raise public awareness and educate consumers about the risks associated with purchasing prescription drugs on the Internet. The campaign provides information about the dangers of purchasing drugs from rogue Internet pharmacies, the signs of rogue Internet pharmacies, and how to find safe Internet pharmacies. FDA officials told us that the agency plans to direct the same messages to health care professionals and assess the campaign's effectiveness in the future. Some federal agencies and stakeholders have also taken steps to educate consumers about the risks of purchasing prescription drugs online and provide tools to help consumers identify legitimate and rogue Internet pharmacies. For example, CBP, DEA, and FTC post information on their websites regarding the dangers of purchasing drugs online. NABP publicly releases the results of its review of Internet pharmacies quarterly; the most recent release showed that 97 percent of the over 10,000 Internet pharmacies it reviewed were out of compliance with federal or state laws or industry standards. NABP also warns consumers not to buy from websites that are on its publicly available list of rogue Internet pharmacies, and posts information on its website to educate consumers about how to safely buy medicine online.
The association directs consumers to purchase medicines from legitimate Internet pharmacies that it has accredited through its VIPPS program; as of May 1, 2013, NABP's website listed 32 VIPPS-accredited Internet pharmacies. To assist consumers in more readily identifying legitimate online pharmacies, NABP also plans to launch a new top-level domain name called .pharmacy by the end of 2013. The association intends to grant this domain name to appropriately licensed, legitimate Internet pharmacies operating in compliance with regulatory standards—including pharmacy licensure, drug authenticity, and prescription requirements—in every jurisdiction in which the pharmacy does business. LegitScript also helps consumers differentiate between legitimate and rogue Internet pharmacies. It regularly scans the Internet and, using NABP's standards, classifies Internet pharmacies into one of four categories: (1) legitimate, (2) not recommended, (3) rogue, or (4) pending review. When visiting LegitScript's publicly available website, consumers can enter the website address of any Internet pharmacy and immediately find LegitScript's classification. As of May 1, 2013, LegitScript had classified 259 Internet pharmacies as legitimate and therefore safe for U.S. consumers, on the basis of NABP standards.

Despite the actions of agencies and stakeholders, consumer education efforts face many challenges. Many rogue Internet pharmacies use sophisticated marketing methods to appear professional and legitimate, making it challenging for even well-informed consumers and health care professionals to differentiate between legal and illegal Internet pharmacies. For example, some rogue Internet pharmacies advertise that customers need a prescription in order to purchase drugs, but allow customers to meet this requirement by completing an online questionnaire at the time of sale. Other Internet pharmacies may fraudulently display a VIPPS accreditation logo on their website, despite not having earned the accreditation, or may fraudulently display Visa, MasterCard, PayPal, or other logos on their website despite not holding active accounts with these companies or being able to process such payments. Figure 5 displays a screenshot of a rogue Internet pharmacy website that may appear to be legitimate to consumers, but whose operators pled guilty to multiple federal offenses, including smuggling counterfeit and misbranded drugs into the United States.

Some rogue Internet pharmacies seek to assure consumers of the safety of their drugs by purporting to be "Canadian." Canadian pharmacies have come to be perceived as a safe and economical alternative to pharmacies in the United States. Over the last 10 years, several local governments and consumer organizations have organized bus trips to Canada so that U.S. residents can purchase prescription drugs at Canadian brick-and-mortar pharmacies at prices lower than those in the United States. More recently, some state and local governments implemented programs that provided residents or employees and retirees with access to prescription drugs from Canadian Internet pharmacies. Despite FDA warnings to consumers that the agency could not ensure the safety of drugs not approved for sale in the United States that are purchased from other countries, the prevalence of such programs may have contributed to a perception among U.S. consumers that they can readily save money and obtain safe prescription drugs by purchasing them from Canada.
Many rogue Internet pharmacies seek to take advantage of this perception by purporting to be located in Canada or to sell drugs manufactured or approved for sale in Canada, when they are actually located elsewhere or are selling drugs sourced from other countries. Educational efforts also need to overcome issues related to consumer demand for these drugs. Many consumers mistakenly believe that if a drug may be prescribed for medical use, it is safe to consume regardless of whether they have a prescription for that particular drug. In addition, other pressures, including consumers' desire to self-medicate, their wish for privacy related to obtaining lifestyle medications (such as drugs for sexual dysfunction), and relatively high out-of-pocket costs for brand-name drugs, may fuel a demand among consumers to purchase prescription drugs from rogue Internet pharmacies. While educational efforts attempt to overcome these challenges, their success thus far is unknown—in part because the volume of drugs purchased from rogue Internet pharmacies is unknown, making it difficult to assess whether educational efforts have been effective at reducing such purchases.

We provided a draft of this report for comment to the Department of Health and Human Services (HHS), DOJ, and the Department of Homeland Security (DHS), and we provided excerpts of this report for comment to USPIS and NSF. We received technical comments from HHS, DOJ, and DHS, which we incorporated as appropriate. We are sending copies of this report to the Department of Commerce, the Department of Health and Human Services, the Department of Homeland Security, the Department of Justice, the Department of State, the Federal Trade Commission, the Internal Revenue Service, the National Science Foundation, the Office of Management and Budget, and the United States Postal Inspection Service, as well as other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix III.

2. Alliance for Safe Online Pharmacies
3. American Medical Association
5. Center for Safe Internet Pharmacies
6. Center for Telehealth and e-Health Law
7. Eli Lilly and Company
8. Federation of State Medical Boards
10. Generic Pharmaceutical Association
12. Google Inc.
14. Internet Crime Complaint Center
16. MasterCard International, Incorporated
17. Merck & Co., Inc.
19. National Alliance for Model State Drug Laws
20. National Association of Attorneys General
21. National Association of Boards of Pharmacy
22. National Association of Chain Drug Stores
23. National Community Pharmacists Association
24. National Cyber-Forensics & Training Alliance
25. National Science Foundation grant recipient Damon McCoy, Assistant Professor, George Mason University Computer Science Department
26. Partnership for Safe Medicines
29. Pharmaceutical Security Institute
30. Pharmaceutical Research and Manufacturers of America
31. Purdue Pharma L.P.
32. Takeda Pharmaceuticals U.S.A., Inc.
34. Visa, Inc.
35. WellPoint, Inc.

Members of Congress have sponsored bills, and other stakeholders have endorsed proposals, to enhance both regulators' ability to combat rogue Internet pharmacies and the public's ability to distinguish rogue Internet pharmacies from legitimate ones.
This appendix provides a brief synopsis of federal legislation introduced in the 112th Congress, which ran from January 2011 to January 2013, and the 113th Congress, from January 2013 through June 2013, as well as proposals from stakeholders we interviewed. While some stakeholders broadly supported these proposals, others noted that because most rogue Internet pharmacies are operated from overseas, additional federal laws and authorities, such as those noted below, would have a limited effect on their ability to combat rogue Internet pharmacies.

Creating a Federal Definition of a Valid Prescription. Some Members of Congress and other stakeholders have proposed creating a federal definition of a valid prescription that applies to all prescription drugs. Currently, the only federal definition of a valid prescription applies solely to prescriptions for controlled substances. Although the Federal Food, Drug, and Cosmetic Act (FDCA) requires certain drugs to be dispensed upon a prescription of a licensed practitioner, it does not define how this requirement is to be met. Instead, each state's pharmacy and medical practice acts define what constitutes a valid prescription. As such, when federal prosecutors pursue charges against operators of rogue Internet pharmacies that sell drugs without prescriptions that meet the FDCA's prescription requirement, they must research the laws of each relevant state to determine which ones apply to their case. Proponents of a federal definition contend that such a definition would make it easier and less resource-intensive for federal and state investigators and prosecutors to gather evidence and build a case against rogue Internet pharmacy operators who sell drugs without valid prescriptions. Some contend, however, that such a definition would be of limited value. They note that, because rogue Internet pharmacy operations have increasingly moved components of their business abroad, they are beyond the boundaries of where such a law could be readily enforced. Additionally, those interested in promoting telemedicine have raised concerns that these proposals have too narrowly defined the circumstances for which prescriptions could be issued on the basis of legitimate medical examinations conducted via telemedicine.

Developing a Comprehensive List of Legitimate Internet Pharmacies. Members of Congress have introduced legislation that would have required the establishment of a comprehensive list of legitimate Internet pharmacies. Other stakeholders, such as the Alliance for Safe Online Pharmacies, have also supported this proposal. Members of Congress and stakeholders have proposed that the Food and Drug Administration (FDA) would be responsible for establishing and maintaining the list. Proponents of a comprehensive list contend that it would help consumers, stakeholders, and federal and state agencies distinguish between legitimate and rogue Internet pharmacies. Although the National Association of Boards of Pharmacy (NABP) and LegitScript have tools on their websites that enable consumers to identify legitimate Internet pharmacies, some maintain that FDA management of the list is critical, and could help to inspire public confidence in the list. Others stated that a list created by a third party would be helpful, as long as it is endorsed by FDA. However, FDA officials and other stakeholders have raised concerns about the agency's ability to maintain such a list, given the large volume of new Internet pharmacies launched and modified every day.
Additionally, FDA does not regulate the practice of pharmacy, which has long been regulated by the states. Finally, according to officials we interviewed from two stakeholders that provide services to Internet businesses, such a list is not necessary because their companies' policies and procedures allow them to immediately suspend customer accounts once they become aware that such customers are violating those policies and procedures.

Establishing a Safe Harbor for Companies That Provide Services to Internet-Based Businesses. Members of Congress have introduced legislation to provide legal immunity to companies—such as Internet registrars, search engines, and credit card processors—that ceased or refused to provide services to rogue Internet pharmacies when acting in good faith. Proponents state that protection from liability would encourage companies to block services to rogue Internet pharmacies. In addition, some told us that such immunity would allow them to more readily take action against suspected rogue Internet pharmacies. However, others doubted the necessity of such legislation. Officials from two companies we interviewed explained that their companies already have the right to refuse service to rogue Internet pharmacies and do not open themselves up to liability by doing so.

Granting FDA New Subpoena and Seizure Authorities. Members of Congress have introduced legislation to grant new subpoena and seizure authorities to FDA. Agency officials stated that these authorities would enable them to more rapidly investigate and take action against rogue Internet pharmacies. Subpoena authority proposed under this legislation would have enabled FDA to compel the attendance and testimony of witnesses and the production of records and other items for the purposes of any hearing, investigation, or other proceeding related to a suspected FDCA violation. Further, seizure authority proposed under this legislation would have provided FDA with the authority to take noncompliant drugs out of the supply chain. At present, FDA must obtain approval from the Department of Justice (DOJ) in order to issue a subpoena or to seize goods. According to FDA officials, the DOJ approval process can delay investigations and enforcement actions. Such delays may lead to the distribution of noncompliant drugs further into the supply chain and may make such products more difficult to locate and seize.

Adopting a Track-and-Trace System for the Prescription Drug Supply Chain. Members of Congress have introduced, and stakeholders such as the Intellectual Property Enforcement Coordinator have supported, legislation that would require FDA to implement a system to track, trace, and verify prescription drugs throughout the drug supply chain. In 2007, Congress required FDA to develop standards that would apply to such a system, as well as to develop a standardized numerical identifier that could be applied to prescription drugs during manufacturing and repackaging. In response, FDA issued guidance for industry and hosted a public workshop on the topic. However, a nationwide track-and-trace system has not yet been implemented. Supporters of a nationwide track-and-trace system contend that it would enable federal agencies to more readily identify counterfeit or adulterated prescription drugs, as well as reduce the potential for counterfeit drugs to enter the supply chain, including through Internet pharmacies.
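Neither the legislation nor the report prescribes a particular technical design for such a system. Purely as an illustration, the following minimal sketch (all names and fields are hypothetical, not drawn from any FDA standard) shows how a standardized serialized identifier could support chain-of-custody verification, so that a package injected mid-chain fails the check:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustodyEvent:
    """One hypothetical handoff record for a serialized drug package."""
    serial_id: str    # standardized numerical identifier applied at manufacture
    holder: str       # party now in possession (manufacturer, wholesaler, pharmacy)
    prev_holder: str  # party that shipped the package ("" for the manufacturer)

def verify_chain(events: list[CustodyEvent]) -> bool:
    """Check that every record carries the same serial number and that each
    handoff names the prior holder; any altered or inserted link fails."""
    if not events or events[0].prev_holder != "":
        return False
    serial = events[0].serial_id
    for prev, cur in zip(events, events[1:]):
        if cur.serial_id != serial or cur.prev_holder != prev.holder:
            return False
    return True

chain = [
    CustodyEvent("NDC-0001-42", "Acme Manufacturing", ""),
    CustodyEvent("NDC-0001-42", "Midwest Wholesale", "Acme Manufacturing"),
    CustodyEvent("NDC-0001-42", "Main Street Pharmacy", "Midwest Wholesale"),
]
print(verify_chain(chain))  # True for an intact chain; False if any link is altered
```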
FDA officials told us that while a track-and-trace system would benefit multiple stakeholders, it would likely not directly affect the operations of rogue Internet pharmacies because such enterprises sell counterfeit and adulterated drugs directly to consumers, which is not a distribution method that would be covered by a track-and-trace system.

Marcia Crosse, (202) 512-7114, [email protected]. In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Michael Erhardt; Cathleen Hamann; Jason Kelly; Lisa Motley; Patricia Roy; and Lillian Shields made key contributions to this report.

Prescription Drug Control: DEA Has Enhanced Efforts to Combat Diversion, but Could Better Assess and Report Program Results. GAO-11-744. Washington, D.C.: August 26, 2011.
Intellectual Property: Observations on Efforts to Quantify the Economic Effects of Counterfeit and Pirated Goods. GAO-10-423. Washington, D.C.: April 12, 2010.
Cybercrime: Public and Private Entities Face Challenges in Addressing Cyber Threats. GAO-07-705. Washington, D.C.: June 22, 2007.
Intellectual Property: Better Data Analysis and Integration Could Help U.S. Customs and Border Protection Improve Border Enforcement Efforts. GAO-07-735. Washington, D.C.: April 26, 2007.
Internet Management: Prevalence of False Contact Information for Registered Domain Names. GAO-06-165. Washington, D.C.: November 4, 2005.
Anabolic Steroids Are Easily Purchased Without a Prescription and Present Significant Challenges to Law Enforcement Officials. GAO-06-243R. Washington, D.C.: November 3, 2005.
Prescription Drugs: Strategic Framework Would Promote Accountability and Enhance Efforts to Enforce the Prohibitions on Personal Importation. GAO-05-372. Washington, D.C.: September 8, 2005.
Internet Pharmacies: Some Pose Safety Risks for Consumers. GAO-04-820. Washington, D.C.: June 17, 2004.
Internet Pharmacies: Adding Disclosure Requirements Would Aid State and Federal Oversight. GAO-01-69. Washington, D.C.: October 19, 2000.
The Internet offers consumers a convenient method for purchasing drugs that is sometimes cheaper than buying from traditional brick-and-mortar pharmacies. According to a recent FDA survey, nearly 1 in 4 adult U.S. Internet consumers have purchased prescription drugs online. However, many Internet pharmacies are fraudulent enterprises that offer prescription drugs without a prescription and are not appropriately licensed. These rogue Internet pharmacies may sell drugs that are expired, improperly labeled, or counterfeits of other drugs. A number of federal and state agencies share responsibility for administering and enforcing laws related to Internet pharmacies, including state boards of pharmacy, FDA, DOJ, CBP, and ICE. The Food and Drug Administration Safety and Innovation Act directed GAO to report on problems with Internet pharmacies. This report identifies (1) how rogue sites violate federal and state laws, (2) challenges federal agencies face in investigating and prosecuting operators, (3) efforts to combat rogue Internet pharmacies, and (4) efforts to educate consumers about the risks of purchasing prescription drugs online. To conduct this work, GAO interviewed officials from FDA, DOJ, CBP, ICE, and other federal agencies, reviewed federal laws and regulations, and examined agency data and documents. GAO also interviewed officials from five state boards of pharmacy with varied approaches to regulating Internet pharmacies, and stakeholders including NABP, drug manufacturers, and companies that provide services to Internet businesses, such as payment processors.

Rogue Internet pharmacies violate a variety of federal and state laws. Most operate from abroad, and many illegally ship into the United States prescription drugs that have not been approved by the Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS) that is responsible for ensuring the safety and effectiveness of prescription drugs. Many also illegally sell prescription drugs without a prescription that meets federal and state requirements. Rogue sites also often violate other laws, including those related to fraud, money laundering, and intellectual property rights. Rogue Internet pharmacies are often complex, global operations, and federal agencies face substantial challenges investigating and prosecuting those involved. According to federal agency officials, piecing together rogue Internet pharmacy operations can be difficult because they may be composed of thousands of related websites, and operators take steps to disguise their identities. Officials also face challenges investigating and prosecuting operators because they are often located abroad. The Department of Justice (DOJ) may not prosecute such cases due to competing priorities, the complexity of these operations, and challenges related to bringing charges under some federal laws. Despite these challenges, federal and state agencies as well as stakeholders have taken actions to combat rogue Internet pharmacies. Federal agencies have conducted investigations that have led to convictions, fines, and asset seizures from rogue Internet pharmacies as well as from companies that provide services to them. FDA and other federal agencies have also collaborated with law enforcement agencies around the world to disrupt rogue Internet pharmacy operations. The Department of Homeland Security's (DHS) U.S. Customs and Border Protection (CBP) and U.S.
Immigration and Customs Enforcement (ICE), which are responsible for enforcing laws related to the importation of goods such as prescription drugs, have also worked with other agencies, including FDA, to interdict rogue Internet pharmacy shipments at the border. Given that most rogue Internet pharmacies operate from abroad, states have faced challenges combating them, and generally focus their oversight on licensed in-state entities that fulfill orders for rogue Internet pharmacies. Companies that provide services to Internet-based businesses, such as search engines and payment processors, have also taken action—primarily by blocking services to them. FDA and others have taken steps to educate consumers about the dangers of buying prescription drugs from rogue Internet pharmacies. FDA recently launched a national campaign to raise public awareness about the risks of purchasing drugs online, and the National Association of Boards of Pharmacy (NABP) posts information on its website about how to safely purchase drugs online. However, rogue Internet pharmacies use sophisticated marketing methods to appear legitimate, making it hard for consumers to differentiate between legitimate and rogue sites. HHS, DOJ, and DHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
NRC's primary mission is to protect the public health and safety, and the environment, from the effects of radiation from nuclear plants, materials, and waste facilities. Because decommissioning a nuclear power plant is a safety issue, NRC has authority to ensure that owners are financially qualified to decommission these plants. Of the 125 nuclear power plants that have been licensed to operate in the United States since 1959, 3 have been completely decommissioned. Of the remaining 122 plants, 104 currently have operating licenses (although 1 has not operated since 1985), 11 plants are in safe storage (SAFSTOR) awaiting active decommissioning, and 7 plants are being decommissioned. At the time of our analysis, 43 plants were co-owned by different owners. NRC regulations limit commercial nuclear power plant licenses to an initial 40 years of operation but also permit such licenses to be renewed for an additional 20 years if NRC determines that the plant can be operated safely over the extended period. NRC has approved license renewals for 16 plants (as of August 20, 2003). In 1988, NRC began requiring owners to (1) certify that sufficient financial resources would be available when needed to decommission their nuclear power plants and (2) make specific financial provisions for decommissioning. In 1998, NRC revised its rules to require plant owners to report to NRC by March 31, 1999, and at least once every 2 years thereafter, on the status of decommissioning funding for each plant or proportional share of a plant they own. Under NRC requirements, the owners can choose from one or more methods, including the following, to provide decommissioning financial assurance: prepayment of cash or liquid assets into an account segregated from the owner's assets and outside the owner's administrative control; establishment of an external sinking fund maintained through periodic deposit of funds into an account segregated from the owner's assets and outside the owner's administrative control; use of a surety method (i.e., surety bond, letter of credit, or line of credit payable to a decommissioning trust account), insurance, or other method that guarantees that decommissioning costs will be paid; and, for federal licensees, a statement of intent that decommissioning funds will be supplied when necessary. In September 1998, NRC amended its regulations to restrict the use of the external sinking fund method in deregulated electricity markets. Prior to this time, essentially all nuclear plant owners chose this method for accumulating decommissioning funds. However, under the amended regulations, owners may rely on periodic deposits only to the extent that those deposits are guaranteed through regulated rates charged to consumers. In conjunction with its amended regulations, NRC issued internal guidance describing the process for reviewing the adequacy of a prospective owner's financial qualifications to safely operate and maintain its plant(s) and the owner's proposed method(s) for ensuring the availability of funds to eventually decommission the plant(s). The guidance outlines a method for evaluating the owner's financial plans for fully funding decommissioning costs. In addition, the guidance states that, except under certain conditions, the NRC reviewer should, when plants have multiple owners, separately evaluate each co-owner's funding schedule for meeting its share of the plant's decommissioning costs.
Using our most likely economic assumptions, the combined value of the nuclear power plant owners' decommissioning trust funds was about 47 percent higher at the end of 2000 than necessary to ensure accumulation of sufficient funds by the time the plants' licenses expire. This situation contrasts favorably with the findings in our 1999 report, which indicated that the industry was about 3 percent below where it needed to be at the end of 1997 to ensure that enough funds would be available. However, because owners are not allowed to transfer funds from a trust fund with sufficient reserves to one without sufficient reserves, overall industry sufficiency can be misleading. When we individually analyzed the owners' trust funds, we found that 33 owners of all or parts of several different plants had not accumulated funds at a rate that would be sufficient for eventual decommissioning. Through 2000, the owners of 122 operating and retired nuclear power plants collectively had accumulated about 47 percent more funds than would have been sufficient for eventual decommissioning, using our most likely economic assumptions. Specifically, the owners had accumulated about $26.9 billion—about $8.6 billion more than we estimate they needed at that point to ensure sufficient funds. This situation contrasts with the findings in our 1999 report, which indicated that the industry had accumulated about 3 percent less than the amount we estimated it should have accumulated by the end of 1997. Using alternative economic assumptions changes these results. For example, under higher decommissioning costs and other more pessimistic assumptions, the analysis shows that the combined value of the owners' accounts would be only about 0.2 percent above the amount we estimate the industry should have collected by the end of 2000. (See app. II for our results using more optimistic assumptions.) The collective improvement in the status of the owners' trust funds (under most likely assumptions) since our last report is due to three main factors. First, all or parts of the estimated decommissioning costs were prepaid for 15 plants when they were sold to new owners. For example, the seller prepaid $396 million when the Pilgrim 1 nuclear plant was sold in 1998 for the plant's scheduled decommissioning in 2012. Second, for 16 other plants, NRC approved 20-year license renewals, which will provide additional time for the owners to make contributions and for earnings to accumulate on the decommissioning fund balances. Third, owners earned a higher rate of return on their trust fund accounts than we projected in our 1999 report. For example, the average return on the trust funds of owners who responded to our survey was about 8.5 percent (after-tax nominal return) per year from 1998 through 2000, instead of the approximately 6.25 percent per year we had assumed. The higher return was a result of the stronger than expected performance of financial markets in the late 1990s. Since that time, however, the economy has slowed and financial markets—equities in particular—have generally performed poorly. In contrast to the encouraging industry-wide results, when we analyzed the owners' trust fund accounts individually, we found that several owners were not accumulating funds at rates that, if continued until their plants are retired, would be sufficient to pay for decommissioning. Each owner has a trust fund for each plant that it owns in whole or in part. For example, the Exelon Generation Company owns all or part of 20 different plants.
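As a quick cross-check of the rounded figures above (a sketch of the arithmetic only, not of our underlying model), a $26.9 billion balance against an $18.3 billion benchmark ($26.9 billion minus the $8.6 billion surplus) is about 47 percent above the benchmark:

```python
balance = 26.9   # combined trust fund balance at end of 2000, $ billions
surplus = 8.6    # amount above the estimated benchmark, $ billions

needed = balance - surplus            # 18.3: estimated benchmark for 2000
excess_ratio = surplus / needed       # fraction above the benchmark
print(f"{excess_ratio:.1%}")          # ~47.0%, matching the reported figure
```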
For this analysis, we assessed the status of 222 trust funds for 122 plants owned in whole or in part by 99 owners. As shown in table 1, using our most likely assumptions, 33 owners of all or parts of 42 different plants (50 trust funds) had accumulated less than the funds needed through 2000 to be on track to pay for eventual decommissioning (see app. II for details). Thirteen of these plants were shut down before sufficient funds had been accumulated for decommissioning. Although the remaining 78 owners of all or parts of 93 plants (172 trust funds) had accumulated more funds than we estimate they needed to have at the end of 2000, funds are generally not transferable from owners who have more than sufficient reserves to other owners who have insufficient reserves. Under our most likely assumptions, the owners whom we estimate to be behind will have to increase the rates at which they accumulate funds to meet their eventual decommissioning financial obligations. For our analysis, we compared the trust fund balance that individual owners had accumulated for each plant by the end of 2000 with a "benchmark" amount of funds that we estimate they should have accumulated by that date. In setting the benchmark, we assumed that the owners would contribute increasing (but constant present-value) amounts annually to cover eventual decommissioning costs. For example, at the end of 2000, an owner's decommissioning fund for a plant that had operated one-half of a 40-year license period (begun in 1980) should contain one-half of the present value of the estimated cost to decommission the owner's share of that plant in 2020. Although this benchmark is not the only way an owner could accrue enough funds to pay future decommissioning costs, it provides both a common standard for comparisons among owners and, from an equity perspective among ratepayers in different years, a financially reasonable growing current-dollar funding stream over time. Appendix I describes our methodology in more detail.
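In symbols (our own notation, restating the benchmark just described rather than quoting any formula from the report or from NRC):

\[
B_{2000} \;=\; \frac{2000 - t_{start}}{t_{end} - t_{start}} \times PV_{2000}(C),
\]

where \(t_{start}\) and \(t_{end}\) are the start and expiration years of the plant's operating license, \(C\) is the estimated cost (owner's share) to decommission the plant when its license expires, and \(PV_{2000}(\cdot)\) discounts that cost to 2000 at the assumed after-tax rate of return. In the example above, \((2000-1980)/(2020-1980) = 1/2\), so the benchmark balance is one-half of the present value of the estimated decommissioning cost.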
The status of each owner's fund balance at the end of 2000 is not, by itself, the only indicator of whether an owner will have enough funds for decommissioning. Whether the owner will accumulate the necessary funds also depends on the rate at which the owner contributes funds over the remaining operating life of the plant; by increasing their contribution rates, owners whose trust fund balances were below the benchmark level could still accumulate the needed funds. Consequently, for the owners who provided contributions information to us, we also analyzed whether their recent contribution rates would put them on track to meet their decommissioning obligations. For this second analysis, we compared the average of the amounts contributed in 1999 and 2000 (cost-adjusted to 2000) with a benchmark amount equivalent to the average yearly present value of the amounts the owners would have to accumulate each year over the remaining life of their share of the plants to have enough decommissioning funds. As table 2 shows, 28 owners with ownership shares in 44 different plants (50 trust funds) contributed less than the amounts we estimate they will need to meet their decommissioning obligations, under our most likely assumptions. We compared the owners in table 1 with those in table 2 to see whether owners who are behind in balances were making up their shortfalls with recent increases in contributions. Of the 33 owners who we estimate had less than the benchmark balances through 2000, 26 owners of all or parts of 38 plants provided contributions information. Of these, only 8 owners of all or parts of 9 plants appeared to be making up their shortfalls with recent increases in contributions. By contrast, 20 owners with ownership interests in 31 plants recently contributed less to their trust funds than we estimate they needed to put them on track to meet their decommissioning obligations. These results would change under alternative economic assumptions. For example, if economic conditions improve to those assumed in our optimistic scenario, of the 20 owners who were below the benchmark under most likely assumptions on both balances and contributions, 12 owners would still be below the benchmark in both categories, even under optimistic assumptions. However, if economic conditions worsen to those in our pessimistic scenario, 34 owners who were above the benchmark under most likely assumptions on either balances or contributions would be below either of these benchmarks under pessimistic assumptions. (See app. II for detailed results.) NRC's analysis of the 2001 biennial decommissioning status reports was not effective in identifying owners that might not be accumulating funds at sufficient rates to pay for decommissioning costs when their plants are permanently shut down. Although NRC reported in 2001 that all owners appeared to be on track to have sufficient funds for decommissioning, our analysis indicated that several owners might not be able to meet financial obligations for decommissioning. NRC's analysis was not effective for two reasons. First, NRC overly relied on the owners' future funding plans, or on rate-setting authority decisions, in concluding that the owners were on track to fully fund decommissioning. However, as discussed earlier, based on actual contributions the owners had recently made to their trust funds, several owners are at risk of not accumulating enough funds to pay for decommissioning. Second, for the plants with more than one owner, NRC did not separately assess the status of each co-owner's trust funds relative to the co-owner's contractual obligation to fund a certain portion of decommissioning. Instead, NRC combined funds on a plant-wide basis and assessed whether the combined trust funds would be sufficient for decommissioning. Such an assessment method can produce misleading results because the owners with more than sufficient trust funds can appear to balance out those with insufficient trust funds. Furthermore, if NRC had identified an owner with unacceptable levels of financial assurance, it would not have had an explicit basis for acting to remedy potential funding deficiencies because it has not established criteria for responding to unacceptable levels of financial assurance. NRC officials said that their oversight of the owners' decommissioning funds is an evolving process and that they intend to learn from their review of prior biennial reports and make changes to improve their evaluation of the 2003 biennial reports. However, they also said that any specific changes they are considering are predecisional, and final decisions have not yet been made. According to NRC officials, in reviewing the 2001 biennial reports, they used a "straight-line" method to establish a screening criterion for assessing whether owners were accumulating decommissioning funds at sufficient rates.
Specifically, NRC compared the amount of funds accumulated through 2000 (expressed as a percentage of the total estimated cost, as of 2000, to decommission the plant) to the expended plant life (expressed as a percentage of the total number of years the plant will operate). Under this method, the owner of a plant that has operated for one-half of its operating life would be expected to have accumulated at least one-half of the plant's estimated decommissioning costs (that is, it would be collecting at or above the straight-line rate). NRC found that the owners of 64 of the 104 plants currently licensed to operate were collecting at or above a straight-line rate, and that the owners of the remaining 40 plants were collecting at less than a straight-line rate. On a plant-wide basis, NRC then reviewed the owners' "amortization" schedules for making future payments to fully fund decommissioning. The schedules, required as part of the biennial reports, consist of the remaining funds that the owners expect to collect each year over the remaining operating life of the plants. In estimating the funds to be collected, the owners may factor in the earnings expected from their trust fund investments. To account for such earnings, NRC regulations allow an owner to increase its trust fund balance by up to 2 percent per year (net of estimated cost escalation), or higher if approved by its regulatory rate-setting authority, such as a state public utility commission. Because these owners' amortization schedules identified sufficient future funds to enable them to reach the target funding levels, NRC concluded that all licensees appeared to be on track to fund decommissioning when their plants are retired. However, relying on amortization schedules is problematic, in part because the actual amounts the owners contribute to their funds in the future could differ (that is, worsen) from their planned amounts if economic conditions or other factors change. NRC officials said that owners are not required by regulation to report their recent actual contributions to the trust funds, and NRC does not directly monitor whether the owners' actual contributions match their planned contributions. Consequently, NRC relies on the owners' amortization schedules as reported in the biennial reports. Such reliance is also problematic because, in developing their amortization schedules, the owners could use widely varying rates of return to project the earnings on their trust fund investments. For example, each of the three co-owners of the Duane Arnold Energy Center nuclear plant assumed a different rate, ranging from 2 to 7 percent (net of estimated cost escalation). Other factors being equal, the owners using the higher rates would need to collect fewer funds than the owner using the lower rate of return. While the return that each owner actually earns on its investments may be higher or lower than these rates, by relying on the owners' amortization schedules, NRC effectively used a different set of assumptions to evaluate the reasonableness of the trust funds accumulated by each owner. Consequently, NRC did not use a consistent "benchmark" in assessing the owners' trust funds. By contrast, we used historical trends and economic forecasts to develop assumptions about rates of earnings and other economic variables, applied the same assumptions in evaluating the adequacy of each owner's trust fund, and based expected future contributions on actual amounts contributed in recent years.
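A minimal sketch of the straight-line screening criterion described above, using hypothetical figures (dollar amounts in millions), follows:

```python
def straight_line_screen(balance, est_cost, years_operated, license_years):
    """NRC-style screening: compare the percentage of the estimated
    decommissioning cost accumulated with the percentage of the licensed
    operating life expended. True means at or above the straight-line rate."""
    pct_funded = balance / est_cost
    pct_expended = years_operated / license_years
    return pct_funded >= pct_expended

# Hypothetical plant: 20 of 40 licensed years expended, but only 45 percent
# of the estimated cost accumulated -- below the straight-line rate.
print(straight_line_screen(balance=135.0, est_cost=300.0,
                           years_operated=20, license_years=40))  # False
```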
NRC's internal guidance for evaluating the biennial reports states that, for plants having more than one owner, except in certain circumstances, each owner's amortization schedule should be separately assessed for its share of the plant's decommissioning costs. For those plants that have co-owners, NRC used the total amount of funds accumulated for the plant as a whole in its analysis. However, as we demonstrated with our industry-wide analysis, such an assessment for determining whether owners are accumulating sufficient funds can produce misleading results because owners with more than sufficient funds can appear to balance out owners with less than sufficient funds, even though funds are generally not transferable among owners. In explaining their approach, NRC officials said that the section of the guidance that calls for a separate evaluation of each owner's amortization schedule for its share of the plant is not compulsory. In addition, they said that they consider each owner's schedule to determine the total funds for the plant as a whole, but they believe that the same level of effort is not required for each individual trust fund balance unless there is a manifest reason to do so. They also stated that NRC's regulations do not prohibit each co-owner from being held responsible for decommissioning costs, even if these costs are more than the co-owner's individual ownership share. However, assessing the adequacy of decommissioning costs on a plant-wide basis is not consistent with the industry view, held by most plant owners, that each co-owner's responsibility should be limited to its pro rata share of decommissioning expenses and that NRC should not look to one owner to "bail out" another owner by imposing joint and several liability on all co-owners. NRC has implicitly accepted this view and has incorporated it into policy. In a policy statement on deregulation, NRC stated that it will not impose decommissioning costs on co-owners in a manner inconsistent with their agreed-upon shares, except in highly unusual circumstances when required by public health and safety considerations, and that it would not seek more than the pro rata shares from co-owners with de minimis ownership. Nevertheless, unless NRC separately evaluates each co-owner's trust fund, NRC might eventually need to require some owners to pay more than their share. While NRC has conducted two reviews of the owners' biennial reports to date, it has not established specific criteria for responding to any unacceptable levels of financial assurance that it finds in its reviews of the owners' biennial reports. As we noted in our 1999 report, without such criteria, NRC will not have a logical, coherent, and predictable plan of action if and when it encounters owners whose plants have inadequate financial assurance. NRC officials said that their oversight of the owners' decommissioning funds is an evolving process and that they are learning from their prior reviews. However, they also said that any specific changes they are considering are predecisional and final decisions have not yet been made. The absence of any specific criteria for acting on owners' decommissioning financial reports contrasts with the agency's practices for overseeing safety activities at nuclear power plants.
According to NRC, its safety assessment process allows it to integrate information relevant to licensee safety performance, make objective conclusions regarding the information, take actions based on these conclusions in a predictable manner, and effectively communicate these actions to the licensees and to the public. Its oversight approach uses criteria for identifying and responding to levels of concern for nuclear plant performance. In determining its regulatory response, NRC uses an "Action Matrix" that provides for a range of actions commensurate with the significance of inspection findings and performance indicators. If the findings indicate that a plant is operating in a way that has little or no impact on safety, then NRC implements only its baseline inspection program. However, if the findings indicate that a plant is operating in a way that implies a greater degree of safety significance, NRC performs additional inspections and initiates other actions commensurate with the significance of the safety issues. A similar approach in the area of financial assurance for decommissioning would appear to offer the same benefits of objectivity and predictability that NRC has established in its safety oversight. Ensuring that nuclear power plant owners will have sufficient funds to clean up the radioactive waste hazard left behind when these plants are retired is essential for public health and safety. As our analysis identified, some owners may be at risk of not accumulating sufficient trust funds to pay for their share of decommissioning. NRC's analysis was not effective in identifying such owners because it relied too heavily on the owners' future funding plans without confirming that the plans were consistent with recent contributions. Moreover, it aggregated the owners' trust funds plant-wide instead of assessing whether each individual owner was on track to accumulate sufficient funds to pay for its share of decommissioning costs. In addition, NRC has not explained to the owners and the public what it intends to do if and when it determines an owner is not accumulating sufficient trust funds. Without a more effective method for evaluating owners' decommissioning trust funds, and without criteria for responding to any unacceptable levels of financial assurance, NRC will not be able to effectively ensure that sufficient funds will be available when needed. To ensure that owners are accumulating sufficient funds to decommission their nuclear power plants, we recommend that the Chairman, NRC, develop an effective method for determining whether owners are accumulating funds at sufficient rates to pay for decommissioning. For plants having more than one owner, this method should include separately evaluating whether each owner is accumulating funds at sufficient rates to pay for its share of decommissioning. We further recommend that the Chairman, NRC, establish criteria for taking action when NRC determines that an owner or co-owner is not accumulating decommissioning funds at a sufficient rate to pay for its share of the cost of decommissioning. We provided a draft of this report to NRC for its review and comment. NRC's written comments, which are reproduced in appendix III, expressed three main concerns regarding our report. First, NRC disagreed with our observation that its analyses of funding levels of the co-owners of a nuclear plant are inconsistent with its internal guidance. We revised the report to remove any inferences that NRC was not complying with its own guidance.
While clarifying this point, we remain convinced that NRC needs to do more to develop an effective method for assessing the adequacy of nuclear power plant owners' trust funds for decommissioning. NRC's current practice is to combine the trust funds for all co-owners of a nuclear plant, then assess whether the combined value of the trust funds is sufficient. However, as our analysis indicates, NRC's practice of combining the trust funds of several owners for its assessment can produce misleading results because co-owners with more than sufficient funds can appear to balance out those with less than sufficient funds. As a practical matter, owners have a contractual agreement to pay their share of decommissioning costs, and owners generally cannot transfer funds from a trust fund with sufficient reserves to one without sufficient reserves. While NRC recognizes that private contractual arrangements among co-owners exist, the agency stated that it reserves the right, in highly unusual situations where adequate protection of public health and safety would be compromised if such action were not taken, to consider imposing joint and several liability on co-owners for decommissioning funding when one or more co-owners have defaulted. Nonetheless, we believe that NRC should take a proactive approach, rather than simply wait until one or more co-owners default on their decommissioning payments, to ensure that sufficient funds will be available for decommissioning and that the adequate protection of public health and safety is not compromised. Such an approach, we believe, would involve developing an effective method that, among other things, separately evaluates the adequacy of each co-owner's trust fund. Second, NRC disagreed with our view that some owners are not on track to accumulate sufficient funds for decommissioning. NRC's position is that it has a method for assessing the reasonableness of the owners' trust funds and that our method has not been reviewed and accepted by NRC. While we recognize that NRC has neither reviewed nor accepted our method, our report identifies several limitations in NRC's method that raise doubts about whether the agency's method can effectively identify owners who might be at risk of not having sufficient funds for decommissioning. A particularly problematic aspect of this method is NRC's reliance on the owners' future funding plans to make up any shortfalls without verifying whether those plans are consistent with the owners' recent contributions. We found that some owners' actual 2001 contributions were much less than the amounts they told NRC, in their 2001 biennial reports, that they planned to contribute. For example, one owner contributed about $1.5 million (or 39 percent) less than the amount it told NRC it planned to contribute. In addition, based on our analysis using actual contributions the owners had recently made to their trust funds, we found that 28 owners with ownership shares in 44 different plants contributed less than the amounts we estimate they will need to make over the remaining operating life of their plants to meet their decommissioning obligations. Therefore, we continue to believe that some owners are not on track to accumulate sufficient funds to pay for decommissioning. Finally, NRC disagreed with our view that it should establish criteria for responding to owners with unacceptable levels of financial assurance.
NRC stated that its practice is to review the owners’ plans on a case-by-case basis, engage in discussions with state regulators, and issue orders as necessary and appropriate. Since NRC has never identified an owner with unacceptable levels of financial assurance, it has never implemented this practice. We believe that NRC should take a more proactive approach to providing owners and the public with a more complete understanding of NRC’s expectations of how it will hold owners who are not accumulating sufficient funds accountable. As stated in our draft report, this lack of criteria is in contrast to NRC’s practices in overseeing safety issues at nuclear plants, where the NRC uses an “Action Matrix” that provides for a range of actions commensurate with the significance of safety inspection findings and performance indicators. In the area of financial assurance, a similar approach could involve monitoring the trust fund deposits of those owners who NRC determines are accumulating insufficient funds to verify that the deposits are consistent with the owners’ funding plans. We conducted our review from June 2001 to September 2003 in accordance with generally accepted government auditing standards. Unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Chairman, NRC; Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-6877. Key contributors to this report are listed in appendix IV. This appendix describes the scope and methodology of our review for our first objective: the extent to which nuclear power plant owners are accumulating funds at sufficient rates to pay decommissioning costs when their plants’ licenses expire. In addressing this objective, we analyzed the status of the decommissioning trust funds from two perspectives. First, we analyzed whether the industry as a whole is accumulating funds at rates that would be sufficient for decommissioning. For this analysis, we combined the trust funds of the owners of 122 nuclear plants. We then compared our results with those of our 1999 report to see whether the industry’s status had changed. Second, because owners generally cannot transfer funds from a trust fund with sufficient reserves to one without sufficient reserves, we also analyzed the status of each owner’s trust fund for each plant in which the owner had an ownership share. For this analysis, we analyzed the status of 222 individual trust funds, representing 99 owners of all or parts of 122 plants. For both the combined industry-wide trust funds and the individual owners’ trust funds, we conducted two separate analyses (hereafter described in terms of our analysis of the individual owners’ trust funds). This method is the same as that used in our earlier report on the adequacy of decommissioning funding. 
First, we looked backward from a base year—2000—and assessed whether, when taking into account key economic factors such as decommissioning cost-escalation rates and after-tax rates of return on the funds (the discount rate), each owner's decommissioning fund balance for its ownership share of each of its plants was consistent with the expended portion of the licensed operating life of that plant. In other words, we assessed whether the monies the owner had contributed to its fund as of the end of 2000, together with the past earnings on these monies, equaled a benchmark or expected balance the owner should have accumulated by that time. To determine the benchmark balance for 2000 for each plant (owner's share), we multiplied the present value of the plant's estimated future decommissioning costs (owner's share) by the fraction of the plant's operating life used up by 2000. For example, a plant that began operating in 1980 would have used up one-half of its 40-year operating life by the end of 2000. Therefore, by the end of 2000, the owner of this plant should be expected to have accumulated in its trust fund one-half of the present value (in constant 2000 dollars) of the estimated decommissioning costs. Over the life of a plant, our benchmark measure presumes that an owner would contribute an annual amount that increases at the trust fund's after-tax rate of return (and is therefore constant in present-value terms). The sum of these annual amounts, plus the income earned on the investment of the funds, would equal the total estimated present value of the decommissioning costs when the plant's operating license expires. Although recent deregulation and restructuring of the electricity industry have led some owners to prepay decommissioning costs, many owners continue to fund the trust funds by collecting fees from electricity users. Thus, under our benchmark measure, by paying decommissioning "fees" that are deposited into the trust funds, electricity users pay for the present value of each year's accrued decommissioning costs. As a result, the benchmark embodies the principle of economic efficiency in that the price of a product (i.e., electricity) should, if possible, equal all of its costs—current and accrued. In addition, by assuming that current and future users pay the same decommissioning fees, in constant present-value terms, our benchmark ensures that decommissioning costs are accrued transparently over time.
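The following sketch illustrates this looking-backward benchmark numerically. The plant, dates, and cost figure are hypothetical; the after-tax return is our baseline 5.62 percent; and, for simplicity, decommissioning is assumed to occur at license expiration rather than about 2.5 years after shutdown:

```python
# Hypothetical plant licensed 1980-2020 (40 years), with a decommissioning
# cost of $500 million (owner's share) stated in dollars of the expiration year.
r = 0.0562               # baseline after-tax rate of return
N = 40                   # licensed operating life, years
cost_at_expiry = 500.0   # $ millions at license expiration

# Contributions grow at rate r, so each year's deposit is constant in
# present-value terms; c0 is sized so the fund exactly covers the cost at expiry.
c0 = cost_at_expiry / (N * (1 + r) ** N)

balance = 0.0
for year in range(1, N + 1):          # deposit made at the end of each year
    balance = balance * (1 + r) + c0 * (1 + r) ** year
    if year == 20:                    # end of 2000: half the license expended
        benchmark = 0.5 * cost_at_expiry / (1 + r) ** (N - 20)
        print(f"balance {balance:.1f} vs benchmark {benchmark:.1f}")  # equal

print(f"final balance {balance:.1f} vs cost {cost_at_expiry:.1f}")    # equal
```

Under this schedule, the balance after 20 years exactly equals one-half of the present value of the terminal cost, which is the benchmark described above.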
In addition to the looking-backward analysis, we conducted a second analysis, "looking forward" from a base year—the end of 2000—and assessed whether each owner's recent contributions to its decommissioning funds for its respective shares of each of its nuclear power plants were at a level consistent with the remaining portions of the licensed operating lives of each plant. In other words, we assessed whether the owner recently added monies to its decommissioning trust fund for each plant at the benchmark contribution necessary to have enough funds to decommission the plant when its operating license expires. For example, an owner who is behind in terms of trust fund balance through the end of 2000 could have recently contributed to its fund at much higher rates than it had in the past to make up for its shortfall over the remaining operating life of the plant. To determine an owner's benchmark annual contribution for each of its plants, we computed the annual-average present value of the required future contributions summed over the remaining life of the plant. The total present value of these contributions must equal the present value of the total future decommissioning costs minus the value of the current trust fund balance. We then compared this annual amount with the average contribution to the trust fund that the owner made in 1999 and 2000 (cost-adjusted to 2000). We assume that an owner will annually increase its most recent contribution (2-year average, cost-adjusted to 2000) over the remaining life of its plant by the assumed after-tax rate of return on its decommissioning fund. Owners whose recent average contributions exceeded the benchmark amount would be adding funds at a rate that would be more than sufficient, while owners whose recent average contributions were below the benchmark rate would be adding funds at an insufficient rate to pay for future decommissioning costs (under our specific economic assumptions).
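Continuing the same hypothetical example, this sketch computes a looking-forward benchmark contribution for an owner whose balance is behind; all figures remain illustrative:

```python
r = 0.0562               # baseline after-tax rate of return
years_remaining = 20     # license expires in 2020; base year is the end of 2000
cost_at_expiry = 500.0   # $ millions at license expiration (hypothetical)
balance_2000 = 100.0     # current trust fund balance (hypothetical)

# Present value (in 2000) of the remaining decommissioning obligation.
required_pv = cost_at_expiry / (1 + r) ** years_remaining - balance_2000

# Benchmark contribution in 2000 dollars: contributions grow at r, so each
# year's deposit has the same present value, and the per-year benchmark is
# the required present value spread evenly over the remaining years.
benchmark = required_pv / years_remaining

recent_avg = 2.5         # owner's 1999-2000 average contribution (hypothetical)
print(f"benchmark {benchmark:.2f} vs recent average {recent_avg:.2f}")
print("on track" if recent_avg >= benchmark else "contributing at an insufficient rate")
```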
For our assessment of the status of the industry as a whole (and for both the looking-backward and looking-forward analyses), we developed three different scenarios: baseline (i.e., most likely), pessimistic, and optimistic. For the baseline analysis, we used our most likely economic assumptions. For the pessimistic and optimistic scenarios, we used different values for several key assumptions, as described later in this appendix. For our assessment of the status of each individual owner's trust funds, we looked at the status of each owner's trust funds under baseline (most likely) assumptions (for both the looking-backward and looking-forward analyses). In addition, for owners who were below the benchmark on both balances and contributions under the baseline assumptions, we reviewed the 2003 and 2001 biennial reports to ascertain whether the owner currently has, or previously had, an additional method (e.g., a parent company guarantee) to support financial assurance obligations. We indicate in our detailed results when an owner reported having an additional method (see app. II, table 4). However, we did not evaluate the adequacy of these methods. In addition, for selected owners, depending upon our baseline results, we analyzed how these results might change under alternative conditions—optimistic or pessimistic assumptions. For example, for owners who were below the benchmark on both balances and contributions under the baseline (see app. II, table 5), we assessed the status of their trust funds under optimistic conditions to determine which of these owners' funds would still remain below the benchmark on both our looking-backward and looking-forward measures. In addition, for owners who were from zero to 100 percent above the benchmark, under baseline assumptions, for either balances or contributions, we assessed the status of their funds under pessimistic assumptions to determine whether their funds would fall below the benchmarks for both balances and contributions (see app. II, table 6). To conduct our analysis, we used a spreadsheet simulation model that uses a base year of 2000. In addition, for the key data in our analysis, we used the owners' 2001 biennial reports and responses from a mail survey that we administered to nuclear power plant owners. More specifically, the key data used in the model are the following: (1) Owner's name, percentage of each plant in which the owner has a share, year the plant was licensed to operate (or commenced operation, if earlier), and year the plant's license will expire. We obtained these data using the owners' 2001 biennial reports to the Nuclear Regulatory Commission (NRC) and other NRC publications. (2) A decommissioning cost estimate for each plant (that is, a current-dollar amount for the year that the estimate was made). When available, we used a site-specific estimate of NRC-related costs (that is, radiation-related costs). If a site-specific estimate was not available, we used cost estimates derived from NRC's generic formula for these NRC-related costs. We obtained these data using the owners' 2001 biennial reports to NRC. (3) Decommissioning fund balances as of December 31, 2000, for each owner and its plant share. When indicated, we used the portion of the fund balance that the owner designated for NRC-type costs (that is, excluding the costs relating to nonradiation or spent-fuel activities). Otherwise, we used the entire fund balance. We obtained these data from the owners' responses to our survey or from their 2001 biennial reports. (4) Decommissioning fund contributions for 1999 and 2000 for each owner and its plant share. We assumed these contributions were for NRC-related costs only. We obtained these data from the responses to our survey; for owners who did not respond to our survey, we do not report on the adequacy of their contributions. In some cases, the ownership shares of plants have changed hands since our survey and the 2001 biennial reports. In these cases, to make our analysis as current as possible, we assess the adequacy of the funds that were accumulated by the previous owner but report the results under the name of the new owner of the trust fund (see app. II, table 4). Nonetheless, the new owner might accumulate trust funds at a different rate than the former owner. The analysis of the industry-wide trust funds and the individual owners' trust funds depends on the following six key assumptions. The values for these six assumptions vary based upon the scenario: baseline (most likely), pessimistic, or optimistic. For each scenario, we used the same assumption values for each owner and each plant in order to apply an "even-handed" standard. (1) Future after-tax rate of return on decommissioning fund assets (discount rate): An after-tax rate of return was used to discount future trust fund contributions and plant decommissioning costs. In our survey, we asked owners for information on the financial assets contained in their respective decommissioning funds. We grouped these assets into five basic financial categories and calculated estimated, industry-wide, average weights for each type, with these asset weights reflecting the varying sizes of the funds. These categories and calculated weighted averages were: equities (e.g., common stocks), 47.1 percent; U.S. securities (e.g., federal government bonds), 26.7 percent; corporate bonds, 9.8 percent; municipal bonds, 10.4 percent; and cash and short-term instruments, 6.0 percent. Therefore, on average, these decommissioning funds contained roughly a 50-50 split between equities and bonds. We used these results for all of the decommissioning funds, for all three scenarios, but recognize three qualifications: (1) the variation in these asset weights among individual funds for 2000 was quite large, (2) our asset composition data represent only a time "snapshot" of such allocation—for year 2000 only, and (3) these same (baseline) asset weights are also assumed for our other two scenarios, because appropriate data were lacking to do otherwise.
Using a long-term forecast from Global Insight (an economic forecasting company), we developed a forecast for each asset category under a baseline, pessimistic, and optimistic forecast scenario. For the baseline scenario, we used Global Insight's trend forecast; for the pessimistic scenario, we used their pessimistic forecast (representing slower real gross domestic product (GDP) growth); and for the optimistic scenario, we used their optimistic forecast (representing faster real GDP growth). For the baseline scenario, we calculated a forecast (current-dollar) growth rate of 6.26 percent for equities, 6.83 percent for U.S. securities, 7.83 percent for corporate bonds, 6.27 percent for municipal bonds, and 5.02 percent for cash and short-term instruments. Multiplying these forecast rates by their respective asset weights in the owners' portfolios yielded a baseline "portfolio average" forecast pretax annual-average rate of return of 6.49 percent. Similarly, we calculated pretax rates of return for the pessimistic and optimistic forecasts of 7.27 percent and 6.45 percent, respectively. The rate under the pessimistic forecast is higher than the rate under the baseline or optimistic forecasts because of higher inflation in the Global Insight pessimistic forecast and because of the owners' relatively high average allocation of trust fund investments in bonds. (In Global Insight's pessimistic forecast, the nominal rate of return on bonds is greater than on equities.) To convert the "portfolio average" forecast pretax rate of return to an after-tax rate of return, we used the pre- and post-tax rates of return data that owners provided in our survey. Based on these data, we determined that the pretax rate should be reduced by 0.87 percentage points to derive a baseline after-tax rate of return of 5.62 (6.49 - 0.87) percent. Similarly, we calculated an after-tax rate of return of 6.40 (7.27 - 0.87) percent for the pessimistic scenario and an after-tax rate of return of 5.58 (6.45 - 0.87) percent for the optimistic scenario. (2) Future decommissioning cost escalation rate: For our baseline scenario, we assumed that decommissioning costs would increase annually at a nominal rate of 4.60 percent. Combining the after-tax rate of return and the cost escalation rate gave us an implied real (cost-adjusted) after-tax rate of return of 1.02 (5.62 - 4.60) percent for the baseline scenario. To calculate real after-tax rates of return for the pessimistic and optimistic scenarios, we first adjusted the nominal after-tax rates of return using Global Insight's inflation forecasts. Its annual-average inflation forecast was about 2.47 percent for trend (baseline), 3.04 percent for pessimistic, and 2.15 percent for optimistic. Using these forecasts, the real forecast rates of return are 3.15 (5.62 - 2.47) percent for baseline, 3.36 (6.40 - 3.04) percent for pessimistic, and 3.43 (5.58 - 2.15) percent for optimistic. We then used proportionality ratios to obtain real cost-adjusted after-tax rates of return of 1.09 percent for the pessimistic scenario and 1.11 percent for the optimistic scenario. From these real after-tax rates of return, we computed implied cost-escalation rates of 5.31 percent and 4.47 percent for the pessimistic and optimistic scenarios, respectively.
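As a cross-check of the rounded figures above (a sketch that reproduces the arithmetic reported in this appendix, not NRC's or the owners' actual models):

```python
# Asset weights and baseline (trend) pretax forecast returns, in percent.
weights = {"equities": 0.471, "us_securities": 0.267, "corporate_bonds": 0.098,
           "municipal_bonds": 0.104, "cash": 0.060}
returns = {"equities": 6.26, "us_securities": 6.83, "corporate_bonds": 7.83,
           "municipal_bonds": 6.27, "cash": 5.02}

pretax = sum(weights[a] * returns[a] for a in weights)
print(f"baseline pretax portfolio return: {pretax:.2f}%")   # ~6.49%

after_tax = pretax - 0.87                                   # ~5.62%
baseline_real = after_tax - 2.47                            # ~3.15% (inflation-adjusted)
pessimistic_real = (7.27 - 0.87) - 3.04                     # ~3.36%
optimistic_real = (6.45 - 0.87) - 2.15                      # ~3.43%

# Proportionality ratios scale the baseline cost-adjusted return (1.02%).
for name, real in [("pessimistic", pessimistic_real), ("optimistic", optimistic_real)]:
    cost_adjusted = 1.02 * real / baseline_real
    print(f"{name}: real {real:.2f}%, cost-adjusted {cost_adjusted:.2f}%")  # 1.09%, 1.11%
```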
Note that the real (cost-adjusted) after-tax rates of return are quite similar in value among our scenarios; therefore, any differing effect on model results caused by the combination of the fund rate of return and decommissioning cost-escalation assumptions will be fairly minimal. Nonetheless, all other things being equal, for these two assumptions only, the balance and contribution adequacy results for the pessimistic scenario will be slightly above those of the baseline scenario, and only slightly below those of the optimistic scenario.

(3) Alternative initial decommissioning cost estimates: In our baseline scenario, for the "initial" decommissioning (NRC-related) costs, we used the site-specific estimates when available. Otherwise, we used the cost estimates derived from NRC's generic formula. For the pessimistic and optimistic scenarios, we used professional judgment to adjust the estimate used in the baseline. For example, to reflect a general concern among industry observers that future decommissioning costs could be much higher than expected, we increased the initial cost estimate by 40 percent for the pessimistic scenario, and we reduced it by only 5 percent for the optimistic scenario.

(4) Alternative start of decommissioning (years after shutdown): For the baseline scenario, we assumed that decommissioning would occur within the first 5 years after license termination; for simplification, we assumed "instantaneous" decommissioning at 2.5 years after shutdown. For the pessimistic assumption, decommissioning is assumed to occur within the first 4 years—at 2 years after shutdown. For the optimistic assumption, we assumed a 5-year delayed start of decommissioning—within 5-10 years after license termination—at 7.5 years after shutdown. Under certain circumstances (e.g., co-located plants), NRC may permit a decommissioning delay. As long as the assumed after-tax rate of return exceeds the assumed cost-escalation rate (i.e., a positive, real, cost-adjusted rate of return), a delay in decommissioning will improve the outlook for an owner's trust fund in both the looking-backward (trust fund balance) and looking-forward (trust fund contributions) analyses, all else the same.

(5) Alternative operating license expiration year: The year of plant operating-license expiration is assumed to vary among our three scenarios to reflect that NRC has approved license renewals for some plants and may approve 20-year license renewals for other plants in the future. For the baseline and pessimistic scenarios, we include the renewals that had been approved for 16 plants as of August 20, 2003. In addition, because NRC has received renewal applications from owners of 14 plants and anticipates applications from owners of another 8 plants by the end of 2003 (as of August 20, 2003), we assume in the optimistic scenario that license renewals will be approved for an additional 22 plants. In general, these plant license renewals suggest that the electricity market today is robust and that owners expect higher electricity prices in the future.

(6) Alternative market values for decommissioning funds: For the baseline and optimistic scenarios, we used the actual market value of the trust fund balances as of the end of 2000. In contrast, for the pessimistic scenario, we reduced the actual market value of the funds by 5 percent for 2000 to simulate the effect of a slowing economy on investment returns from 2000 through 2002. The simulated decline is modest, and over the period, the overall increase in bond prices would have offset to some degree the overall decline in the value of common stocks.
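To see why a later start helps whenever the real, cost-adjusted rate of return is positive, as noted in assumption (4), consider a small present-value sketch, again in Python. The $500 million cost estimate and 20-year horizon below are hypothetical figures chosen for illustration; only the two baseline rates come from the assumptions above.

    # Hypothetical illustration of assumption (4). With the baseline
    # after-tax return (5.62 percent) above the cost-escalation rate
    # (4.60 percent), a delayed start lowers the balance needed today.
    after_tax_return = 0.0562
    cost_escalation = 0.0460
    cost_today = 500.0    # current-dollar cost, $ millions (invented)

    def balance_needed_now(years_until_start: float) -> float:
        """Fund balance today that grows to cover the escalated cost."""
        future_cost = cost_today * (1 + cost_escalation) ** years_until_start
        return future_cost / (1 + after_tax_return) ** years_until_start

    # Shutdown assumed 20 years out; decommissioning starts 2 years later
    # (pessimistic), 2.5 years later (baseline), or 7.5 years later
    # (optimistic), as in the scenarios above.
    for label, delay in [("pessimistic", 2.0), ("baseline", 2.5),
                         ("optimistic", 7.5)]:
        print(label, round(balance_needed_now(20 + delay), 1))

Each additional year of delay shrinks the required balance by the ratio of the escalation factor to the return factor, about 1.046/1.0562, or roughly 1 percent per year; with a negative real rate the effect would reverse.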
This appendix presents the detailed results of our analysis of the decommissioning trust funds. Specifically, table 3 shows industry-wide, weighted-average results under three scenarios—baseline (most likely) assumptions, pessimistic assumptions, and optimistic assumptions. Table 4 presents the results for individual owners under baseline, or most likely, assumptions. Table 5 shows the results of our analysis under optimistic assumptions for individual owners whose trust funds were below the benchmarks for both balances and recent contributions under the baseline scenario. Table 6 presents the results under pessimistic assumptions for individual owners whose trust funds were zero to 100 percent above the benchmark balances and/or contributions under the baseline scenario. See appendix I for a description of our methodology.

The following are GAO's comments on NRC's letter dated October 3, 2003.

1. Rather than concluding that NRC does not have a method, we stated that the agency's analysis was not effective in identifying owners who might be at risk of accumulating insufficient funds to pay for decommissioning. For example, NRC relied on the owners' future funding plans to make up any shortfalls without verifying whether the plans are consistent with the owners' recent contributions. See also our response in the Agency Comments and Our Evaluation section on page 16.

2. We agree that NRC should be primarily concerned with ensuring that owners of nuclear power plants will have sufficient funds for decommissioning. However, we believe that NRC should take a proactive, rather than reactive, approach to providing owners and the public with a more complete understanding of how it will hold accountable those owners that are not accumulating sufficient funds. As discussed in the report, the lack of any specific criteria for acting on owners' decommissioning financial reports contrasts with NRC's practices in overseeing safety issues at nuclear plants, where the agency uses an "Action Matrix" that provides for a range of actions commensurate with the significance of safety inspection findings and performance indicators. Without similar criteria in its oversight of decommissioning funding assurance, NRC will not have a logical, coherent, consistent, and predictable plan of action if and when it encounters owners whose plants have inadequate financial assurance. See also our response in the Agency Comments and Our Evaluation section on page 16.

3. See our responses to comments 5, 6, and 9 in this appendix.

4. See our response to comment 9.

5. We agree that current NRC regulations do not establish intermediate benchmarking levels but rather establish the minimum balance that must be obtained when plants are retired. We also agree that the state regulatory authorities and the Federal Energy Regulatory Commission play a role. However, we believe that NRC should take a more proactive approach in developing an effective method for ensuring that sufficient funds will be available for decommissioning. For example, a common expected rate of return could be used to project the earnings of each owner's trust fund. NRC's current method allows the owners to use up to 2 percent (real) or another rate if approved by the owner's state regulator.
As we stated in our report, one state regulator approved owners of the same plant to use widely varying rates of return to project earnings on their trust fund investments. Other factors being equal, the owner using the higher rate would need to collect fewer funds than the owner using a lower rate of return. While the actual rate the owners will earn on their funds could be higher or lower, NRC accepted the state regulator-approved rates without assessing whether they were consistent with market expectations. In another example, in its 2001 biennial report, one owner using NRC's 2 percent rate of return estimated that the amount of funds needed for decommissioning under NRC's regulations would be insufficient at five of its nuclear power plants. Therefore, the owner provided additional assurance in the form of a parent guarantee. However, the owner sought and subsequently received approval from its state regulator to use a higher real rate of return. After receiving the approval, the owner withdrew its parent guarantee because, under the higher rate, the projected trust funds were sufficient to cover estimated decommissioning costs. We believe that by being more proactive, and not simply deferring to others, NRC can develop a more effective and consistent method and better achieve its primary concern of ensuring that owners are accumulating funds at sufficient rates.

6. We found no evidence during our review that NRC has ever determined that an owner is not accumulating sufficient funds. Therefore, without any experience in applying its "practice," we believe that, without clear criteria, NRC will not have a logical, coherent, consistent, and predictable plan of action if and when it encounters owners whose plants have inadequate financial assurance. Accordingly, we are recommending that NRC establish criteria for responding to unacceptable levels of financial assurance.

7. We agree that our method is different from that used by NRC. Our draft discussed and reviewed NRC's analysis. Based on our review, we concluded that NRC's analysis was not effective in identifying owners who might be at risk of accumulating insufficient funds to pay for eventual decommissioning. For example, NRC relied on the owners' future funding plans, or on rate-setting authority decisions, in concluding that the owners were on track to fully fund decommissioning. However, we found that some owners' actual 2001 contributions were much less than the amounts they stated in their 2001 biennial reports to NRC that they planned to contribute. For example, one owner contributed about $1.5 million (or 39 percent) less than the amount it told NRC that it planned to contribute. Moreover, using actual contributions the owners had recently made to their trust funds, we identified several owners that are at risk of accumulating insufficient funds to pay for eventual decommissioning.

8. We do not believe any changes are needed.

9. We agree, and our draft report stated, that NRC does not separately assess the status of each co-owner's decommissioning funding against the co-owner's private contractual obligation to fund decommissioning. The NRC guidance states: "Some licensees are part owners of power reactors.
In such cases, the reviewer should evaluate separately each licensee's amortization schedule [i.e., decommissioning funding] for its share of the facility, unless the lead licensee has agreed to coordinate funding documentation and reporting for all co-owners." Nonetheless, we revised the report to remove any inferences that NRC's practice is inconsistent with its internal guidance. Notwithstanding NRC's characterization of its practice, we believe that both the guidance and NRC's actions do not go far enough. For example, the guidance allows for an exception when the lead licensee agrees to coordinate documentation and reporting. More importantly, the critical issue is that NRC should do more to develop an effective method for assessing the adequacy of nuclear power plant owners' trust funds for decommissioning. Under NRC's current method, it combines the trust funds for all co-owners of a nuclear plant and then assesses the adequacy of decommissioning funds on a plant-wide basis. However, as our analysis indicates, combining the trust funds of several owners can produce misleading results because those co-owners with more than sufficient funds can appear to balance out those with less than sufficient funds. In addition, as a practical matter, owners have contractual agreements to pay for their share of decommissioning, and the trust funds are generally not transferable among owners. Unless NRC separately evaluates the adequacy of each co-owner's decommissioning trust fund, the agency's existing process would appear to require some co-owners to pay more than their fair share of decommissioning costs. We believe this would be inconsistent with NRC's stated policy of generally not looking to one co-owner to bail out another.

10. Rather than state that NRC has not developed and used a method, we found that the agency's method was not effective in identifying owners who might be at risk of accumulating insufficient funds to pay for decommissioning. For example, we identified several limitations in NRC's method, including the agency's practice of combining the trust funds for all the co-owners of a nuclear plant and then assessing whether the combined value of the trust funds is sufficient. We believe that this practice can produce misleading results because those co-owners with more than sufficient funds can appear to balance out those with less than sufficient funds. In addition, we agree that NRC has not established criteria for taking action when it finds cases of unacceptable levels of financial assurance. According to NRC officials we spoke to, NRC has never identified an owner with unacceptable levels of financial assurance. Moreover, the general activities that NRC described above are not included in its internal guidance for reviewing the owners' biennial reports. We believe that NRC should take a more proactive approach to providing owners and the public with a more complete understanding of how it will hold accountable owners that are not accumulating sufficient funds. We believe that established criteria for taking action when unacceptable levels of financial assurance are found will better prepare NRC to make this determination. Furthermore, having such criteria would not only increase public confidence that NRC has a plan to take action to ensure sufficient funds will be available for decommissioning but also would make its determination of inadequacy more transparent to owners.
11. As indicated in our draft report, we reviewed NRC's analysis of the owners' 2001 biennial reports. Our review clearly points out that the agency's method has limitations that reduce its effectiveness. For example, NRC relied on the owners' future funding plans to make up any shortfalls without verifying whether those plans are consistent with the owners' recent contributions. We found that some owners' actual 2001 contributions were much less than the amounts they stated in their 2001 biennial reports to NRC that they planned to contribute. For example, one owner contributed about $1.5 million (or 39 percent) less than the amount it told NRC that it planned to contribute. In addition, based on our analysis using the actual contributions the owners recently made to their trust funds, we found that 28 owners with ownership shares in 44 plants contributed less than the amounts we estimate they will need to contribute over the remaining life of their plants to meet their decommissioning obligations. Accordingly, we believe that our recommendation to NRC to develop an effective method is clearly warranted to ensure that all owners are accumulating funds at sufficient rates. See also our response to comment 12.

12. As stated in our draft, our conclusions are based on a method that uses a benchmark to assess the adequacy of each nuclear plant owner's decommissioning trust fund. In addition, our draft stated that this benchmark is not the only way an owner could accrue enough funds to pay future decommissioning costs. Still, we believe that our benchmark is useful for assessing the status of the owners' decommissioning trust funds because it (1) provides a common standard for comparisons among owners, (2) embodies the principle of economic efficiency in that the price of a product (i.e., electricity) should, if possible, equal all of its costs—current and accrued, and (3) provides for transparency in that it assumes that current and future users pay the same decommissioning fees, in constant present-value terms.

13. As we stated in our draft, NRC stated that it will not impose decommissioning costs on co-owners in a manner inconsistent with their agreed-upon shares, except in highly unusual circumstances when required by public health and safety considerations, and that it would not seek more than the pro rata shares from co-owners with de minimis ownership. Nevertheless, unless NRC separately evaluates the adequacy of each co-owner's decommissioning trust fund, the agency's existing process would appear to require some co-owners to pay more than their fair share of decommissioning costs. We believe this would be inconsistent with NRC's stated policy of generally not looking to one co-owner to bail out another.

In addition, Ronald La Due Lake, Carolyn McGowan, Cynthia Norris, Michael Sagalow, Barbara Timmerman, Daniel G. Williams, and Dwayne Weigel made key contributions to this report.
Following the shutdown of a nuclear power plant, a significant radioactive waste hazard remains until the waste is removed and the plant site decommissioned. In 1999, GAO reported that the combined value of the owners' decommissioning funds was insufficient to ensure enough funds would be available for decommissioning. GAO was asked to update its 1999 report and to evaluate the Nuclear Regulatory Commission's (NRC) analysis of the owners' funds and its process for acting on reports that show insufficient funds.

Although the collective status of the owners' decommissioning fund accounts has improved considerably since GAO's last report, some individual owners are not on track to accumulate sufficient funds for decommissioning. Based on our analysis and most likely economic assumptions, the combined value of the nuclear power plant owners' decommissioning fund accounts in 2000—about $26.9 billion—was about 47 percent greater than needed at that point to ensure that sufficient funds will be available to cover the approximately $33 billion in estimated decommissioning costs when the plants are permanently shut down. This value contrasts with GAO's prior finding that 1997 account balances were collectively 3 percent below what was needed. However, overall industry results can be misleading. Because funds are generally not transferable from accounts that have more than sufficient reserves to those with insufficient reserves, each individual owner must ensure that enough funds are available for decommissioning its particular plants. We found that 33 owners with ownership interests in a total of 42 plants had accumulated fewer funds than needed through 2000 to be on track to pay for eventual decommissioning. In addition, 20 owners with ownership interests in a total of 31 plants recently contributed less to their trust funds than we estimate they needed to put them on track to meet their decommissioning obligations.

NRC's analysis of the owners' 2001 biennial reports was not effective in identifying owners that might not be accumulating sufficient funds to cover their eventual decommissioning costs. In reviewing the 2001 reports, NRC reported that all owners appeared to be on track to have sufficient funds for decommissioning. In reaching this conclusion, NRC relied on the owners' future plans for fully funding their decommissioning obligations. However, based on the owners' recent actual contributions, and using a different method, GAO found that several owners could be at risk of not meeting their financial obligations for decommissioning when these plants stop operating. In addition, for plants with more than one owner, NRC did not separately assess the status of each co-owner's trust funds against each co-owner's contractual obligation to fund decommissioning. Instead, NRC assessed whether the combined value of the trust funds for the plant as a whole was reasonable. Such an assessment can produce misleading results because owners with more than sufficient funds can appear to balance out owners with less than sufficient funds, even though funds are generally not transferable among owners. Moreover, NRC has not established criteria for taking action if it determines that an owner is not accumulating sufficient funds.
On October 30, 2000, the Congress enacted GISRA, which became effective November 29, 2000, for a period of 2 years. GISRA supplemented information security requirements established in the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, and the Clinger-Cohen Act of 1996 and was consistent with existing information security guidance issued by OMB and NIST, as well as audit and best practice guidance issued by us. GISRA consolidated these separate requirements and guidance into an overall framework for managing information security and established new annual review, independent evaluation, and reporting requirements to help ensure agency implementation and both OMB and congressional oversight.

GISRA assigned specific responsibilities to OMB, agency heads and chief information officers (CIOs), and IGs. OMB was responsible for establishing and overseeing policies, standards, and guidelines for information security. This responsibility included the authority to approve agency information security programs, although GISRA delegated OMB's responsibilities regarding national security systems to national security agencies. OMB was also required to submit an annual report to the Congress summarizing the results of agencies' evaluations of their information security programs. OMB released its fiscal year 2001 report in February 2002 and its fiscal year 2002 report in May 2003.

GISRA required each agency, including national security agencies, to establish an agencywide risk-based information security program, overseen by the agency CIO, that ensures that information security is practiced throughout the life cycle of each agency system. Specifically, this program was to include periodic risk assessments that consider internal and external threats to the integrity, confidentiality, and availability of systems and of data supporting critical operations and assets; the development and implementation of risk-based, cost-effective policies and procedures to provide security protections for information collected or maintained by or for the agency; training on security responsibilities for information security personnel and on security awareness for agency personnel; periodic management testing and evaluation of the effectiveness of policies, procedures, controls, and techniques; a process for identifying and remediating any significant deficiencies; procedures for detecting, reporting, and responding to security incidents; and an annual program review by agency program officials.

In addition to the responsibilities listed above, GISRA required each agency to have an annual independent evaluation of its information security program and practices, including control testing and compliance assessment. The evaluations of non-national-security systems were to be performed by the agency IG or an independent evaluator, and the results of these evaluations were to be reported to OMB. For the evaluation of national security systems, special provisions included having national security agencies designate evaluators, restricting the reporting of evaluation results, and having the IG or an independent evaluator perform an audit of the independent evaluation. For national security systems, only the results of each audit of an evaluation were to be reported to OMB.
For first-year GISRA implementation, OMB provided guidance to the agencies in January 2001 and in June 2001 issued final instructions on reporting the results of annual agency security program reviews and inspector general independent evaluations to OMB to provide a basis for its annual report to the Congress. These instructions listed specific topics that the agencies were to address in their reporting, many of which were referenced back to corresponding GISRA requirements. Agencies were to report their results to OMB in September 2001—the same time they were to submit their fiscal year 2003 budget materials. In October 2001, OMB also issued detailed guidance to the agencies on reporting their strategies for correcting the security weaknesses identified through their reviews, evaluations, and other reviews or audits performed throughout the reporting period. This information was to include a "plan of action and milestones" (corrective action plan) that, among other things, listed the weaknesses; showed required resources, milestones, and completion dates; and described how the agency planned to address those weaknesses. The guidance also required agencies to submit quarterly status updates of their corrective action plans to OMB. Corrective action plans were due to OMB by the end of October, and the first quarterly updates were due January 31, 2002.

For fiscal year 2002, OMB provided the agencies with updated reporting instructions and guidance on preparing and submitting corrective action plans. Agencies were again to report their GISRA review and evaluation results to OMB in September, with corrective action plans due October 1, 2002, and the next quarterly update due on January 1, 2003. Although similar to the previous guidance, in response to agency requests and to recommendations we made to OMB as a result of our review of fiscal year 2001 GISRA implementation, this guidance incorporated several significant changes to help improve the consistency and quality of the information being reported for oversight by OMB and the Congress. These changes included the following:

Reporting instructions provided new high-level management performance measures that the agencies and IGs were required to use to report on agency officials' performance. These included, for example, the number and percentage of systems assessed for risk, the number and percentage of systems certified and accredited, the number of contractor operations or facilities reviewed, and the number of employees with significant security responsibilities who received specialized training.

OMB confirmed that agencies were expected to review all systems annually. It explained that GISRA requires senior agency program officials to review each security program for effectiveness at least annually and that the purpose of the security programs discussed in GISRA is to ensure the protection of the systems and data covered by the program. Thus, a review of each system is essential to determine the program's effectiveness, and only the depth and breadth of such system reviews are flexible.

Agencies were generally required to use all elements of NIST's Security Self-Assessment Guide for Information Technology Systems to review their systems unless an agency and its IG confirmed that an agency-developed methodology captured all elements of the guide. The guide uses an extensive questionnaire containing specific control objectives and techniques against which an unclassified system or group of interconnected systems can be tested and measured.
OMB requested that IGs verify that agency corrective action plans identify all known security weaknesses within an agency, including its components, and are used by the IG, the agency, its major components, and the program officials within them as the authoritative agency management mechanism to prioritize, track, and manage all agency efforts to close security performance gaps.

OMB authorized agencies to release certain information from their corrective action plans to assist the Congress in its oversight responsibilities. Agencies could release this information, as requested, excluding certain elements, such as estimated funding resources and the scheduled completion dates for resolving a weakness.

In its fiscal year 2002 report to the Congress, OMB stated that the federal government had made significant strides in addressing serious and pervasive IT security problems but that more needed to be done, particularly to address both the governmentwide weaknesses identified in its fiscal year 2001 report to the Congress and new challenges. Also, as discussed in a later section, OMB reported significant progress in agencies' IT security performance, primarily as indicated by the quantitative governmentwide performance measures that OMB required agencies to disclose beginning with their fiscal year 2002 reports.

OMB previously reported six common security weaknesses for the federal government. Actions and progress for these weaknesses reported by OMB in its fiscal year 2002 report were as follows:

Lack of senior management attention to information security. OMB reports that, based on agencies' security reviews, remediation efforts, and IT budget materials, it either conditionally approves or disapproves agency security programs, and the OMB Director communicates this decision directly to each agency head. Further, OMB used the President's Management Agenda Scorecard to focus attention on serious IT security weaknesses and, along with senior agency officials, to monitor agency progress on a quarterly basis. As a result, OMB concluded that senior executives at most agencies are paying greater attention to IT security.

Inadequate accountability for job and program performance related to IT security. OMB's instructions to federal agencies for fiscal year 2002 GISRA reporting included high-level management performance measures to assist agencies in evaluating their IT security status and the performance of officials charged with implementing specific security requirements.

Limited security training for general users, IT professionals, and security professionals. OMB stated that through the administration's "GoLearn" e-government initiative on establishing and delivering electronic training, IT security courses were available to all federal agencies in late 2002. Initial courses are targeted to CIOs and program managers, with additional courses to be added for IT security managers and the general workforce.

Inadequate integration of security into the capital planning and investment control process. OMB continues to address this issue through the budget process to ensure that adequate security is incorporated directly into, and funded over the life cycle of, all systems and programs before funding is approved. Further, OMB stated that through this process, agencies could demonstrate explicitly how much they are spending on security and associate that spending with a given level of performance. OMB also provided agencies with guidance in determining the security costs of their IT investments.
Poor security for contractor-provided services. Through the administration's Committee on Executive Branch Information Systems Security of the President's Critical Infrastructure Protection Board (since eliminated), an issue group was created to review this problem and develop recommendations for its resolution, including how security is addressed in contracts themselves. This issue is currently under review by the Federal Acquisition Regulatory Council to develop, for governmentwide use, a clause to ensure that security is appropriately addressed in contracts.

Limited capability to detect, report, and share information on vulnerabilities or to detect intrusions, suspected intrusions, or virus infections. OMB stated that addressing this weakness begins with incident detection and reporting by individual agencies to incident response centers at the Department of Homeland Security (DHS), the FBI, the Department of Defense, or elsewhere. OMB also noted that agencies must actively install corrective patches for known vulnerabilities and reported that the Federal Computer Incident Response Center (FedCIRC) awarded a contract on patch management to disseminate patches to all agencies more effectively. Among other actions, OMB and the CIO Council have developed and deployed a process to rapidly identify and respond to cyber threats and critical vulnerabilities.

Although not highlighted in OMB's report, in our April 2003 testimony before this subcommittee, we identified other activities undertaken to address these common weaknesses. In particular, during the past year, NIST has issued related security guidance, including draft guidelines on designing, developing, implementing, and maintaining an awareness and training program within an agency's IT security program; a draft guide on security considerations in federal IT procurements, including specifications, clauses, and tasks for areas such as IT security training and awareness, personnel security, physical security, and security features in systems; and procedures for handling security patches that provide principles and methodologies for establishing an explicit and documented patching and vulnerability policy and a systematic, accountable, and documented process for handling patches.

In addition to these identified weaknesses, in its fiscal year 2001 report, OMB stated that it would direct all large agencies to undertake a Project Matrix review to more clearly identify and prioritize the security needs for government assets. Project Matrix is a methodology developed by the Critical Infrastructure Assurance Office (CIAO) (recently transferred to the Department of Homeland Security) that identifies the critical assets within an agency, prioritizes them, and then identifies their interrelationships with other agencies or the private sector. OMB reported that once reviews have been completed at each large agency, it would identify cross-government activities and lines of business for Project Matrix reviews so that it will have identified, both vertically and horizontally, the critical operations and assets of the federal government's critical enterprise architecture and their relationships beyond government. In its fiscal year 2002 report, OMB acknowledged this requirement but did not assess agencies' overall progress or indicate a goal for when this process will be complete. As we testified in April 2003, 14 agencies reported they had identified their critical assets and operations—10 using Project Matrix and 4 using other methodologies.
Five more agencies reported that they were in some stage of identifying their critical assets and operations, and three more planned to do so in fiscal year 2003. However, this process may take several more years to complete because OMB has not established any deadlines for the completion of Project Matrix reviews.

OMB's fiscal year 2002 report also identifies several additional governmentwide issues and trends as concerns. These are as follows:

Agencies identify the same security weaknesses year after year, such as a lack of system-level security plans. OMB reports that it will assist agencies in prioritizing and reallocating funds to address these problems.

Some IGs and CIOs have vastly different views of the state of their agency's security programs, and OMB reports that it will highlight such discrepancies to agency heads.

Many agencies are not adequately prioritizing their IT investments and are seeking funding to develop new systems while significant security weaknesses exist in their legacy systems. OMB reports that it will assist agencies in reprioritizing their resources through the budget process.

Based on the information in the reports, not all agencies are successfully reviewing all programs and systems each year, as required by information security law.

More agency program officials must engage and be held accountable for ensuring that the systems that support their programs and operations are secure, rather than treating IT security as the responsibility of a single agency official or the agency's IT security office.

As part of its fiscal year 2002 report, OMB listed five areas in which it will continue to work with agencies to ensure progress in safeguarding the federal government's information and systems: (1) the plan of action and milestones process, (2) IT security performance measures, (3) the President's Management Agenda Scorecard, (4) governmentwide milestones for IT security, and (5) the threat and vulnerability response process. Key actions identified for these areas include the following:

To ensure that remediation plans continue to be developed and implemented and that corrective actions are prioritized and tracked, OMB guidance will instruct IGs, as part of their fiscal year 2003 FISMA work, to assess whether each agency has in place a robust agencywide plan of action and milestones process. A robust process, verified by agency IGs, is one of three criteria agencies must meet to "get to green" for security on the Expanding E-Government Scorecard.

To assist agencies and OMB in better tracking progress, along with their plan of action and milestones updates, agencies will also be required to begin quarterly reporting of their status against the OMB-prescribed IT security performance measures.

OMB set targeted milestones for improvement for some of the critical IT security weaknesses in the President's fiscal year 2004 budget. Targets for improvement include that, by the end of 2003, all agencies are to have an adequate agencywide process in place for developing and implementing program- and system-level plans, 80 percent of federal IT systems shall be certified and accredited, and 80 percent of the federal government's fiscal year 2004 major IT investments shall appropriately integrate security into the life cycle of the investment.

Our analyses of agency performance measure data and individual agencies' efforts to implement information security requirements showed limited progress in many cases.
This limited progress is indicated despite other benefits that have resulted from GISRA implementation, such as increased management attention to and accountability for information security; important actions by the administration, such as integrating information security into the President's Management Agenda Scorecard; an increase in the types of information being reported and made available for oversight; and the establishment of a baseline for measuring agencies' performance.

As mentioned previously, for fiscal year 2002 OMB required agencies to report performance measure data related to key information security requirements, such as assessing systems for risk and having up-to-date system security plans. Summarizing these data for 24 large federal agencies and comparing results between fiscal years 2001 and 2002, OMB reported in its fiscal year 2002 report that these data indicated that agencies had made significant progress. Table 1 shows the governmentwide results of this analysis reported by OMB for selected performance measures, which indicates improvements for these measures ranging from 18 to 27 percentage points. However, our analyses showed that most agencies experienced more limited progress than the OMB analysis indicates. Specifically, excluding data for the National Aeronautics and Space Administration (NASA), our analysis showed that increases for these same measures ranged from only 3 to 10 percentage points. NASA's performance measure data were excluded because its fiscal year 2001 data were based on a sample of 221 of its most critical systems but were compared with data for its total of 1,641 systems for fiscal year 2002. As a result, including the NASA data significantly affected the overall levels of governmentwide progress shown. Figure 1 shows the percentage change in performance measures based on our analysis, excluding data for NASA.

In addition to the impact of the NASA data, the performance data reported by the Department of Defense (DOD) also represent only a small sample of the thousands of systems DOD identified in total for the department and could significantly affect overall governmentwide results if data on all systems were available. DOD reported that, because of its size and complexity, the collection of specific metrics required sizable lead time to allow for the collection and approval process by each military service and agency. For this reason, DOD focused its fiscal year 2002 GISRA efforts on (1) a sample of 366 of its networks and (2) a sample of 155 systems that were selected from the sample of systems used for DOD's fiscal year 2001 GISRA review. It is these 155 systems for which DOD reported performance measure data.

In addition to our analysis of these overall performance measures, we analyzed the fiscal year 2002 GISRA reports of the 24 agencies and focused on the status of individual agencies in implementing federal information security requirements related to these and other measures. These analyses showed mixed agency progress; overall, many agencies still had not established information security programs that implement these requirements for most of their systems.
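A small numeric sketch, again in Python and using invented figures rather than actual NASA or agency data, shows how a mismatched baseline of this kind can overstate aggregate, systems-weighted progress.

    # Invented figures (not actual agency data) illustrating the baseline
    # mismatch: agency A reports a 221-system critical sample in year 1
    # but all 1,641 of its systems in year 2, while the other agencies
    # report their full populations in both years.
    a_total_y1, a_assessed_y1 = 221, 40       # small, low-scoring sample
    a_total_y2, a_assessed_y2 = 1641, 1500    # full population, high score

    others_total = 6000
    others_assessed_y1, others_assessed_y2 = 3000, 3300  # modest real gain

    def pct(assessed, total):
        return 100.0 * assessed / total

    gain_with_a = (pct(a_assessed_y2 + others_assessed_y2,
                       a_total_y2 + others_total)
                   - pct(a_assessed_y1 + others_assessed_y1,
                         a_total_y1 + others_total))
    gain_without_a = (pct(others_assessed_y2, others_total)
                      - pct(others_assessed_y1, others_total))

    print(round(gain_with_a, 1))     # about 14 percentage points
    print(round(gain_without_a, 1))  # 5 percentage points

With these hypothetical numbers, the apparent governmentwide gain nearly triples even though the other agencies improved by only 5 percentage points, which is why the analysis above excludes the mismatched data.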
Summaries of our analyses for selected information security requirements and reported performance measures follow.

Agencies are required to perform periodic threat-based risk assessments for systems and data. Risk assessments are an essential element of risk management and overall security program management and, as our best practice work has shown, are an integral part of the management processes of leading organizations. Risk assessments help ensure that the greatest risks have been identified and addressed, increase the understanding of risk, and provide support for needed controls. Our reviews of federal agencies, however, frequently show deficiencies related to assessing risk, such as security plans for major systems that are not developed on the basis of risk. As a result, the agencies had accepted an unknown level of risk by default rather than consciously deciding what level of risk was tolerable. OMB's performance measure for this requirement mandated that agencies report the number and percentage of their systems that have been assessed for risk during fiscal year 2001 and fiscal year 2002. Our analyses of reporting for this measure showed some overall progress. For example, of the 24 agencies, 13 reported an increase in the percentage of systems assessed for fiscal year 2002 compared with fiscal year 2001. In addition, as illustrated in figure 2, for fiscal year 2002, 11 agencies reported that they had assessed risk for 90 to 100 percent of their systems. However, figure 2 also shows that further efforts are needed by other agencies, including the 8 that reported that less than 50 percent of their systems had been assessed for risk.

An agency head is required to ensure that the agency's information security plan is practiced throughout the life cycle of each agency system. In its reporting instructions, OMB required agencies to report whether the agency head had taken specific and direct actions to oversee that program officials and the CIO are ensuring that security plans are up to date and practiced throughout the life cycle. Agencies also had to report the number and percentage of systems that had an up-to-date security plan. Our analyses showed that although most agencies reported that they had taken such actions, IG reports disagreed for a number of agencies, and many systems do not have up-to-date security plans. Specifically, 21 agencies reported that the agency head had taken actions to oversee that security plans are up to date and practiced throughout the life cycle. In comparison, of the 21 IGs that addressed this issue, 9 reported such actions had been taken and 12 reported that they had not. One IG reported that the agency's security plan guidance predates revisions to NIST and OMB guidance and, as a result, does not contain key elements, such as the risk assessment methodology used to identify threats and vulnerabilities. In addition, another IG reported that although progress had been made, security plans had not been completed for 62 percent of the agency's systems. Regarding the status of agencies' security plans, as shown in figure 3, 9 of the 24 agencies reported that they had up-to-date security plans for less than 50 percent of their systems for fiscal year 2002. Of the remaining 15 agencies, 7 reported up-to-date security plans for 90 percent or more of their systems.

As one of its performance measures for agency program official responsibilities, OMB required agencies to report the number and percentage of systems that have been authorized for processing following certification and accreditation. Our analysis of agencies' reports showed mixed progress for this measure.
For example, 10 agencies reported increases in the percentage of systems authorized for processing following certification and accreditation compared with fiscal year 2001, but 8 reported decreases and 3 reported no change (3 others did not provide sufficient data). In addition, as shown in figure 4, 11 agencies reported that for fiscal year 2002, 50 percent or more of their systems had been authorized for processing following certification and accreditation, with only 3 of these reporting from 90 to 100 percent. Of the remaining 13 agencies, which reported less than 50 percent, 3 reported that none of their systems had been authorized. In addition to this mixed progress, IG reports identified instances in which agencies' certification and accreditation efforts were inadequate. For example, one agency reported that 43 percent of its systems were authorized for processing following certification and accreditation. The IG's report agreed but also noted that over a quarter of the systems identified as authorized had been operating with an interim authorization and did not meet all of the security requirements to be granted accreditation. The IG also stated that, because of the risk posed by systems operating without certification and full accreditation, the department should consider identifying this deficiency as a material weakness.

An agency head is responsible for ensuring that the appropriate agency officials evaluate the effectiveness of the information security program, including testing controls. Further, the agencywide information security program is to include periodic management testing and evaluation of the effectiveness of information security policies and procedures. Periodically evaluating the effectiveness of security policies and controls and acting to address any identified weaknesses are fundamental activities that allow an organization to manage its information security risks cost-effectively, rather than reacting to individual problems on an ad hoc basis only after a violation has been detected or an audit finding has been reported. Further, management control testing and evaluation as part of the program reviews can supplement control testing and evaluation in IG and our audits to help provide a more complete picture of the agencies' security postures. As a performance measure for this requirement, OMB required agencies to report the number and percentage of systems for which security controls have been tested and evaluated during fiscal years 2001 and 2002. Our analyses of the data agencies reported for this measure showed that although 15 agencies reported an increase in the overall percentage of systems being tested and evaluated for fiscal year 2002, most agencies are not testing all of their systems. As shown in figure 5, our analyses showed that 10 agencies reported that they had tested the controls of less than 50 percent of their systems for fiscal year 2002. Of the remaining 14 agencies, 4 reported that they had tested and evaluated controls for 90 percent or more of their systems.

Contingency plans provide specific instructions for restoring critical systems, including such items as arrangements for alternative processing facilities, in case the usual facilities are significantly damaged or cannot be accessed. At many of the agencies we have reviewed, plans and procedures to ensure that critical operations can continue when unexpected events occur, such as temporary power failure, accidental loss of files, or major disaster, were incomplete.
These plans and procedures were incomplete because operations and supporting resources had not been fully analyzed to determine which were critical and would need to be restored first. Further, existing plans were not fully tested to identify their weaknesses. As a result, many agencies have inadequate assurance that they can recover operational capability in a timely, orderly manner after a disruptive attack. As another of its performance measures, OMB required agencies to report the number and percentage of systems for which contingency plans have been tested in the past year. As shown in figure 6, our analyses indicated that for fiscal year 2002, only 2 agencies reported that they had tested contingency plans for 90 percent or more of their systems, and 19 had tested contingency plans for less than 50 percent of their systems. One reported that none had been tested.

Agencies are required to provide training on security awareness for agency personnel and on security responsibilities for information security personnel. Our studies of best practices at leading organizations have shown that such organizations took steps to ensure that personnel involved in various aspects of their information security programs had the skills and knowledge they needed. They also recognized that staff expertise had to be frequently updated to keep abreast of ongoing changes in threats, vulnerabilities, software, security techniques, and security monitoring tools. However, our past information security reviews at individual agencies have shown that they have not provided adequate computer security training to their employees, including contractor staff. Among the performance measures for these requirements, OMB mandated that agencies report the number and percentage of employees—including contractors—who received security training during fiscal years 2001 and 2002, and the number of employees with significant security responsibilities who received specialized training. Our analyses showed that 16 agencies reported that they provided security training to 50 percent or more of their employees and contractors for fiscal year 2002, with 9 reporting 90 percent or more. Of the remaining 8 agencies, 4 reported that such training was provided for less than half of their employees and contractors, 1 reported that none were provided with this training, and 3 provided insufficient data for this measure. For specialized training for employees with significant security responsibilities, some progress was indicated, but additional training is needed. As indicated in figure 7, our analyses showed that 12 agencies reported that 50 percent or more of their employees with significant security responsibilities had received specialized training for fiscal year 2002, with 5 reporting 90 percent or more. Of the remaining 12 agencies, 9 reported that less than half of such employees received specialized training, 1 reported that none had received such training, and 2 provided insufficient data for this measure.

Agencies are required to implement procedures for detecting, reporting, and responding to security incidents. Although even strong controls may not block all intrusions and misuse, organizations can reduce the risks associated with such events if they promptly take steps to detect intrusions and misuse before significant damage can be done.
In addition, accounting for and analyzing security problems and incidents are effective ways for an organization to gain a better understanding of threats to its information and of the cost of its security-related problems. Such analyses can also pinpoint vulnerabilities that need to be addressed to help ensure that they will not be exploited again. In this regard, problem and incident reports can provide valuable input for risk assessments, help in prioritizing security improvement efforts, and be used to illustrate risks and related trends in reports to senior management. Our information security reviews also confirm that federal agencies have not adequately (1) prevented intrusions before they occur, (2) detected intrusions as they occur, (3) responded to successful intrusions, or (4) reported intrusions to staff and management. Such weaknesses provide little assurance that unauthorized attempts to access sensitive information will be identified and appropriate actions taken in time to prevent or minimize damage.

OMB included a number of performance measures in agency reporting instructions that were related to detecting, reporting, and responding to security incidents. These included the number of agency components with an incident-handling and response capability, whether the agency and its major components share incident information with FedCIRC in a timely manner, and the numbers of incidents reported. OMB also required that agencies report on how they confirmed that patches have been tested and installed in a timely manner. Our analyses of agencies' reports showed that although most agencies reported that they have established incident-response capabilities, implementation of these capabilities is still not complete. For example, 12 agencies reported that for fiscal year 2002, 90 percent or more of their components had incident-handling and response capabilities, and 8 others reported that they provided these capabilities to components through a central point within the agency. However, although most agencies report having these capabilities for most components, in at least two cases, the IGs' evaluations identified instances in which incident-response capabilities were not always implemented. For example, one IG reported that the agency established and implemented its computer security incident-response capability on August 1, 2002, but had not enforced procedures to ensure that components comply with a consistent methodology to identify, document, and report computer security incidents. Another IG reported that the agency had released incident-handling procedures and established a computer incident-response team but had not formally assigned members to the team or effectively communicated procedures to employees. Our analyses also showed that for fiscal year 2002, 13 agencies reported that they had oversight procedures to verify that patches had been tested and installed in a timely manner, and 10 reported that they did not. Of those that did not have procedures, several specifically mentioned that they planned to participate in FedCIRC's patch management process.

Agencies are required to develop and implement risk-based, cost-effective policies and procedures to provide security protection for information collected or maintained either by the agency or for it by another agency or contractor.
In its fiscal year 2001 GISRA report to the Congress, OMB identified poor security for contractor-provided services as a common weakness and, for fiscal year 2002 reporting, included performance measures to help indicate whether agency program officials and the CIO used appropriate methods, such as audits and inspections, to ensure that services provided by a contractor are adequately secure and meet security requirements. Our analyses showed that a number of agencies reported that they have reviewed a large percentage of services provided by a contractor, but others have reviewed only a small number. For operations and assets under the control of agency program officials, 17 agencies reported that for fiscal year 2002 they reviewed 50 percent or more of contractor operations or facilities, with 7 of these reporting that they reviewed 90 percent or more. Four agencies reported that they had reviewed less than 30 percent of contractor operations or facilities. For operations and assets under the control of the CIO, 13 agencies reported that for fiscal year 2002 they reviewed 50 percent or more of contractor operations or facilities, with 7 of these reporting that they reviewed 90 percent or more. Of the remaining agencies, 3 reported that they reviewed less than 30 percent of contractor operations or facilities, and 5 reported that they had no services provided by a contractor or another agency.

Developing effective corrective action plans is key to ensuring that remedial action is taken to address significant deficiencies. Further, a centralized process for monitoring and managing remedial actions enables the agency to identify trends, root causes, and entitywide solutions. OMB has required agency heads to work with CIOs and program officials to provide a strategy to correct security weaknesses identified through annual program reviews and independent evaluations, as well as other reviews or audits performed throughout the reporting period by the IG or by us. Agencies are also required to submit corrective action plans for all programs and systems where a security weakness has been identified. OMB guidance requires that these plans list the identified weaknesses and, for each, identify a point of contact, the resources required to resolve the weakness, the scheduled completion date, key milestones with completion dates, milestone changes, the source of the weakness (such as a program review, IG audit, or GAO audit), and the status (ongoing or completed). Agencies are also required to submit quarterly updates of these plans that list the total number of weaknesses identified at the program and system levels, as well as the numbers of weaknesses for which corrective actions were completed on time, are ongoing and on schedule, or are delayed. Updates are also to include the number of new weaknesses discovered subsequent to the last corrective action plan or quarterly update.

As reported in its fiscal year 2002 report to the Congress, OMB requires that agencies establish and maintain an agencywide process for developing and implementing program- and system-level corrective action plans and that these plans serve as an agency's authoritative management tool to ensure that program- and system-level IT security weaknesses are remediated. In addition, OMB requires that every agency maintain a central process through the CIO's office to monitor agency remediation activity.
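The elements OMB's guidance requires for each weakness, together with the quarterly rollup counts, map naturally onto a simple record layout. The sketch below is illustrative only; the field names are our own, since OMB prescribes the content of corrective action plans, not a data format.

    # Illustrative layout for a corrective action plan ("plan of action
    # and milestones") entry, using the elements OMB's guidance lists.
    # Field names are our own, not an OMB-prescribed format.
    from dataclasses import dataclass, field

    @dataclass
    class Weakness:
        description: str
        point_of_contact: str
        resources_required: str            # funding or staff estimate
        scheduled_completion: str          # target date
        milestones: list = field(default_factory=list)
        milestone_changes: list = field(default_factory=list)
        source: str = "program review"     # or "IG audit", "GAO audit"
        status: str = "ongoing"            # "ongoing" or "completed"
        on_schedule: bool = True           # tracking for ongoing items

    def quarterly_update(weaknesses):
        """Rollup counts of the kind OMB's quarterly updates call for."""
        return {
            "total": len(weaknesses),
            "completed_on_time": sum(w.status == "completed"
                                     for w in weaknesses),
            "ongoing_on_schedule": sum(w.status == "ongoing" and w.on_schedule
                                       for w in weaknesses),
            "delayed": sum(w.status == "ongoing" and not w.on_schedule
                           for w in weaknesses),
        }

A layout like this makes the gaps GAO found easy to detect mechanically: missing weaknesses, absent status fields, and unrealistic completion dates all show up as incomplete or implausible records.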
Our analyses of agencies’ fiscal year 2002 corrective action plans, IGs’ evaluations of these plans, and available quarterly updates showed that the usefulness of these plans as part of agency management’s overall process to identify and correct their information security weaknesses could be limited when they do not identify all weaknesses or provide realistic completion estimates. For example, of 14 agency IGs that reported on whether or not their agency’s corrective action plan addressed all identified significant weaknesses, only 5 reported that their agency’s plan did so, and 9 specifically reported that their agency’s plan did not. Further, in several instances, corrective action plans did not indicate the current status of weaknesses identified or include information regarding whether actions were on track as originally scheduled. In addition, most agencies did not indicate the relative priority of weaknesses for corrective action. As a result, it was difficult to determine whether an agency’s actions are focused on achieving results for its most significant weaknesses. Further, three IGs reported that their agencies did not have a centralized tracking system to monitor the status of corrective actions, and one IG specifically questioned the accuracy of unverified, self- reported corrective actions reported in the agency’s plan. In its report, OMB highlighted several actions that may help to address such concerns. For example, OMB reported that since completion of their fiscal year 2002 reviews, agencies have been working to prioritize their IT security weaknesses. In addition, OMB stated that fiscal year 2003 FISMA reporting guidance would direct agency IGs to verify whether an agency has a central process to monitor remediation, as required by OMB. The governmentwide weaknesses identified by OMB in its reports to the Congress, as well as the limited progress in implementing key information security requirements, continue to emphasize that agencies have not effectively implemented programs for managing information security. For the past several years, we have analyzed the audit results for 24 of the largest federal agencies and found that all 24 had significant weaknesses in the policies, procedures, and technical controls that apply to all or a large segment of their information systems and help ensure their proper operation. In particular, our analyses in both 2001 and 2002 found that all 24 had weaknesses in security program management, which is fundamental to the appropriate selection and effectiveness of the other categories of controls. Security program management covers a range of activities related to understanding information security risks; selecting and implementing controls commensurate with risk; and ensuring that controls, once implemented, continue to operate effectively. Establishing a strong security management program requires that agencies take a comprehensive approach that involves both (1) senior agency program managers who understand which aspects of their missions are the most critical and sensitive and (2) technical experts who know the agencies’ systems and can suggest appropriate technical security control techniques. We studied the practices of organizations with superior security programs and summarized our findings in a May 1998 executive guide entitled Information Security Management: Learning From Leading Organizations. Our study found that these organizations managed their information security risks through a cycle of risk management activities. 
These activities, which are now among the federal government's statutory information security requirements, included (1) assessing risks and determining protection needs, (2) selecting and implementing cost-effective policies and controls to meet those needs, (3) promoting awareness of policies and controls and of the risks that prompted their adoption among those responsible for complying with them, and (4) implementing a program of routine tests and examinations for evaluating the effectiveness of policies and related controls and reporting the resulting conclusions to those who can take appropriate corrective action. Although GISRA reporting provides performance information on these areas, it is important for agencies to ensure that they have the appropriate management structures and processes in place to strategically manage information security, as well as ensure the reliability of performance information. For example, disciplined processes can routinely provide the agency with timely, useful information for day-to-day management of information security. Also, development of management strategies that identify specific actions, time frames, and required resources may help to significantly improve performance. With GISRA expiring on November 29, 2002, FISMA was enacted on December 17, 2002, to permanently authorize and strengthen the information security program, evaluation, and reporting requirements established by GISRA. In particular, FISMA provisions established additional requirements that can assist the agencies in implementing effective information security programs, help ensure that agency systems incorporate appropriate controls, and provide information for administration and congressional oversight. These specific requirements are described and discussed below. FISMA requires an agency's CIO to designate a senior agency information security officer who, for the agency's FISMA-prescribed information security responsibilities, shall carry out the CIO's responsibilities; possess professional qualifications, including training and experience, required to administer the required functions; have information security duties as that official's primary duty; and head an office with the mission and resources to assist in ensuring agency compliance. In contrast, GISRA required the CIO to designate a senior agency information security official, but did not specify the responsibilities, qualifications, or other requirements for this position. Agencies' fiscal year 2002 GISRA reports showed that the CIOs had designated a senior agency information security official for 22 of the 24 agencies (the remaining 2 agencies' reports did not indicate whether they had designated such an official), but OMB did not require the agencies to report any additional information on the responsibilities of this official. FISMA requires each agency to develop, maintain, and annually update an inventory of major information systems (including major national security systems) operated by the agency or under its control. This inventory is also to include an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. FISMA also mandates that OMB issue guidance and oversee the implementation of this requirement.
Although GISRA did not specifically require that agencies maintain an inventory of major information systems, OMB reporting instructions for fiscal year 2002 did require agencies to report the total number of agency systems, and most agencies reported a total number in their GISRA reports. However, six IGs specifically reported problems with the completeness of their agencies' system inventories. FISMA includes a number of requirements for NIST to develop security-related standards and guidelines. These include, for systems other than those dealing with national security, (1) standards to be used by all agencies to categorize all of their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels, (2) guidelines recommending the types of information and information systems to be included in each category, and (3) minimum information security requirements for information and information systems in each category. For the first of these requirements—standards for security categorization—NIST is to submit the standards to the Secretary of Commerce for promulgation no later than 12 months after enactment (December 17, 2003). The guidelines on the types of information and information systems to be included in each category are required to be issued no later than 18 months after enactment (June 17, 2004). The minimum information security requirements are required to be submitted to the Secretary for promulgation no later than 36 months after enactment (December 17, 2005). On May 16, 2003, NIST issued an initial public draft of the standards for security categorization for comment. These proposed standards would establish three levels of risk—low, moderate, and high—and would categorize information and information systems with respect to security by having an agency assign the appropriate level of risk to each of three security objectives: (1) confidentiality, defined as preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information; (2) integrity, defined as guarding against improper information modification or destruction, and including ensuring information nonrepudiation and authenticity; and (3) availability, defined as ensuring timely and reliable access to and use of information. Also according to the draft standard, because an information system may contain more than one type of information that is subject to security categorization (such as privacy information, medical information, proprietary information, financial information, and contractor-sensitive information), the security categorization of an information system that processes, stores, or transmits multiple types of information should be at least the highest risk level that has been determined for each type of information for each security objective, taking into account dependencies among the objectives. FISMA also requires NIST to develop, in conjunction with the Department of Defense, including the National Security Agency, guidelines for identifying an information system as a national security system. On June 3, 2003, NIST released a draft working paper of these guidelines that provides the basis and method for identifying national security systems, including agency determination and reporting responsibilities.
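To illustrate the draft standard's aggregation rule, the following is a minimal sketch of the "highest risk level" logic described above. The information types and risk levels in the example are hypothetical, and the sketch omits the draft's further adjustment for dependencies among objectives; it illustrates the rule and is not NIST code.

```python
# Illustrative sketch of the draft categorization rule (not NIST code): for
# each security objective, a system handling multiple information types is
# categorized at the highest risk level assigned to any of those types.
RANK = {"low": 1, "moderate": 2, "high": 3}
OBJECTIVES = ("confidentiality", "integrity", "availability")

def categorize_system(information_types):
    """information_types: one {objective: risk level} dict per type of
    information the system processes, stores, or transmits."""
    return {
        obj: max((t[obj] for t in information_types), key=RANK.get)
        for obj in OBJECTIVES
    }

# Hypothetical system holding both privacy and financial information:
privacy = {"confidentiality": "high", "integrity": "moderate", "availability": "low"}
financial = {"confidentiality": "moderate", "integrity": "high", "availability": "moderate"}
print(categorize_system([privacy, financial]))
# {'confidentiality': 'high', 'integrity': 'high', 'availability': 'moderate'}
```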
For non-national-security programs, GISRA required those performing the annual independent evaluations (essentially the IGs) to report the results of their evaluations to OMB and required OMB to summarize these results in an annual report to the Congress. In addition, OMB required the agencies to report the results of their annual GISRA security reviews of systems and programs. FISMA now requires agencies to report annually to OMB, as well as to the House Committees on Government Reform and Science; the Senate Committees on Governmental Affairs and Commerce, Science, and Transportation; the appropriate congressional authorizing and appropriations committees; and the Comptroller General; on the adequacy and effectiveness of information security policies, procedures, and practices, including compliance with each of FISMA's requirements for an agencywide information security program. In summary, with few exceptions, agencies' implementation of federal information security requirements has not yet shown significant progress. Legislation, congressional oversight like today's hearing, and efforts by OMB through the budget process, the President's Management Agenda Scorecard, and other tools, such as corrective action plans and performance measures, have all contributed to increasing agency management's attention to information security. Also, new techniques, such as establishing governmentwide performance goals and quarterly reporting of performance measures, may help to further encourage agency progress and facilitate congressional and administration oversight. However, in addition to these steps, achieving significant and sustainable results will likely require agencies to integrate such techniques into overall security management programs and processes that prioritize and routinely monitor and manage their information security efforts. These programs and processes must focus on implementing statutory security requirements, including performing risk assessments, testing and evaluating controls, and identifying and correcting weaknesses to ensure that the greatest risks have been identified, security controls have been implemented to address these risks, and that critical operations can continue when unexpected events occur. Development of management strategies that identify specific actions, time frames, and required resources may also help to significantly improve performance. Further, agencies will need to ensure that systems and processes are in place to provide information and facilitate the day-to-day management of information security throughout the agency, as well as to verify the reliability of reported performance information. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Subcommittee may have at this time. If you should have any questions about this testimony, please contact me at (202) 512-3317. I can also be reached by E-mail at [email protected].
Since 1996, GAO has reported that poor information security in the federal government is a widespread problem with potentially devastating consequences. Further, GAO has identified information security as a governmentwide high-risk issue in reports to the Congress since 1997--most recently in January 2003. To strengthen information security practices throughout the federal government, information security legislation has been enacted. This testimony discusses efforts by federal departments and the administration to implement information security requirements mandated by law. In so doing, it examines overall information security weaknesses and challenges that the government faces and the status of actions to address them, as reported by the Office of Management and Budget (OMB); GAO's evaluation of agency efforts to implement federal information security requirements and correct identified weaknesses; and new requirements mandated by the Federal Information Security Management Act of 2002 (FISMA). Based on the fiscal year 2002 reports submitted to OMB, the federal government has made limited overall progress in implementing statutory information security requirements, although a number of benefits have resulted. Among these benefits are several actions taken and planned to address governmentwide information security weaknesses and challenges, such as lack of senior management attention. Nevertheless, as indicated by selected quantitative performance measures for the largest federal agencies, progress has been limited. Specifically, excluding data for one agency that were not comparable for fiscal years 2001 and 2002, improvements for 23 agencies ranged from 3 to 10 percentage points for the selected measures. GAO's analyses of agencies' reports and evaluations confirmed that many agencies have not implemented security requirements for most of their systems, such as performing risk assessments and testing controls. Further, the usefulness of agency corrective action plans may be limited when they do not identify all weaknesses or contain realistic completion dates. Agencies also continue to face challenges in effectively implementing and managing their overall information security programs. FISMA provisions establish additional requirements that, among other things, can assist agencies in implementing effective information security programs. However, attaining significant and sustainable results in implementing such requirements will also likely require processes that prioritize and routinely monitor and manage agency efforts, as well as continued congressional and administration oversight.
Under the EB-5 Program Regional Center model, first enacted as a pilot program in 1992 and reauthorized numerous times since, a certain number of the EB-5 visas are set aside annually for immigrant investors investing within economic units called regional centers, which are established to promote economic growth. Most recently, the EB-5 Program Regional Center model was extended until September 30, 2016. Immigrant investors can choose to invest on their own or with others directly in a business, or they may use a regional center to pool their investment with those of other immigrant investors and other foreign and U.S. investors to develop larger projects owned and managed by others. Immigrant investors must demonstrate that their investment in a new commercial enterprise will result in the creation or, in the case of a troubled business, preservation or creation (or some combination of the two), of at least 10 full-time positions for qualifying employees. In recent years, the EB-5 Program has increased in popularity as a viable source of low-interest funding for major real estate development projects, such as the Barclays Center—a multipurpose indoor arena in Brooklyn, New York—and the Marriott Convention Center Hotel in Washington, D.C. Individuals seeking to establish a regional center under the EB-5 Program must submit an initial application and supporting documentation as well as an update for each fiscal year (or as otherwise requested by USCIS) showing that the regional center continues to meet the program requirements to maintain its regional-center designation. Prospective regional-center principals apply to the program by submitting Form I-924, Application for Regional Center under the Immigrant Investor Pilot Program. On this form, applicants are to provide a proposal, supported by economically or statistically valid forecasting tools, that describes, among other things, how the regional center (1) focuses on a geographic area of the United States; (2) will promote economic growth through increased export sales and improved regional productivity, job creation, and increased domestic capital investment; and (3) will create jobs directly or indirectly. Applicants must also include a detailed statement regarding the amount and source of capital committed to the regional center, as well as a description of the promotional efforts they have taken and planned. Once a regional center has been approved to participate in the program, a designated representative of the regional center must file a Form I-924A, Supplement to Form I-924, for each fiscal year, to provide USCIS with updated information demonstrating that the regional center continues to promote economic growth, improved regional productivity, job creation, or increased domestic capital investment within its approved geographic area. USCIS is to issue a notice of intent to terminate the participation of a regional center if the center fails to submit the required information or upon a determination that the regional center no longer serves the purpose of promoting economic growth. As shown in figure 1, prospective immigrant investors seeking to participate in the EB-5 Program must complete three forms and provide supporting documentation as appropriate. 
Supporting documentation is assessed to ensure that the prospective immigrant investors have met (1) the terms of participation for the program, (2) criteria for lawful admission for permanent residence on a conditional basis, and (3) requirements of the program to have the conditional basis of their lawful permanent resident status removed. As of August 2016, USCIS had approved approximately 851 regional centers spread across 48 states, the District of Columbia, and four U.S. territories, and had terminated the participation of 61 regional centers for not filing a Form I-924A or not promoting economic growth. The Fraud Risk Framework identifies leading practices for agencies to manage fraud risks. It includes control activities that help agencies prevent, detect, and respond to fraud risks as well as structures and environmental factors that influence or help managers achieve their objectives to mitigate fraud risks. The framework consists of four components for effectively managing fraud risks: commit, assess, design and implement, and evaluate and adapt (see fig. 2). Leading practices for each of these components include the following: (1) commit: create an organizational culture to combat fraud at all levels of the agency, and designate an entity within the program office to lead fraud risk management activities; (2) assess: assess the likelihood and impact of fraud risks, determine risk tolerance, examine the suitability of existing controls, and prioritize residual risks; (3) design and implement: develop, document, and communicate an antifraud strategy, focusing on preventive control activities; and (4) evaluate and adapt: collect and analyze data from reporting mechanisms and instances of detected fraud for real-time monitoring of fraud trends, and use the results of monitoring, evaluations, and investigations to improve fraud prevention, detection, and response. Since August 2015, USCIS has continued to take steps intended to enhance its fraud-detection activities. This includes conducting and planning risk assessments to gather additional information on potential fraud risks to the program. USCIS is also taking steps to collect more applicant and petitioner information through a random site visit pilot and expanding its use of background checks, among other things, to help improve its ability to identify specific incidents of fraud. Further, USCIS has taken preliminary steps to digitize and analyze the paper files submitted by petitioners and applicants to the program. However, the EB-5 Program is hampered by a reliance on voluminous paper files, and failing to carry through with these planned efforts could limit USCIS's ability to improve fraud risk management. USCIS is currently conducting multiple risk assessments to help assess fraud risks to the program and has plans for future assessments. DHS concurred with our August 2015 recommendation that USCIS plan and conduct regular fraud risk assessments of the EB-5 Program in the future. In January 2016 and April 2016, USCIS officials updated us about their ongoing actions to conduct and plan risk assessments for the program.
These assessments include a current study of potential fraud associated with certain immigrant investors' source of funds, a random site visit pilot that is planned for completion later this year, and a planned study of all national-security concerns associated with the program:

- Source-of-funds study: FDNS is leveraging overseas staff to attempt to identify potential sources of fraud stemming from immigrant investors' false statements regarding the source of the funds used to provide their investment in the program.

- Site-visit pilot: According to agency officials' statements and documentation we reviewed, FDNS is designing and implementing a plan to conduct random site visits. According to agency officials, the visits will also serve to improve their assessment of fraud risks, and, according to documentation, will be random, in-person, and unannounced, and are intended to, among other things, enhance the integrity of the EB-5 program by increasing compliance with statutory and regulatory requirements, as well as deterring regional center and investor fraud. According to agency officials and agency documentation, a total of 50 site visits in four different states are planned. An agency official stated that the first site visits began in August 2016. Another official stated that they anticipate conducting additional site visits on both a continual and as-needed basis.

- National-security concerns: In April 2016, FDNS officials stated that they also planned to conduct a risk assessment of all previously identified national-security concerns for the program but did not have final details to provide at the time of our review.

Based on past and current risk assessments, a senior FDNS official stated that the most frequent incidents of fraud in the program were associated with securities fraud, whereby immigrant investors were defrauded by unscrupulous regional-center principals and their associates. The official's comments are consistent with our August 2015 findings that over half of ongoing investigations associated with the program related to securities fraud. In our report we also noted that the EB-5 program faced unique fraud risks compared to other immigration programs that included uncertainties in verifying that the funds invested were obtained lawfully and various investment-related schemes to defraud investors. For example, we reported that in one instance, a couple created a regional center and solicited immigrant investors with promises of investing in a local energy company. Instead of investing in that project, the couple used investor funds to, among other things, buy cars for themselves and regional-center employees, and invest in a financially troubled restaurant. Along with its risk assessments, since August 2015, USCIS has continued to take steps to improve its ability to identify potential instances of fraud. USCIS is taking various steps to capture and utilize additional information from its immigrant investor and regional-center program participants and is obtaining additional resources to aid its oversight. These actions were taken in part as a response to our August 2015 recommendation for USCIS to develop a strategy to expand information collection to strengthen fraud prevention, detection, and mitigation capabilities. According to USCIS officials, the EB-5 program has issued updated petition and application forms for public comment and anticipates publishing the revised forms in final form in fiscal year 2017.
These updated forms would help capture additional information about petitioners and applicants that could be used to potentially identify fraud. Along with capturing additional self-reported information from petitioners and applicants, USCIS officials stated that they are exploring increased use of Financial Crimes Enforcement Network (FinCEN) checks to identify potential fraudulent actors in the program, especially for regional-center applicants. Moreover, USCIS is exploring the potential use of a new process to allow interviews of immigrant investor petitioners seeking the removal of their conditional status at the I-829 stage, according to agency officials and documentation. According to agency documentation, this effort is in part a response to our August 2015 report, which recommended that USCIS develop a strategy to expand information collection such as considering the increased use of interviews at the I-829 phase. If implemented, the interviews will be conducted by knowledgeable USCIS officials based in Washington, D.C. As of August 2016, the officials stated that a limited number of pilot interviews had already taken place, and, as a result of these and additional pilot interviews, USCIS plans to refine and develop a comprehensive interview strategy. Further, to help the EB-5 program improve oversight, USCIS officials stated that they are working to improve their ability to track and report data related to EB-5 investments and job creation through the planned development of a case-management system. In August 2016, USCIS officials reported that a preliminary scoping meeting had been held and that project completion was tentatively planned for some time in fiscal year 2017. USCIS is also taking steps to obtain additional resources for program oversight. For example, FDNS has increased its number of authorized positions from 21 to 25 full-time equivalent staff and has added student intern positions and administrative support. According to agency officials and documentation, USCIS's IPO has also created a specialized group focused primarily on regulatory compliance of existing regional centers to help ensure that the centers continue to serve their purpose of promoting economic growth. Among other things, the compliance unit is expected to help coordinate referrals, such as potential fraud referrals, to FDNS and house an audit function, which anticipates conducting its first audit activities later in 2016. Officials expect that the audits will be conducted on-site at regional centers and that they will include a review of the centers' compliance with applicable laws and regulations. Most recently, USCIS has also issued proposed changes for public comment to increase the filing fees for immigrant investors and regional-center applicants either seeking to participate or currently participating in the program. Specifically, USCIS has proposed more than doubling the initial Form I-526 filing fee from $1,500 to $3,675, nearly tripling the I-924 regional-center designation or amendment fee from $6,230 to $17,795, and adding a new I-924A annual filing fee of $3,035. These new fees are designed to allow USCIS to fully recover the costs of the services it provides and also aid its efforts in administering the program, including fraud-identification efforts. For example, the proposed rule requests increased fees to conduct additional oversight work such as through site visits.
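As a quick check of those characterizations, here is a worked calculation from the proposed figures above; it uses only the numbers already cited, not additional agency data.

```python
# Verifying the fee-increase characterizations against the proposed figures.
i526 = 3675 / 1500     # 2.45x -- "more than doubling"
i924 = 17795 / 6230    # about 2.86x -- "nearly tripling"
print(f"I-526: {i526:.2f}x increase; I-924: {i924:.2f}x increase")
# The I-924A annual fee ($3,035) is new, so there is no prior fee to compare.
```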
Moreover, USCIS officials stated that they are also developing standard operating procedures for adjudication staff for each immigrant investor form, which the agency plans to finalize by the end of calendar year 2016 or the first quarter of fiscal year 2017. These new procedures could help standardize and improve USCIS's adjudication processes, which could improve the ability of its staff to detect potential instances of fraud. We found in August 2015 that USCIS is unable to comprehensively identify and address fraud trends across the program because of its reliance on paper-based documentation and because it faces certain limitations with using available data and with collecting additional data on EB-5 immigrant investors or investments. Agency officials noted that the state of information within the program precluded certain fraud-detection and analysis efforts such as the development of an automated risk-weighting system to prioritize petitions and applications at higher risk of fraud. These issues continue to exist. Based on our review of 20 applications and petitions, we similarly determined that identifying fraud indicators in these petitions and applications is extremely challenging. These challenges exist in part because many of the files were several thousand pages long and would take significant time to review. According to USCIS documentation, the program anticipates receiving around 14,000 petitions and applications a year, and the average submission is approximately 1,000 pages in length, for a cumulative total of about 14 million pages that, based on current capabilities, would need to be reviewed manually. According to agency documentation, the average review time for an EB-5 filing can range from 5.5 hours for an I-829 removal of conditions petition to 40 hours for an I-924 application for a new regional center. We also found in August 2015 that USCIS planned to collect and maintain more readily available data on EB-5 Program petitioners and applicants through the deployment of electronic forms in its new system, the USCIS Electronic Immigration System (ELIS). However, according to agency officials, they do not anticipate capturing supporting information provided as evidence in the petitions and applications in USCIS ELIS in the near term. According to an FDNS official, this supporting information can be an important source of potential fraud indicators as it contains details such as business plans associated with the investment. Recognizing the limitations associated with its reliance on paper files, in January 2016 USCIS officials stated that they were analyzing alternatives to evaluate and compare the effectiveness, suitability, costs, and risks associated with different potential hardware and software solutions to detect fraud patterns across its EB-5 applications. In February 2016, USCIS completed a draft of this study, which evaluated both hardware and software options to support fraud detection in both the EB-5 and asylum programs. The study evaluated several document-conversion alternatives, including hardware and software tools to scan paper files and convert them to digital text, as well as text analytic alternatives such as software that allows for the detailed analysis of text similar to that used for computer-aided plagiarism detection. For example, this software could be used to identify duplicate passages in different petitions and applications, which could indicate potential fraud.
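To illustrate the kind of duplicate-passage detection the study contemplated, here is a minimal sketch using word n-gram "shingles" and Jaccard overlap, one common plagiarism-detection technique; the study does not identify the specific tools evaluated, so this approach and the sample filings below are our assumptions.

```python
# Minimal sketch of duplicate-passage detection via word n-gram shingles.
# Illustrative only: the specific tools USCIS evaluated are not identified.
def shingles(text, n=6):
    """Set of overlapping n-word sequences ('shingles') in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(0, len(words) - n + 1))}

def overlap(doc_a, doc_b, n=6):
    """Jaccard similarity of two filings' shingle sets (0.0 to 1.0)."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    return len(a & b) / len(a | b) if a and b else 0.0

# Hypothetical filings: a high score flags possibly copied business-plan text
# for manual review.
plan_1 = "the project will create twelve full time positions at the hotel in year one"
plan_2 = "the project will create twelve full time positions at the resort in year one"
print(f"similarity: {overlap(plan_1, plan_2):.2f}")  # about 0.38
```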
The study reported a number of challenges associated with digitizing and analyzing petitioner and applicant information for the EB-5 program. For example, the study noted that implementing text analytic techniques for computer-aided plagiarism detection will likely be costly since many software packages do not offer this capability without substantial modification. Based on the results of this study, USCIS officials told us that they were developing a set of proposals for management review and potential approval; however, these efforts remained in their development stages and were being reviewed by USCIS stakeholders for feedback, recommendation, and acceptance prior to procurement activities. FDNS officials stated that, at this time, they were optimistic that actions would be taken but were ultimately uncertain to what extent, if any, the proposals would be acted upon. USCIS continues to take steps to improve overall fraud risk management but has not incorporated certain leading practices that could benefit its efforts. Generally, USCIS has followed or partially followed selected leading practices identified in GAO's Fraud Risk Framework, such as having a commitment to establishing a culture of fraud risk management in the agency. However, USCIS has not developed a fraud risk profile, a component of the Fraud Risk Framework that helps inform an agency's decisions and plans to mitigate fraud risks. USCIS has taken some key steps that align with the leading practices in the Fraud Risk Framework. In particular, USCIS has taken actions that closely align with the first and second components, which call for federal managers to (1) commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management; and (2) plan regular fraud risk assessments and assess risks to determine a fraud risk profile. For example, USCIS officials stated that they are committed to creating an organizational culture to combat fraud at all agency levels through leadership that is committed to assessing fraud risks and through training for adjudicators at all levels. In August 2015, we reported that FDNS had also developed and provided training on specific fraud-related topics believed to be immediately relevant to adjudication of EB-5 Program petitions and applications. We found that FDNS is the unit charged with preventing, detecting, and responding to allegations of fraud in the program and represents a dedicated entity for managing fraud risks, which is consistent with the Fraud Risk Framework's leading practices. With respect to assessing risks, as mentioned previously, USCIS is currently conducting multiple assessments to help it identify and manage EB-5 program fraud risks. The Fraud Risk Framework calls for agencies to plan regular fraud risk assessments to determine a fraud risk profile. We found that the risk assessments conducted for the program generally aligned with this component of the Fraud Risk Framework. For example, through one of these prior risk assessments, USCIS determined that enhanced security checks conducted by other federal entities, such as the Federal Bureau of Investigation and U.S. Customs and Border Protection, revealed information that could help identify potential fraud by the immigrant investor or the regional-center principal. As a result, USCIS now conducts selected background checks on all of its immigrant investors and regional-center principals.
Most recently, according to a senior FDNS official, USCIS also signed a memorandum of understanding with FinCEN and anticipates conducting additional reviews to help identify potential fraudulent financial activity in its regional centers. While USCIS has taken steps to implement selected leading fraud-management practices, we found that USCIS has not developed a fraud risk profile—an overarching document that guides an organization's fraud-management efforts—as called for by the Fraud Risk Framework. Specifically, a fraud risk profile involves identifying inherent fraud risks affecting a program; assessing the likelihood and impact of these risks; determining a fraud risk tolerance; and examining the suitability of existing fraud controls and prioritizing residual fraud risks. (App. I describes the key elements of a fraud risk profile.) Instead, USCIS's completed and planned risk assessments span multiple years and were developed as separate documents and reports, and USCIS lacks a unifying document that consolidates these findings and informs the specific control activities managers design and implement. A senior FDNS official told us that FDNS is beginning the development of a fraud-management plan to help guide efforts for the program, but these efforts were in their early stages. Moreover, the official stated that they did not anticipate incorporating a fraud risk profile as part of their fraud-management plan. Without a profile, managers may lack an important tool that can serve as an internal benchmark in assessing the performance of fraud-control activities. Further, a profile can provide additional assurances to stakeholders and decision makers on the fraud risks to programs as well as steps taken to manage those risks. This fraud risk profile can include all elements of the prior risk assessments and any updates, serving as a central reference document that can inform and help initiate actions. Absent a fraud risk profile, USCIS may not be well positioned to identify and prioritize fraud risks in the EB-5 Program and ensure the appropriate controls are in place to mitigate these risks. USCIS has taken a number of steps recently to enhance its fraud-detection capabilities throughout the EB-5 Program. However, the anticipated benefits of these steps may take time to realize. In the meantime, the agency continues to be hindered by a reliance on time-consuming reviews of paper files that preclude certain potential fraud-detection activities such as the use of text analytics to help identify indicators of potential fraud in the applications and petitions of regional-center principals and immigrant investors. The continuation of planned efforts to digitize the files, including the supporting evidence submitted by applicants and petitioners, could help USCIS better identify fraud indicators in the program. Moreover, USCIS has incorporated several leading fraud risk management practices into its efforts, including committing to creating an organizational culture that combats fraud risks and assessing those risks through regular risk assessments. However, USCIS would be better positioned to prioritize and respond to evolving fraud risks by adopting an approach that is guided by a fraud risk profile, as called for by the Fraud Risk Framework. To strengthen USCIS's EB-5 Program fraud risk management, we recommend the Director of USCIS develop a fraud risk profile that aligns with leading practices identified in GAO's Fraud Risk Framework.
We provided a draft of this report to DHS for its review and comment. In its written comments, reproduced in appendix II, DHS concurred with our recommendation and stated that USCIS will develop a fraud risk profile as described with estimated completion by September 30, 2017. We will continue to monitor the agency's efforts in this area. Upon completion and use of a fraud risk profile to guide its fraud risk management, USCIS will be better positioned to prioritize and respond to evolving fraud risks. DHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. In addition to the contact named above, Gabrielle Fagan and Linda Miller, Assistant Directors; Jon Najmi; Anna Maria Ortiz; Brynn Rovito; Kiran Sreepada; and Nick Weeks made key contributions to this report. The Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework) identifies leading practices for agencies to manage fraud risks. It includes control activities that help agencies prevent, detect, and respond to fraud risks as well as structures and environmental factors that influence or help managers achieve their objectives to mitigate fraud risks. The framework consists of four components for effectively managing fraud risks: commit, assess, design and implement, and evaluate and adapt. Leading practices for each of these components include the following: (1) commit: create an organizational culture to combat fraud at all levels of the agency, and designate an entity within the program office to lead fraud risk management activities; (2) assess: assess the likelihood and impact of fraud risks, determine risk tolerance, examine the suitability of existing controls, and prioritize residual risks; (3) design and implement: develop, document, and communicate an antifraud strategy, focusing on preventive control activities; and (4) evaluate and adapt: collect and analyze data from reporting mechanisms and instances of detected fraud for real-time monitoring of fraud trends, and use the results of monitoring, evaluations, and investigations to improve fraud prevention, detection, and response. The fraud risk profile is an essential piece of the antifraud strategy, as described in the "Design and Implement" section of the Fraud Risk Framework, and informs the specific control activities managers design and implement. The elements in table 1 reflect key elements of fraud risk assessments and the fraud risk profile. The table is meant solely for illustrative purposes to show one possible format for agencies to document their fraud risk profile. The table shows information related to one fraud risk; however, a robust fraud risk profile would include information about all fraud risks that may affect a program. Documenting fraud risks together can aid managers in understanding links between specific risks. In addition, other tools a program uses to assess risks, such as the risk matrix discussed in the "Assess" section of the Fraud Risk Framework, can supplement the documentation for the fraud risk profile.
We adapted the table and additional information below it from Standards for Internal Control in the Federal Government, as well as a publication by the Australian National Audit Office and one cosponsored by the Institute of Internal Auditors, the American Institute of Certified Public Accountants, and the Association of Certified Fraud Examiners. The following is additional information about the elements in table 1.

- Identified Fraud Risks. What fraud risks does the program face? Include a brief description of the fraud risk or scheme. This list will vary by program, and may be informed by activities to gather information during the fraud risk assessment, such as interviews with staff, brainstorming sessions, and information from hotline referrals.

- Fraud Risk Factors. What conditions or actions are most likely to cause or increase the chances of a fraud risk occurring? This may reflect fraud risk factors highlighted in Standards for Internal Control in the Federal Government, as well as other factors that provide additional details about specific fraud risks.

- Fraud Risk Owner. Which group or individual within the program is responsible for addressing the risk? The owner of the fraud risk will vary by program, but generally the owner is the entity with accountability for addressing the fraud risk.

- Inherent Risk Likelihood and Impact. In the absence of controls, how likely is the fraud risk and what would the impact be if it were to occur? As noted in the "Assess" section of the Fraud Risk Framework, the specific methodology for assessing the likelihood and impact of risks will vary by agency. One option for assessing likelihood is to use a five-point scale, as noted in table 1. When considering impact, participants of the fraud risk assessment may consider the impact of fraud on the program's compliance with laws and regulations, operations, and reputation.

- Inherent Risk Significance. In the absence of controls, how significant is the fraud risk based on an analysis of the likelihood and impact of the risk? While the specific methodology for assessing risks may vary by agency, including qualitative and quantitative methodologies, managers may multiply the likelihood and impact scores, or apply a five-point scale (a sketch of this arithmetic follows this list).

- Existing Antifraud Controls. What controls does the program already have in place to reduce the likelihood and impact of the inherent fraud risk? This is intended to assist with mapping the existing controls to the fraud risks or schemes, which would reduce the likelihood and impact of a fraud risk occurring.

- Residual Risk Likelihood and Impact. Taking into account the effectiveness of existing controls, how likely is the fraud risk and what would the impact be if it were to occur? Managers may consider assessing both the residual likelihood and impact of fraud risks using the five-point scale described in table 1. Controls that are not properly designed or operating effectively may contribute to high residual risk.

- Residual Risk Significance. How significant is the fraud risk based on an analysis of the likelihood and impact, as well as the effectiveness of existing controls? Like inherent risk significance, qualitative and quantitative methodologies may be used to establish residual risk significance.

- Fraud Risk Response. What actions does the program plan to address the fraud risk, if any, in order to bring fraud risks within managers' risk tolerance?
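Here is a minimal sketch of the multiply-likelihood-by-impact option mentioned above. The five-point scales come from the framework discussion; the significance bands used for bucketing are our illustrative assumption, since the framework leaves the specific methodology to each agency.

```python
# Illustrative scoring: five-point likelihood and impact scales multiplied
# into a 1-25 significance score. The banding thresholds are assumptions for
# illustration; the framework does not prescribe them.
def significance(likelihood, impact):
    """likelihood and impact: integers from 1 (lowest) to 5 (highest)."""
    score = likelihood * impact
    if score >= 15:
        band = "high"
    elif score >= 6:
        band = "moderate"
    else:
        band = "low"
    return score, band

# Inherent risk (no controls) vs. residual risk (existing controls considered):
print(significance(4, 5))   # inherent:  (20, 'high')
print(significance(2, 4))   # residual:  (8, 'moderate')
```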
Congress created the EB-5 visa category to promote job creation and capital investment by immigrant investors in exchange for lawful permanent residency and a path to citizenship. Participants must invest either $500,000 or $1 million in a business that is to create at least 10 jobs. Upon meeting program requirements, immigrant investors are eligible for conditional status to live and work in the United States and can apply to remove the conditional basis of lawful permanent residency after 2 years. In August 2015, GAO reported on weaknesses in certain USCIS fraud mitigation activities and made two related recommendations. GAO was asked to review actions taken by USCIS to address fraud risks in the EB-5 program since its August 2015 report. This report examines the extent to which USCIS (1) has taken steps to enhance its fraud detection and mitigation efforts; and (2) has incorporated selected leading fraud risk management practices into its efforts. GAO reviewed relevant program documentation and information; selected and reviewed a random, nongeneralizable sample of immigrant investor petitions and regional-center applications submitted between fiscal years 2010 and 2014; and compared USCIS's actions against GAO's Fraud Risk Framework. The Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS) has recently taken steps intended to enhance fraud detection and mitigation activities for the Employment-Based Fifth Preference Immigrant Investor Program (EB-5 Program) and address previous GAO recommendations. This includes actions such as conducting and planning additional risk assessments to gather additional information on potential fraud risks to the program. For example, USCIS is leveraging overseas staff to investigate potential fraud associated with unlawful sources of immigrant investor funds and is conducting a site visit pilot to help assess the potential risks of fraud among EB-5 program investments. USCIS is also taking steps to collect more information about EB-5 program investments and immigrant investors through new, revised forms and expanding its use of background checks, among other things, to help improve its ability to identify specific incidents of fraud. However, fraud mitigation in the EB-5 Program is hindered by a reliance on voluminous paper files, which limits the agency's ability to collect and analyze program information. In its review of a nongeneralizable selection of files associated with EB-5 program regional centers and immigrant investors, GAO found that identifying fraud indicators is extremely challenging. For example, many of these files were several thousand pages long and would take significant time to review. According to USCIS documentation, the program anticipates receiving approximately 14 million pages of supporting documentation from its regional-center applicants and immigrant investor petitioners annually. Recognizing these limitations, USCIS has taken preliminary steps to study digitizing and analyzing the paper files submitted by petitioners and applicants to the program, which could help USCIS better identify fraud indicators in the program; however, these efforts are in the early stages. USCIS has incorporated selected leading fraud risk management practices into its efforts but could take additional actions to help guide and document its efforts.
GAO's Fraud Risk Framework is a set of leading practices that can serve as a guide for program managers to use when developing efforts to combat fraud in a strategic, risk-based manner. USCIS's actions align with two key components of the Fraud Risk Framework: (1) commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management such as by providing specialized fraud awareness training; and (2) assess risks by planning and completing regular fraud risk assessments. However, USCIS has not developed a fraud risk profile, an overarching document that guides its fraud management efforts, as called for in the Fraud Risk Framework. Instead, USCIS's risk assessments, spanning multiple years, were developed as separate documents and reports, and there is not a unifying document that consolidates and systematically prioritizes these findings. Without a fraud risk profile, USCIS may not be well positioned to identify and prioritize fraud risks in the EB-5 Program, ensure the appropriate controls are in place to mitigate fraud risks, and implement other Fraud Risk Framework components. GAO recommends that USCIS develop a fraud risk profile that aligns with leading practices identified in GAO's Fraud Risk Framework. The Department of Homeland Security concurred with GAO's recommendation.
While part of today’s hearing focuses specifically on FDA’s responsibilities for the oversight of food safety, it is important to note that FDA is one of 15 federal agencies that collectively administer at least 30 laws related to food safety. This fragmentation is a key reason we designated federal oversight of food safety as a high-risk area. Two agencies have primary responsibility—FDA is responsible for the safety of virtually all foods except for meat, poultry, and processed egg products, which are the responsibility of USDA. In addition, among other agencies, the National Marine Fisheries Service (NMFS) in the Department of Commerce conducts voluntary, fee-for-service inspections of seafood safety and quality; the Environmental Protection Agency (EPA) regulates the use of pesticides and maximum allowable residue levels on food commodities and animal feed; and the Department of Homeland Security is responsible for coordinating agencies’ food security activities. This federal regulatory system for food safety, like many other federal programs and policies, evolved piecemeal, typically in response to particular health threats or economic crises. In January 2007, we added the federal oversight of food safety to our High- Risk Series, which is intended to raise the priority and visibility of government programs that are in need of broad-based transformation to achieve greater economy, efficiency, effectiveness, accountability, and sustainability. Over the past 30 years, we have reported on issues—for example, the need to transform the federal oversight framework to reduce risks to public health as well as the economy—that suggest that the federal oversight of food safety could be designated as a high-risk area. The fragmented nature of the federal food oversight system calls into question whether the government can plan more strategically to inspect food production processes, identify and react more quickly to outbreaks of foodborne illnesses, and focus on promoting the safety and integrity of the nation’s food supply. While we have reported on problems with the federal food safety system— including inconsistent oversight, ineffective coordination, and inefficient use of resources—most noteworthy for today’s hearing is that federal expenditures for the oversight of food safety have not been commensurate with the volume of foods regulated by the agencies or consumed by the public. We have reported that four agencies—USDA, FDA, EPA, and NMFS—spent a total of $1.7 billion on food safety-related activities in fiscal year 2003. USDA and FDA were responsible for nearly 90 percent of those federal expenditures. However, the majority of federal expenditures for food safety inspection were directed toward USDA’s programs for ensuring the safety of meat, poultry, and egg products even though USDA is responsible for regulating only about 20 percent of the food supply. In contrast, FDA accounted for only 24 percent of expenditures even though it is responsible for regulating about 80 percent of the food supply. Others have called for fundamental changes to the federal food safety system overall. In 1998, the National Academy of Sciences concluded that the system is not well equipped to meet emerging challenges. In response to the Academy’s report, the President established a Council on Food Safety which released a Food Safety Strategic Plan in January 2001. 
The plan recognized the need for a comprehensive food safety statute and concluded, "the current organizational structure makes it more difficult to achieve future improvements in efficiency, efficacy, and allocation of resources based on risk." While many of the recommendations we made have been acted upon, a fundamental reexamination of the federal food safety system is warranted. Taken as a whole, our work indicates that Congress and the executive branch can and should create the environment needed to look across the activities of individual programs within specific agencies, including FDA, and toward the goals that the federal government is trying to achieve. To that end, we have recommended, among other things, that Congress enact comprehensive, uniform, and risk-based food safety legislation and commission the National Academy of Sciences or a blue ribbon panel to conduct a detailed analysis of alternative organizational food safety structures. We have also recommended that the executive branch reconvene the President's Council on Food Safety to facilitate interagency coordination on food safety regulation and programs. According to documents on the council's Web site, the current administration has not reconvened the council. These actions can begin to address the fragmentation in the federal oversight of food safety. Going forward, to build a sustained focus on the safety and integrity of the nation's food supply, Congress and the executive branch can integrate various expectations for food safety with congressional oversight and through agencies' strategic planning processes, including FDA's. We have previously reported that the development of a governmentwide performance plan that is mission-based, is results-oriented, and provides a cross-agency perspective offers a framework to help ensure agencies' goals are complementary and mutually reinforcing. Further, with pressing fiscal challenges, this plan can help decision makers balance trade-offs and compare performance when resource allocation and restructuring decisions are made. In response to the nation's fiscal challenges, agencies may have to explore new approaches to achieve their missions, and we have identified options for FDA to better leverage its resources. Efficient use of resources is particularly important at FDA because, while its food safety workload has increased in the past decade, resources have not kept pace. FDA has proposed actions toward implementing some of these options. Our analysis of FDA data shows that while FDA received increased funding for new bioterrorism-related responsibilities in 2003, subsequent staffing levels and funding have not kept pace with the agency's growing responsibilities. Specifically, the number of FDA-regulated domestic food establishments increased more than 10 percent from fiscal years 2003 to 2007––from about 58,260 in 2003 to about 65,520 in 2007. Additionally, FDA notes that there have been dramatic changes in the volume, variety, and complexity of FDA-regulated products arriving at U.S. ports, and recently reported that the number of food import entry lines has tripled in the past ten years. Meanwhile, staffing for FDA's Center for Food Safety and Applied Nutrition (CFSAN) has decreased. According to the Science Board, the number of staff years for CFSAN operations at headquarters dropped about 14 percent, from 950 in fiscal year 2003 to 812 in fiscal year 2006.
During that same time period, field-based staff responsible for carrying out inspection and enforcement activities for CFSAN-regulated products dropped by 255 staff years, or about 11.5 percent—from 2,217 in fiscal year 2003 to 1,962 in fiscal year 2006. In addition, while CFSAN-related funding at headquarters and in the field increased from $407 million in fiscal year 2003 to $439 million in fiscal year 2006, this represents a decrease in real terms, from about $457 million to about $451 million in constant dollars, during that period. One consequence is that foreign inspections have declined: GAO analysis of FDA data shows that inspections of foreign food firms, which number almost 190,000, decreased from 211 in fiscal year 2001 to fewer than 100 in fiscal year 2007. The Science Board considered the funding issues to be more acute for CFSAN than for other FDA programs: unlike the FDA programs responsible for drugs, biologics, and medical devices, which charge manufacturers hundreds of millions of dollars in user fees each year, CFSAN is not authorized to charge user fees for its services.

Recent GAO work has identified opportunities for FDA to better leverage its resources. Specifically, in 2004 we reviewed FDA's imported seafood safety program and identified several options that FDA could consider to augment its resources and enhance its current program. We found that FDA's seafood safety program had shown some progress since a 2001 review. For example, FDA increased its laboratory testing of seafood products at ports of entry from less than 1.0 percent in fiscal year 1999 to about 1.2 percent in fiscal year 2002. We also recommended several options for enhancing FDA's oversight of seafood while leveraging outside resources; some of these options are presented in FDA's Food Protection Plan. Specifically, we recommended that FDA:

Make it a priority to establish equivalence agreements with other countries. Subject to its jurisdiction, FDA could certify that countries exporting food products to the United States have equivalent food safety systems before food products from those countries can enter the United States. Such agreements would shift some of FDA's oversight burden to foreign governments. While FDA has not yet established equivalence agreements with any foreign countries, the Food Protection Plan requests that Congress allow the agency to enter into agreements with exporting countries to certify that foreign producers' shipments of designated high-risk products comply with FDA standards.

Explore the potential for certifying third-party inspectors. FDA could consider developing a program that uses certified third-party inspectors to conduct inspections on its behalf, both at foreign processing firms and at domestic importers of seafood. FDA's Food Protection Plan requests authority from Congress to accredit third parties to conduct voluntary inspections for foods, and FDA officials told us that they envision using third-party inspectors to inspect foreign facilities, where FDA conducts few inspections. If FDA receives this authority, it can draw lessons from its own implementation of third-party inspection programs for medical device manufacturing establishments; as we are reporting in a separate statement today, few inspections of these establishments have been conducted through FDA's two accredited third-party inspection programs.

Consider accrediting private laboratories to test seafood. Currently, FDA does not accredit or use any private laboratories to collect or analyze seafood samples.
However, for some seafood violations, FDA allows seafood firms to use private laboratories to provide evidence that imported seafood previously detained because of safety concerns is now safe and can be removed from the detention list at the port of entry. We recommended that FDA consider accrediting private laboratories because doing so could leverage outside resources while giving FDA greater assurance about the quality of the laboratories importers use to demonstrate that their products are safe. FDA has not formally changed its policies or practices, but the Action Plan for Import Safety notes that FDA intends to issue guidance by mid-2008 on the sampling and testing of imported products, including the use of accredited private laboratories that submit data to FDA on food safety.

Develop a memorandum of understanding with the National Oceanic and Atmospheric Administration (NOAA) to use NOAA's Seafood Inspection Program resources to complete inspections on FDA's behalf. NOAA officials said that they could provide various services to augment FDA's regulatory program for imported seafood, including inspection, training, and product sampling services. FDA has been working on a program to refer certain export-related work to NOAA, and it is in discussions with NOAA about commissioning its inspectors, but to date, nothing is finalized or operational.

We have not reviewed these actions to determine whether they adequately address our recommendations.

We separately reported on overlaps we identified in the federal oversight of food safety, such as overlapping inspection and training activities among the agencies conducting food safety functions. Such overlaps mean that federal agencies are spending resources on similar activities, which may waste scarce resources and limit effectiveness. Specifically, we found that FDA food safety activities may overlap with, if not duplicate, the efforts of other agencies, including USDA and NMFS. FDA could take practical steps to reduce overlap and duplication and thereby free resources for more effective oversight of food safety, but FDA has made little progress since our report. For example:

Domestic inspections. In fiscal year 2003, FDA and USDA spent most of their food safety resources—about $900 million—on inspection and enforcement activities. A portion of these activities included overlapping and even duplicative inspections of 1,451 domestic food-processing facilities that produce foods regulated by both agencies. Under authority granted by the Bioterrorism Act of 2002, FDA could authorize USDA to inspect these facilities on its behalf, but FDA has not yet reached an agreement with USDA to do this. We recommended that, if cost-effective, FDA enter into an agreement to commission USDA inspectors at jointly regulated facilities. FDA told us that it is working with USDA to consider which products might be covered by each agency under such an agreement.

Import inspections. FDA and USDA both inspect shipments of imported food at ports of entry and also visit foreign countries that export food to the United States. We found that both FDA and USDA maintain inspectors at 18 U.S. ports of entry to inspect imported food. In fiscal year 2003, FDA spent more than $115 million on imported food inspections, and USDA spent almost $16 million. The two agencies do not share inspection resources at these ports.
Although USDA maintains a daily presence at these facilities, the FDA-regulated products may remain at the facilities for some time awaiting FDA inspection. Further, FDA conducted inspections in 6 of the 34 countries that USDA evaluated in 2004 to determine whether their food safety systems for ensuring the safety of meat and poultry are equivalent to those of the United States. We recommended that FDA consider the findings of USDA's foreign country equivalence evaluations when determining which countries to visit. In its response to our recommendation, FDA noted that it will consider USDA's foreign country evaluations when making such determinations.

Inspectors' training. FDA and USDA spend resources to provide similar training to food inspection personnel; FDA spent about $1.6 million and USDA spent $7.8 million in fiscal year 2003. We found that, to a considerable extent, food inspection training addresses the same subjects, such as plant sanitation and good manufacturing practices. While other agencies have consolidated training activities that have a common purpose and similar content, FDA and USDA have not. We recommended that USDA and FDA consider joint training programs, but to date, FDA has told us that it has identified no training needs common to both agencies.

FDA's Food Protection Plan proposes several positive first steps that are intended to enhance food safety oversight, including requesting several authorities recommended by GAO, but more specific information about its strategies and the resources needed to implement the plan would facilitate congressional oversight. Notably, FDA's Food Protection Plan aims to shift the agency's focus to preventing foodborne illness rather than intervening after contamination and resulting illnesses occur—an important shift, given that experts consider prevention to be a core element of an effective food safety system. FDA says that its key prevention steps are promoting corporate responsibility, identifying food vulnerabilities, assessing risks, and expanding its understanding and use of effective mitigation measures. In addition to the actions we discussed earlier to address resource constraints, FDA's Food Protection Plan requests other authorities to enhance oversight of food safety that begin to respond to prior GAO recommendations. Specifically, the plan requests authority for FDA to:

Order food recalls. The Food Protection Plan requests the authority to order a recall when FDA has reason to believe that food is adulterated and presents a threat of serious adverse health consequences or death, to be imposed only if a company refuses or unduly delays conducting a voluntary recall. Currently, food recalls are largely voluntary—federal agencies responsible for food safety, including FDA, have no authority to compel companies to recall contaminated foods, with the exception of FDA's authority to require a recall of infant formula. FDA does have authority, through the courts, to seize, condemn, and destroy adulterated or misbranded food under its jurisdiction and to disseminate information about foods that are believed to present a danger to public health. However, government agencies that regulate the safety of other products, such as toys and automobile tires, have recall authority not available to FDA for food and have had to use that authority to ensure that recalls were conducted when companies did not cooperate.
These agencies have the authority to require a company to notify the agency when the company has distributed a potentially unsafe product, order a recall, establish recall requirements, and impose monetary penalties if a company does not cooperate. In a report and testimony before this subcommittee, we noted that limitations in FDA's food recall authorities heighten the risk that unsafe food will remain in the food supply, and we have proposed that Congress consider giving FDA similar authorities. While FDA's Food Protection Plan requests mandatory recall authority, the request could also include the recall authorities held by other agencies, such as establishing recall requirements and imposing penalties for noncompliance. FDA officials noted that while recall requirements and penalties for noncompliance were not explicitly stated in the Food Protection Plan, they are encompassed in the request. Further, the plan does not propose a definition of "undue delay" by a company, another critical element of recall authority, given that timing is essential in reacting to outbreaks and delays can cost lives.

Issue additional preventive controls for high-risk foods. FDA is requesting explicit authority from Congress to issue regulations requiring foods that have been associated with repeated instances of serious health problems or death to be prepared, packed, and held under a system of preventive food safety controls. According to FDA, this would clarify the agency's ability to require industries to implement preventive Hazard Analysis and Critical Control Point (HACCP) systems, which it currently requires for companies that process seafood and juice. HACCP systems are designed to improve food safety by having industry identify and control hazards in products before they enter the market. FDA officials told us that they are asking for explicit authority to put such measures in place for other high-risk foods, such as leafy greens. Officials told us that this request, if granted, would allow the agency to focus its preventive efforts on foods that present the highest risk of contamination, consistent with the agency's risk-based approach. However, others have expressed concern that requiring a history of repeated outbreaks before issuing preventive controls would not allow FDA to proactively establish regulations for foods before they cause additional illnesses.

While FDA officials have acknowledged that implementing the Food Protection Plan will require additional resources, FDA has not provided specific information on the resources the agency anticipates it will need to implement the plan. For example, the Food Protection Plan proposes to develop food protection guidelines for industry; however, FDA's Science Board reported that modernizing safety standards for fresh produce and other raw foods and developing and implementing inspection programs could cost $210 million. Additionally, the Food Protection Plan proposes to enhance FDA's information technology systems related to both domestic and imported foods, which the Science Board report suggests could cost hundreds of millions of dollars. FDA officials have declined to provide specific information on how much additional funding the agency believes will be necessary to implement the Food Protection Plan, saying that finalizing the amounts will take place during the budget process. Similarly, the Food Protection Plan does not discuss the strategies FDA will need in the upcoming years to implement it.
FDA officials told us that they have internal plans for implementing the Food Protection Plan that detail timelines, staff actions, and specific deliverables. While FDA officials told us they do not intend to make these plans public, they do plan to keep the public informed of their progress. Without a clear description of resources and strategies, it will be difficult for Congress to assess the likelihood of the plan's success in achieving its intended results.

The Science Board cites numerous management challenges that have contributed to FDA's inability to fulfill its mission, such as a lack of a coherent structure and vision, insufficient capacity in risk assessment, and inadequate human capital recruitment and retention. The Science Board also noted that public confidence in FDA's abilities has diminished. In light of these challenges, we have identified through other work some tools that can help agencies improve their performance, which may also be relevant to FDA. For example, we reported on the use of a Chief Operating Officer (COO)/Chief Management Officer (CMO) as one way to address longstanding management problems that are undermining agencies' abilities to accomplish their missions and achieve results. Agencies with such challenges, including FDA, could benefit from a senior leader serving as a COO/CMO who can elevate, integrate, and institutionalize responsibility for key management functions. While GAO has long advocated a COO/CMO position at the Department of Defense and the Department of Homeland Security, a relatively stable or small organization could use an existing deputy or related position to carry out the role. In addition to GAO, a number of other organizations have supported the creation of COO/CMO positions in federal agencies. McKinsey & Company recommended that a COO be established in many federal agencies as a means to help those agencies successfully achieve transformation. In addition, a working group within the National Academy of Public Administration (NAPA) recommended creating COO positions in federal agencies to oversee the full range of management functions, including procurement, finance, information technology, and human capital.

Another tool that can help federal agencies address their management challenges is a well-designed commission that can produce specific, practical recommendations that Congress can enact. For example, Congress created the National Commission on Restructuring the Internal Revenue Service (IRS) in 1995 to review current practices at IRS and report on requirements for improvement. Congress subsequently passed the IRS Restructuring and Reform Act of 1998, which was influenced by the Commission's report and which reorganized the structure and management of IRS, revised the agency's mission, and mandated numerous other detailed changes. Based on our recent analysis of several commissions, the following critical success factors can help ensure a commission's success:

A statutory basis with adequate authority. When provided with a clear mandate and adequate authority, a commission can comprehensively access and analyze information related to a given policy issue and thereby provide more informed policy options for the President and Congress to consider.

A clear purpose and timeframe. A commission should have a clear purpose for its objectives and activities to help guide the members in carrying out their responsibilities.
In addition, a fixed agenda and timeframe can help keep a commission focused and on track. At the same time, a commission's scope should be broad enough to give it the authority to address all the issues necessary to arrive at a comprehensive and integrated solution, without constraints on what it can or cannot consider.

Key leadership support. Institutional leadership, commitment, and support from the President and Congress are necessary to help a commission succeed.

An open and transparent process. Through an open and transparent process, such as public hearings, a commission can build public consensus for its goals by gaining the public's input and support.

A balanced and capable membership. Balanced and capable membership can help lessen political influences and build consensus among the commission members as they carry out the commission's purpose. Specifically, a commission should involve current or former Members of Congress as well as experts and professionals on the topic. Current or former elected officials can help ensure the viability of a commission's legislative proposals because of their legislative experience.

Accountability. Clear accountability can help a commission produce specific, useful outputs that inform the public and provide specific policy options and, ideally, recommendations for Congress and the President.

Resources. A commission's success depends on having adequate resources to carry out its purpose and any potential recommendations.

Generally, one concern regarding commissions is whether there is sufficient buy-in from key stakeholders on the purpose of the commission, along with a commitment to act on any resulting recommendations. Any recommendations a commission makes in a final report are generally advisory in nature and may not automatically result in public policy changes. Congressional action through subsequent legislation, with presidential support, may be necessary for a commission's recommendations to be implemented and for any changes to occur.

Food safety concerns not only continue but will likely become more urgent in view of changing demographics and consumption patterns. Clearly, FDA plays a critical role in the federal oversight of food safety because of the breadth of its responsibilities. Thus, its ability to carry out those responsibilities is necessary to help ensure the safety of the nation's food supply in the most efficient, effective, accountable, and sustainable way. Nevertheless, in light of the federal government's long-term fiscal challenges, agencies, including FDA, need to seek out opportunities to better leverage their resources. FDA's Food Protection Plan is a step in the right direction and proposes to implement many of the recommendations made by GAO. However, additional information on the strategies and resources needed to implement the plan would help Congress assess the likelihood of its success. Further, FDA's management challenges, such as those identified by the Science Board, could hinder the implementation of the plan. Tools such as commissions and positions like a COO/CMO can help agencies address management challenges and make needed progress toward achieving their missions. Continued congressional oversight, including today's hearing, and additional legislative action are key to achieving that progress and to promoting the safety and integrity of the nation's food supply.

Mr. Chairman, this concludes my prepared statement.
I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames, Director, Natural Resources and Environment at (202) 512-3841 or [email protected]. Key contributors to this statement were Candace Carpenter, Bart Fischer, José Alfredo Gómez, and Alison O’Neill. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Food and Drug Administration (FDA) is responsible for ensuring the safety of roughly 80 percent of the U.S. food supply, including $417 billion worth of domestic food and $49 billion in imported food annually. The recent outbreaks of E. coli in spinach, Salmonella in peanut butter, and contamination in pet food highlight the risks posed by the accidental contamination of FDA-regulated food products. Changing demographics and consumption patterns underscore the urgency of effective food safety oversight. In response to these challenges, in November 2007, FDA and others released plans that discuss the oversight of food safety. FDA's Food Protection Plan sets a framework for food safety oversight. In addition, FDA's Science Board released FDA Science and Mission at Risk, which concluded that FDA does not have the capacity to ensure the safety of the nation's food supply. This testimony focuses on (1) federal oversight of food safety as a high-risk area that needs a governmentwide reexamination, (2) FDA's opportunities to better leverage its resources, (3) FDA's Food Protection Plan, and (4) tools that can help agencies address management challenges. To address these issues, GAO interviewed FDA officials; evaluated the Food Protection Plan using a GAO guide for assessing agencies' performance plans; and reviewed pertinent statutes and reports. GAO also analyzed data on FDA inspections and resources.

FDA is one of 15 agencies that collectively administer at least 30 laws related to food safety. This fragmentation is the key reason GAO added the federal oversight of food safety to its High-Risk Series in January 2007 and called for a governmentwide reexamination of the food safety system. We have reported on problems with this system—including inconsistent oversight, ineffective coordination, and inefficient use of resources.

FDA has opportunities to better leverage its resources. Efficient use of resources is particularly important at FDA because we found that its food safety workload has increased in the past decade, while its food safety staff and funding have not kept pace. GAO has recommended that FDA establish equivalence agreements with other countries to shift some oversight responsibility to foreign governments, explore the potential for certifying third-party inspectors, and consider accrediting private laboratories to test seafood, among other actions. We also reported that FDA and the U.S. Department of Agriculture (USDA) conduct similar inspections at 1,451 facilities that produce foods regulated by both agencies. To reduce overlaps, we recommended that, if cost-effective, FDA enter into an agreement to commission USDA inspectors at such facilities. FDA incorporated some of these recommendations in its Food Protection Plan.

FDA's Food Protection Plan also proposes some positive first steps intended to enhance its oversight of food safety. Specifically, FDA requests authority to order food safety recalls and issue additional preventive controls for high-risk foods, both of which GAO has previously recommended. However, more specific information about its strategies and the resources FDA needs to implement the plan would facilitate congressional oversight. FDA officials acknowledge that implementing the Food Protection Plan will require additional resources. Without a clear description of resources and strategies, it will be difficult for Congress to assess the likelihood of the plan's success in achieving its intended results.
The Science Board cites numerous management challenges that have contributed to FDA's inability to fulfill its mission, including a lack of a coherent structure and vision, insufficient capacity in risk assessment, and inadequate human capital recruitment and retention. In light of these challenges, GAO has identified through other work some tools that can help agencies improve their performance over time. For example, a Chief Operating Officer/Chief Management Officer can help an agency address longstanding management problems that are undermining its ability to accomplish its mission and achieve results. In addition, a well-designed commission can produce specific practical recommendations that Congress can enact. Critical success factors that can help ensure a commission's success include a statutory basis with adequate authority, a clear purpose and timeframe, leadership support, an open process, a balanced membership, accountability, and resources.
HMOs are expected to provide all covered services to members in return for fixed premiums. HMOs may have their own staff of physicians and other providers to deliver care, or they may contract with individual providers or medical groups to deliver services. From the Medicare beneficiaries' perspective, HMOs may offer a more comprehensive package of services than fee-for-service Medicare and at a lower cost than beneficiaries might incur if they purchased such coverage through supplementary insurance. HMOs were designed to offer preventive health services inexpensively and to constrain the provision of expensive services, primarily through the use of a reimbursement method known as capitation. Under capitation, Medicare pays HMOs a fixed amount per month for each beneficiary. This method typically places HMOs at risk for health costs, thereby giving them a financial incentive to control the use of services, emphasize preventive care, and avoid unnecessary care. Some HMOs, in turn, transfer a portion of their financial risk to care providers, such as physicians or physician groups.

To encourage commercial and Medicare use of HMOs, in the early 1970s the Congress authorized federal standards and oversight to ensure reasonable care and service. For example, initial legislation authorizing a Medicare HMO program required quality of care standards at least equal to those prevailing in the HMO's service area. In addition, HMOs had to have sufficient operating experience and enrollment to permit evaluation of their capacity to provide appropriate care and to sustain financial losses if the Medicare payment did not cover costs. Federal standards were strengthened as the government gained experience with HMOs. Currently, performance standards for HMOs serving Medicare beneficiaries are designed to safeguard beneficiary interests by ensuring the following:

Plans have adequate finances and management. HMOs must meet financial solvency requirements, have the minimum enrollments necessary to assume the financial risks, and provide adequate administration and management.

Plans manage quality, utilization, and access to medical care. Plans must operate internal quality assurance systems to detect and correct patterns of underservice and poor quality care, provide reasonable access to specialists and services, and not transfer excessive financial risk to providers.

Plans treat enrollees fairly. HMOs must use fair marketing practices that do not mislead or confuse enrollees, must provide necessary and covered services, and must follow equitable grievance and appeal procedures.

Before an HMO may participate in Medicare, HCFA conducts a review to determine if the HMO meets federal requirements. During this "certification" review, HCFA looks at several indicators of the HMO's ability to provide services to Medicare beneficiaries. These indicators include documentation of financial condition, marketing projections, qualifications of management staff, and management information systems. After awarding a Medicare contract to an HMO, HCFA monitors its performance for continued compliance with federal requirements. HCFA also contracts with peer review organizations (PRO)—independent, state-based organizations that use local doctors and nurses—to assess the quality of care provided to beneficiaries. HCFA has a variety of tools available to enforce compliance with standards. These include the authority to sanction an HMO by terminating or not renewing its contract, stopping enrollment, or imposing monetary penalties.
Additionally, HCFA has numerous administrative means to encourage compliance, such as withholding approval of an HMO's request to expand its service areas.

HMO care has become widespread in the private sector. In seeking ways to ensure quality and value in HMO care, large employers and the HMO industry are demanding HMO accountability beyond existing federal protections. Some large employers, for example, are requiring that HMOs undergo quality accreditation reviews conducted by the National Committee for Quality Assurance (NCQA). NCQA is a private agency that works with large employers to set and enforce HMO quality standards. Its standards focus primarily on quality assurance in HMOs' management of medical operations, and its review and enforcement methods differ from HCFA's. In addition, a group of large employers and HMOs are working with NCQA to develop standard performance measures for HMOs that would enable employers and consumers to compare HMOs and make informed choices.

Recent enforcement cases show that HCFA processes remain slow at addressing problems in HMOs that do not readily comply with federal standards. PRO sampling of care in Florida during 1991 raised concerns about the quality of care at one HMO. The HMO questioned these findings, leading to a special PRO review and to the events discussed below. In 1992 and 1993, the PRO found serious quality problems in most of the risk contract HMOs in the Florida Medicare market, which has about 17 percent of all Medicare HMO enrollees. PRO review of hospitalized patients at one HMO, for example, raised serious issues about quality in 25 percent of the 109 cases sampled. After reviewing PRO findings, an internal HCFA task force suggested special investigations of quality assurance and utilization management practices at all Florida Medicare risk HMOs. PRO sampling of care in Florida Medicare HMOs found patterns of quality problems, including incorrect diagnoses, inappropriate assessment of test results, inappropriate treatment plans, underutilization, access concerns, delays in treatment, and treatment that was not competent or timely. Specific cases included the following:

Delay in treatment. A beneficiary suffered recurrent urinary tract infections, tested positive for protein and blood in the urine, and had test results that suggested the presence of prostate cancer. Several months passed before the HMO referred him to a urologist and before the urologist performed further tests. The patient ended up in a hospital emergency room and was treated for undiagnosed bladder cancer that had perforated the large intestine.

Treatment not competent or timely. A beneficiary was treated with a blood-thinning drug that requires careful monitoring to avoid excessive bleeding. As a result of inadequate monitoring, the patient was admitted to a hospital with internal bleeding, which was found to be due to an excess of the blood-thinning drug.

Denial of access. In a 24-hour period, a beneficiary with signs and symptoms of both pneumonia and a heart attack twice sought and was denied admission to a hospital. The HMO primary care physician concurred with both denials of admission. After the second attempt, the patient died on the way to his primary care physician.

HCFA's oversight of one of the Florida risk HMOs illustrates the pattern that has emerged between HCFA and HMOs that do not take prompt action to correct performance problems. This HMO won a HCFA demonstration contract in 1982.
After a series of financial problems and other compliance violations, the HMO was declared insolvent in 1987, and another HMO acquired its assets. The new HMO operates in four Florida markets under a single contract with HCFA. The HCFA enforcement activity documented in the following paragraphs relates to the South Florida market, but the enrollment and payment statistics in table 1 relate to all four markets. HCFA data systems track the HMO's contract as though it were a single-market HMO—the normal contracting arrangement for Medicare.

Since 1987, HCFA repeatedly found that the HMO's South Florida operations did not meet federal standards for quality assurance. During this period, HCFA undertook special studies and received PRO reports that indicated continuing problems with quality of care. Nevertheless, it allowed the HMO to continue enrolling beneficiaries and operating as freely as a fully compliant HMO, until HCFA's 1994 investigation of quality assurance practices led to voluntary enrollment restrictions at selected medical centers. From 1988 to 1994, the HMO maintained and increased its revenues from Medicare by enrolling over 336,500 beneficiaries to replace the over 269,000 who disenrolled. The HMO had Medicare revenues in 1994 that exceeded $1 billion and constituted 72 percent of its total revenues. HCFA recently determined that the South Florida HMO has been responsive to its 1994 findings and in January 1995 approved the HMO's corrective action plan. HCFA has since visited the HMO's offices and declared the HMO in compliance with requirements, effective July 5, 1995. (Table 1 presents the history of HCFA's oversight and enforcement related to the HMO's quality assurance practices, including PRO reviews of quality of care.)

Slow enforcement by HCFA was not unique to quality problems or to the Florida market. Two other enforcement cases we reviewed to test HCFA's processes were in California and Illinois and included

marketing abuses, including high-pressure, illegal practices resulting in high levels of complaints and disenrollments by misinformed beneficiaries (see p. 12);

nonpayment or slow payment of claims for beneficiaries' out-of-plan services, which can result in out-of-pocket expenses or bill collection actions against the beneficiaries; and

not following prescribed appeal processes, resulting in some Medicare beneficiaries not receiving the services they are entitled to under the HMOs' Medicare contracts (see p. 13).

HCFA's monitoring and certification process has not been adequate to ensure that Medicare HMOs comply with standards for ensuring quality care. This has been confirmed by HCFA, PRO, and NCQA findings showing a mismatch between what HMOs are supposed to do and what they actually do to manage and ensure quality of care. NCQA recently found many HMOs out of compliance with its standards: of the 15 Florida Medicare plans NCQA had reviewed as of December 1994, only 1 received full accreditation, 8 received less than full accreditation, and 6 were denied accreditation.

HCFA's efforts to ensure that Medicare beneficiaries receive quality care from HMOs continue to be inadequate for three reasons. Specifically, HCFA conducts limited quality assurance reviews, does not routinely collect utilization data that could most directly indicate potential quality problems, and does not assess HMO risk-sharing arrangements with providers that can trigger quality problems.
HCFA's routine compliance monitoring reviews do not go far enough to verify that HMOs monitor and control quality of care as federal standards require. The reviews check only that an HMO has procedures and staff capable of quality assurance and utilization management—they do not check for the effective operation of these processes. While HCFA has PROs under contract to review the medical care provided to HMO enrollees, HCFA does not link its contract compliance monitoring with the PROs' monitoring, nor does it draw on PRO staff expertise that could help verify whether HMOs' quality assurance programs actually work. This explains why PROs were able to identify patterns of quality of care problems—as they did in 1988, 1991, and 1993 at the South Florida HMO—at the same time that HCFA contract monitors cited no problems with the HMO's compliance with quality standards.

HCFA's routine review and certification of an HMO's quality assurance program is completed without the participation of trained clinical staff and without systematic consideration of PRO findings. A routine HCFA review visit at an HMO generally involves about three people, without specialized clinical or quality assurance training, who spend a week or less focused largely on Medicare requirements for administration, management, and beneficiary services rather than on medical quality assurance. About a third of staff time is typically spent on quality-related matters. Monitoring officials at HCFA headquarters and in the regions expressed the need for additional trained staff to properly assess HMOs' quality assurance systems. In contrast with HCFA's approach, NCQA reviews also last about a week but focus primarily on quality assurance. The NCQA review team typically consists of three people, including two physicians and another clinician or administrator experienced in HMO operations. In addition to reviewing an HMO's quality assurance program design, NCQA reviewers also test it by reviewing records and interviewing providers to assess whether the system is functioning as designed.

Since 1994, HCFA has been studying ways to improve both quality standards for HMOs and its methods for reviewing quality assurance. Through these efforts, HCFA is seeking to improve its current HMO certification process and to assess ways to coordinate with other organizations that oversee HMOs. Three other internal HCFA studies of its quality assurance certification practices were completed over the past 2 years.

Another factor limiting the effectiveness of HCFA's monitoring of quality of care in HMOs has been the lack of data on beneficiaries' utilization of services. In the fee-for-service sector, claims data are available and can be used to detect potential overutilization of services. No comparable data exist in the Medicare HMO program to detect potential underutilization. As a result, even such basic information as hospitalization rates; the use of home health care; or the number of people receiving preventive services, such as mammograms, is unknown. Federal standards require that HMOs have information systems to report utilization data and management systems to monitor utilization of services. Yet HMOs often lack these data, and HCFA has not required that such data be standardized or that the data be submitted to HCFA and to the PROs. HCFA has broad legal authority to require that HMOs regularly report a wide variety of statistical information to the federal government.
This includes specific authority to require data on patterns of utilization of medical services in HMOs and on the availability and acceptability of those services.

In contrast with HCFA, the private sector has, over the past few years, moved to develop information and data standards that could enable purchasers and consumers to compare different HMOs. To enable such assessment of health plans' cost effectiveness and performance, a group of large employers and HMOs working with NCQA is developing the Health Plan Employer Data and Information Set (HEDIS). These data constitute a set of performance measures to evaluate plans' quality of care, access to care, member satisfaction, utilization of services, and financial stability. Some employers already require their plans to submit HEDIS-based information. HCFA has now picked up on the private sector's approach and has begun developing HEDIS-type HMO performance measures geared to services provided to elderly Medicare beneficiaries. A test of an initial set of measures covers the preventive services of flu immunization and mammography screening and will document the care of beneficiaries with diabetes. HCFA recently began testing the measures in 5 states and 23 HMOs.

HCFA's HMO quality assurance monitoring processes also do not adequately address risk-sharing arrangements between HMOs and their providers. The agency does not routinely assess whether HMO risk-sharing arrangements create a significant incentive to underserve, although the Congress gave the Department of Health and Human Services (HHS) authority, beginning April 1, 1991, to limit arrangements that it found provided an excessive incentive to underserve. As of April 1995, HHS was still developing regulations and had not developed methods for gauging how much risk an HMO can legitimately pass to providers, or requirements that providers must meet to accept such risk. One HMO that a PRO identified as having an unusually high number of quality of care problems has been a concern to HCFA reviewers for several years because of its financial risk arrangements with providers. Under its risk-sharing arrangement, the HMO takes about 23 percent of its capitated payment for ambulatory services to administer the program and uses the remaining 77 percent to make capitated payments to providers. Over the years, several providers have lost money on care they provided to the HMO's patients. These providers—often individual physicians or small physician groups—are financially responsible for providing the HMO's enrollees all of their needed ambulatory services. Under such arrangements, every time a patient uses a primary care physician or specialist, or obtains diagnostic tests, the money comes out of the capitated payments the HMO makes to the provider responsible for delivering and managing the care. This could give providers financial incentives to withhold services, particularly if they are losing money on the HMO's patients.

In 1988 and again in 1991, we reported that HCFA was not using its enforcement authority effectively to obtain corrective action from those HMOs that were slow in correcting problems. As highlighted earlier, the cases we reviewed for this report illustrate this problem. The Congress granted HCFA authority to impose sanctions or monetary penalties on HMOs that fail to meet federal standards. HCFA's sanction authorities include stopping enrollment, stopping payment for new Medicare enrollees, imposing monetary penalties, and revoking Medicare contracts.
HCFA can impose these sanctions or penalties for such actions as abusive marketing or underserving beneficiaries. Although the Congress first gave HCFA sanction authority in 1986, it was not until 1994 that HCFA issued regulations implementing this authority. Pursuing sanctions against noncompliant HMOs can be an administratively cumbersome and staff-intensive process, according to HCFA officials.

HCFA's enforcement approach is to seek to document the causes of an HMO's problems and to attempt to get the HMO to correct them, without resorting to sanctions. Under this approach, after HCFA staff show that an HMO is not meeting federal standards, the HMO has an opportunity to address deficiencies by developing a corrective action plan. If the HMO does not implement the corrective action or the action is inadequate, HCFA staff then investigate the HMO's operations to further document the problems. An investigation could result in HCFA finding noncompliance and requesting a new corrective action plan. The process can then repeat itself. The outcome of this approach is that an HMO, without sanction, can take years to correct identified deficiencies. We question whether this serves the best interests of Medicare or HMO beneficiaries. Two cases illustrate this:

An Illinois HMO enrolled 29,600 people during a period of marketing abuses. In 1991, while the HMO was under investigation nationally for Medicare HMO marketing abuses, it purchased an Illinois HMO with a Medicare contract. By early 1992, HCFA noted that one-third of new enrollees in the plan disenrolled within 3 months. Moreover, HCFA began receiving beneficiary complaints about salespersons' misrepresentations and high-pressure tactics. HCFA's March 1994 review of the HMO's marketing cited numerous instances of deceptive and high-pressure sales tactics, including misrepresentation. HCFA also found instances of prohibited payments or gifts to induce people to enroll. In April 1994, the HMO submitted a corrective action plan addressing its marketing tactics and supervision of commissioned sales agents. In August 1994, HCFA and the HMO agreed on milestones for lowering the HMO's disenrollment rates. HCFA is monitoring the HMO's progress toward lowering its disenrollment and complaint rates. (See table I.1.)

A California HMO tripled its Medicare membership during a period when provider claims were not promptly paid and beneficiaries did not receive their appeal rights. HCFA's 1992 monitoring report noted the HMO's late payment of claims from providers and failure to process beneficiary appeals in a timely manner. The HMO submitted a corrective action plan to HCFA and for the next 2 years reported progress in achieving compliance. In 1994, however, HCFA found that the problems persisted. HCFA concluded that the HMO lacked sufficient staff and systems to organize, plan, control, and evaluate the administrative and management aspects of its Medicare operations. For example, HCFA found that the HMO failed to pay in a timely manner over 64 percent of the claims in a sample HCFA reviewed. In over 62 percent of a sample of appeals cases, HCFA found that the HMO failed to forward beneficiaries' appeals to HCFA within the specified 60 days. HCFA's February 1995 visit found that the HMO had made substantial improvements in processing claims and appeals, although problems remained. HCFA found additional, unrelated problems as well. The HMO submitted a corrective action plan—its third in 3 years—in April 1995.
In May 1995, HCFA approved most of the elements in the plan. The HMO submitted a revised corrective action plan addressing the remaining elements in June 1995. (See table I.2.)

HCFA does not routinely release to the public the results of its monitoring visits or the comparative performance indicators it collects. Consequently, when an HMO violates federal standards, Medicare beneficiaries could remain unaware of problems that could influence their decision to join or remain enrolled in that HMO. HCFA's reluctance to disclose the HMO-specific information it develops can work to the benefit of poor-performing HMOs, to the detriment of beneficiaries who make less-informed selections, and to the detriment of HMOs that comply with standards.

Although intended as a beneficiary protection against potential underservice by HMOs, the appeal process is too slow to effectively resolve disputes over services that beneficiaries believe are urgently needed. Moreover, some HMOs have extended the process even more by not processing beneficiaries' appeals within the prescribed time frames. As a result, some beneficiaries return to fee-for-service Medicare to obtain the services they believe they need, while others remain in HMOs but incur substantial out-of-pocket expenses with little certainty of repayment.

Under Medicare regulations, beneficiaries in HMOs may appeal denials of service or the HMO's refusal to pay for services obtained from out-of-plan providers. The appeal process requires first that the HMO deny the service and second that the beneficiary ask for a reconsideration of the denial. If the reconsideration decision is not fully favorable to the beneficiary, the HMO is required to send the denial, along with medical information concerning the disputed services, to a HCFA contractor that adjudicates such denials. Since 1989, HCFA has performed its appeal reconsideration function through a contractor—the Network Design Group (NDG) of Pittsford, New York. NDG hires physicians, nurses, and other clinical staff to evaluate beneficiaries' medical need for contested services and make reconsideration decisions.

Under current HCFA standards, the process allows up to 6 months from the initial determination before an HMO must forward an appeal to HCFA, as shown in figure 1. Some HMOs take longer than HCFA standards allow, contributing to further delays. For instance, although HCFA allows HMOs a maximum of 60 days to reconsider a beneficiary's appeal, HCFA has found that several HMOs in California and Florida inappropriately retained beneficiary appeals for between 130 and 200 days, on average, before forwarding them to HCFA's adjudication contractor. Beneficiaries appealing their HMOs' coverage denials for nursing home care, home health care, or urgently needed care may find that the process does not work quickly enough. In addition to the time it takes for an appeal to reach HCFA, most cases that reach HCFA for reconsideration have taken longer to resolve than the 30-day target that HCFA and its contractor strive for. In 1993, 38 percent of appeals to HCFA were straightforward enough for HCFA's contractor to decide within 30 days; about 45 percent required about 3-1/2 months; and more complex cases, where medical information was missing or where Medicare coverage rules were unclear, took over 6 months. Three examples illustrate how the process works for Medicare beneficiaries:

A newly enrolled beneficiary requested physical therapy from an HMO physician to alleviate back pain.
The beneficiary had suffered for years from severe back problems, which had been controlled by physical therapy. Although the HMO physician prescribed 17 sessions of physical therapy, the plan covered only one session. The beneficiary unsuccessfully appealed to the HMO and HCFA. More than a year after her therapy services were denied, the beneficiary was still waiting for a decision from an Administrative Law Judge.

A beneficiary finding himself unable to walk or urinate was admitted to a hospital not affiliated with his HMO. He was discharged 2 weeks later, only to be readmitted the next day after falling at home. The HMO denied the hospital's claim for $23,600 in services because it did not consider the need for care an emergency. The hospital billed the beneficiary. HCFA's reconsideration contractor concluded that the hospital services were needed to prevent renal failure, infection, and other complications. HCFA's contractor found the HMO liable for the cost of the hospital services—over 7 months after the HMO's initial denial.

Following surgery for lung cancer, a beneficiary repeatedly complained of pain and tenderness in the chest. X-rays done shortly after surgery indicated possible remaining cancer, but no follow-up was done. After 14 months of continued complaints of pain, externally visible swelling led to new tests and the diagnosis that cancer had spread to the chest wall. An HMO oncologist explained that the only treatment available through the HMO had a modest success rate and expressed willingness to refer the patient to a non-HMO center offering another treatment with a reported high success rate. The HMO denied the beneficiary's request, and the beneficiary requested the services two more times. Although the HMO denied the services three times, it did not inform the beneficiary of his right to appeal. The HMO forwarded the case to HCFA for reconsideration after the third denial. At this point, the beneficiary learned from HCFA's contractor of the ongoing appeal and his right to reconsideration. HCFA's contractor upheld the HMO's denial because of the experimental nature of the requested treatment and because the HMO offered a treatment considered appropriate. The beneficiary paid for $13,000 in services he obtained from the non-HMO center before deciding to return to fee-for-service Medicare, which covered the treatment for some beneficiaries.

Medicare HMO beneficiaries who pay for services they believe are needed may be liable for those costs. In 1994, HCFA decided over 3,100 appeals, 80 percent of which were denied claims for reimbursement of services obtained from providers not affiliated with the HMO. The average claim was about $4,300, and the disputed claims totaled over $15 million. HCFA's reconsideration contractor upheld HMO denials in 64 percent of the appeals, leaving beneficiaries liable for over $11 million in claims.

HCFA is aware of the potential for improving the appeal process and has taken some steps toward this end. In November 1994, HCFA clarified its rules, allowing a beneficiary to appeal without a written denial notice from the plan. This could remove a significant barrier that beneficiaries in some HMOs faced in initiating appeals. HCFA also issued a rule in November 1994 extending to beneficiaries in HMOs the right to obtain expedited PRO review of HMO decisions to discharge them from a hospital.
Since 1986, fee-for-service Medicare beneficiaries have been able to request such a PRO review of a hospital's discharge order when they believe they should remain hospitalized. HCFA operations officials also recognize the potential for further improvements. They have proposed an expedited review process for decisions on care perceived as urgently needed. They also propose to look at ways to better educate beneficiaries on their appeal rights and the appeal process.

Some large employers, acting as the sponsors of their employees in selecting health care plans, have begun to use accreditation and performance data in checking HMOs' value and in deciding whether to accept an HMO into their health plans. Nearly half the HMOs in the country will have undergone NCQA review by the end of 1995. NCQA accreditation focuses primarily on standards related to quality assurance and use of services—the areas in which federal certification reviews are relatively weak. The HEDIS performance measurement set is expected to take the place of the varying data requests employers already make to evaluate plans' quality of care, access to care, member satisfaction, utilization of services, and financial stability. The private sector also disseminates quality-related information to purchasers and users. NCQA publicizes its accreditation decisions, which allows employers and employees to consider accreditation status in their HMO decisions. The effect is that HMOs that do not obtain accreditation can lose business. For example, a consortium of employers has elected to exclude a Florida HMO from new business with their employer-sponsored health plans because of the HMO's failure to obtain accreditation.

HCFA is the sponsor for Medicare beneficiaries in the selection and oversight of Medicare contract HMOs, much as employers are for their employees' health plans. HCFA, however, does not routinely provide beneficiaries the results of its monitoring reviews or other performance-related information, such as HMO disenrollment rates or beneficiary complaints, although it does routinely collect and analyze data on Medicare HMOs' enrollment and disenrollment rates, appeals, beneficiary complaints, financial condition, availability of and access to services, and marketing strategies.

HCFA has made ongoing improvements that enhance its ability to monitor HMOs and enforce federal standards. These improvements in HMO contract oversight are in addition to those already mentioned. For example, HCFA has progressively improved its collection and summarization of comparative performance indicators on individual HMOs and makes these available to contract monitoring staff, which can aid HCFA in detecting problems in some cases. The indicators include enrollment and disenrollment statistics, including rapid or early disenrollments, and rates of beneficiaries' appeals of denied care. In addition, three HCFA regional offices, accounting for about three-fourths of Medicare HMO enrollments, have implemented an automated tracking system for complaints.

Beginning in 1994, HCFA has more aggressively used its regulatory authority under title XIII of the Public Health Service Act to get at the root causes of HMO quality assurance problems. HCFA officials explained that they use the results to work cooperatively with plans' top management to correct weaknesses. Four investigations have been conducted since July 1994 on HMOs with apparent quality assurance problems, and a fifth was recently started.
The first three of these investigations were done at Florida HMOs and resulted in findings of noncompliance with federal standards. The experience of designing and conducting these investigations provides an excellent basis for HCFA to design routine monitoring reviews that test HMOs’ internal quality assurance. However, the experience gained from these investigations shows that increased staffing with better training or qualifications may be necessary for HCFA’s routine monitoring. HCFA also announced plans to begin annual site visits to HMOs, beginning in fiscal year 1996. Annual reviews may be helpful where HCFA needs follow-up verification that HMOs have corrected deficiencies. They also may permit HCFA to focus in any one year on a particular aspect of HMO operations, potentially increasing effectiveness.

HCFA recognizes that it needs to be more active as a sponsor for beneficiaries enrolling in Medicare HMOs. This entails selecting qualified HMOs to participate in the program, protecting beneficiaries’ interests after they join an HMO, and informing beneficiaries of HMO performance. Although HCFA, to its credit, has taken a number of positive actions, it has not (1) adequately developed and staffed routine monitoring of HMOs’ quality assurance and other key operations to protect beneficiaries’ interests; (2) taken actions to obtain prompt compliance with existing quality-of-care or other beneficiary protection standards from those HMOs that are slow to correct problems; or (3) given Medicare beneficiaries available information that could help them decide to enroll or to remain enrolled in an HMO. Moreover, HCFA has not issued regulations, originally called for in 1986 legislation, defining acceptable levels of financial risk an HMO can transfer to subcontracted providers.

Private sector progress, weighed against continued shortcomings in HCFA’s current compliance approach, suggests that HCFA needs to overhaul that approach to be more consumer-oriented. This would include prohibiting noncompliant HMOs from continuing to enroll beneficiaries and publishing available data that beneficiaries can use to gauge HMOs’ relative performance. In addition, HCFA could strengthen its quality assurance review efforts and streamline its beneficiary appeal process. We have recommended a variety of similar changes over the past decade and have observed some improvements in monitoring. But HCFA has remained reluctant to take strong enforcement actions and continues to rely on reviews of HMOs’ quality assurance practices that do not verify their effectiveness.

We recommend that the Secretary of HHS direct the HCFA Administrator to develop a new, more consumer-oriented strategy for administering the Medicare HMO program. This should include directing HCFA to routinely publish (1) comparative data it collects on HMOs, such as complaint rates, disenrollment rates, and rates and outcomes of appeals, and (2) the results of its investigations and any findings of noncompliance by HMOs; to verify the effective operation of all HMOs’ quality assurance and utilization management practices by applying sufficient trained staff during routine monitoring and by integrating PRO findings into HCFA’s compliance monitoring reviews; and to explore further options to streamline the appeal process.
The Department of Health and Human Services disagreed with many of the report’s findings, emphasizing that the report discusses monitoring and enforcement problems that occurred years ago and largely ignores substantial changes made in the last 2 years. HHS agreed, however, that there is room for improvement in the appeal process and in providing information to consumers. The full text of HHS’ comments appears in appendix III.

With regard to HHS’ concerns about our use of old information, the three enforcement cases presented in this report were as timely a test of HCFA processes as we could select at the time of our review. They were the only cases identified by HCFA as either under investigation or having the potential for legal action when we began our fieldwork in June 1994. In addition, the South Florida quality assurance monitoring case involved the first HMO to undergo HCFA’s enhanced investigation effort to get at the root causes of problems.

HHS was also concerned that we did not examine important initiatives HCFA has recently undertaken to improve its HMO quality assurance monitoring. On the basis of additional information provided by HHS, we revised the report to recognize those initiatives that were relevant to the issues we addressed. Although we agree that HCFA’s recent efforts have improved its monitoring capability, they do not change our conclusion that HCFA’s routine monitoring of HMOs’ quality assurance practices does not go far enough to verify compliance with federal requirements. This is primarily an issue of applying sufficient and appropriately trained staff to the task, something recognized by HCFA’s own internal studies and endorsed by the HCFA operations staff we met with. Other issues that affect the quality of this monitoring—including the clarity and currency of regulations and standards—are the subject of ongoing HCFA studies.

HHS also disagreed with our position “that the number of times HCFA levies monetary penalties against HMOs is a measure of the intensity of . . . [the agency’s] . . . oversight efforts.” While we agree with HHS that monetary penalties can “simply become a cost of doing business for HMOs,” our point is that more aggressive enforcement can be more effective in bringing about HMO compliance. Our emphasis was on limiting HMOs’ enrollment of new members as a penalty until the HMOs can clearly demonstrate that they have identified and corrected the root causes of problems. Our report also highlights another method of encouraging HMO compliance: providing comparative HMO performance information to Medicare beneficiaries, who make marketplace decisions in selecting particular HMOs.

HHS noted that we should have more comprehensively compared HCFA and NCQA quality assurance standards. This was not done for two reasons. First, the difference between NCQA and HCFA reviews that we judged most relevant was that NCQA reviews apply sufficient numbers of trained staff to provide some verification that HMOs have effective quality assurance and utilization management operations, while HCFA’s routine reviews do not; the requirement for effective quality assurance and utilization management is common to both organizations’ standards. Second, HCFA had a contract in process to compare its standards and review process with NCQA’s and with several others. In the final analysis, our report emphasizes that HCFA is the primary sponsor of Medicare beneficiaries’ interests when they enroll in HMOs.
As such, HCFA has a responsibility to be proactive in its role, by collecting and publishing data to consumers in the marketplace, and by acting quickly and firmly to protect beneficiary interests when it has indications of poor care or abusive practices.

As arranged with your offices, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Health and Human Services. We will also make copies available to others upon request. If you or your staffs have any questions about this work, please call me on (202) 512-7123. Major contributors to this report are listed in appendix IV.

HCFA reviews the plan that the HMO proposes to purchase.
Based on troubles in Florida and Texas, HCFA starts investigation of marketing in all the HMO’s markets. Investigation continues through 1992.
HCFA notes that the HMO plans to raise marketing budget from $2.4 million to $6.4 million in fiscal 1991.
The HMO completes purchase of Chicago HMO.
HCFA requests that HMO investigate high rate of early disenrollments and complaints about marketing. HMO agrees that data indicate problem.
HCFA and HMO debate disenrollment data.
HCFA site visit report notes marketing area “met,” but disenrollments continue to be twice the national average. HCFA to continue to monitor.
HCFA asks HMO to investigate complaints about high-pressure and illegal marketing practices.
HCFA meets with HMO to “encourage” it on sales agent turnover, early disenrollment.
HMO questions the accuracy of HCFA’s disenrollment data.
HCFA decides to review HMO’s marketing operations.
HCFA conducts marketing review, then issues formal notice of investigation asking what the HMO will do to correct apparent violations of Medicare law. HCFA also threatens notice in Federal Register and suspension of enrollment.
HCFA acknowledges HMO’s corrective action plan and timetable to reduce early disenrollments.
HMO reports progress in reducing rate of early disenrollment.
Target date for closing title 13 investigation.

Table I.2: A California HMO Case History—Claims, Appeals, and Enrollment Processing

The HMO enters a Medicare risk contract as a competitive medical plan.
HMO obtains its first service area expansion.
HCFA conducts its first monitoring visit and finds that HMO lacks systems to ensure timely payment of claims and notification of denials.
HCFA approves a new service area expansion that was pending at the time of first monitoring visit.
HCFA report of first visit informs HMO that HCFA will withhold approval of any expansions until acceptable corrective action is implemented.
HMO submits a corrective action plan (CAP).
HCFA finds HMO’s CAP insufficient to improve claims processing. HCFA meets with the HMO to discuss revisions to CAP.
HMO submits an entirely new CAP addressing claims processing deficiencies.
HMO is granted third service area expansion.
HCFA approves HMO’s CAP.
HMO sends HCFA three progress reports indicating its medical groups are in compliance or near compliance with federal claims processing requirements.
HCFA’s routine monitoring visit finds that claims processing problems persist; HMO does not provide beneficiaries notice of denials and does not notify beneficiaries of reconsideration decisions within the allowed 60 days.
HCFA monitoring report stops further service area expansion until HMO can demonstrate that its operations are in compliance with Medicare standards.
HCFA sends HMO a letter of concern because several of the problems addressed in the 1992 monitoring report and corresponding CAP persisted.
HMO submits a CAP in response to the 1994 monitoring report.
HCFA approves 8 of 19 elements of HMO’s CAP.
HMO submits a significantly revised CAP to HCFA.
HCFA approves the 11 remaining elements of HMO’s CAP.
HCFA review of inquiries and complaints received from HMO’s members indicates problems in enrollment and disenrollment. HCFA asks HMO to investigate.
HCFA evaluates HMO’s response. Because of problems in the HMO’s operations, HCFA warns HMO it will evaluate the necessity of terminating its Medicare contract.
HCFA meets with HMO’s Chief Executive Officer over concerns raised by the influx of member-specific problems received by HCFA.
HCFA conducts a follow-up visit. HCFA reports substantial improvements in HMO’s processing of claims and appeals but finds significant problems in HMO’s handling of enrollments and disenrollments.
HCFA returns HMO’s application for a fourth service area expansion until HMO can demonstrate compliance with enrollment/disenrollment requirements.
HMO submits a third CAP.
HCFA approves 10 of the 14 elements in HMO’s CAP.
HMO submits a new CAP on the rejected elements.

We reviewed HCFA’s current HMO monitoring and enforcement practices and discussed them with managers and staff at HCFA’s Office of Managed Care, Health Standards and Quality Bureau, Region IX—San Francisco, Region IV—Atlanta, and Region V—Chicago. In addition, we interviewed the PROs for California and Florida to obtain their views about the Medicare HMO oversight process. We accompanied HCFA on an investigation of quality assurance practices at a Florida-based HMO with a Medicare contract. In addition, we selected ongoing enforcement cases to assess the effectiveness of HCFA’s oversight practices. We contacted officials from the HMOs cited as examples in this report.

We reviewed the statutory and regulatory requirements for the appeal process and discussed them with HCFA staff at the Office of Managed Care. We also interviewed a representative of Network Design Group, HCFA’s contractor for processing appeals. In addition, we obtained and analyzed data on the timeliness, types, and outcomes of beneficiary appeals to HCFA. We also discussed with HCFA officials proposals for improving the appeal process.

We discussed federal, state, and private review, licensing, and accreditation practices with officials from Florida’s Agency for Health Care Administration, the National Committee for Quality Assurance, the Group Health Association of America, and the Los Angeles-based consumer advocacy group, the Center for Health Care Rights. We also discussed beneficiaries’ rights to appeal denials of care with this last group.
Pursuant to a congressional request, GAO reviewed federal oversight of health maintenance organizations (HMO) that enroll Medicare beneficiaries, focusing on: (1) the Health Care Financing Administration’s (HCFA) monitoring of HMO compliance with federal quality assurance standards; (2) HCFA enforcement actions against HMOs that do not meet federal standards; (3) the process available to beneficiaries to appeal HMO decisions to deny care; and (4) approaches the private sector is taking to assure HMO beneficiaries of quality care. GAO found that although HCFA has instituted promising improvements, its process for monitoring and enforcing Medicare HMO performance standards still suffers because: (1) HCFA quality assurance reviews are not comprehensive; (2) HCFA does not adequately assess the financial risk arrangements that HMOs have with providers, which can create incentives to underserve beneficiaries; (3) HCFA has been reluctant to use available enforcement tools to correct HMO deficiencies and improprieties; and (4) beneficiaries who appeal HMO denials often wait 6 months or more for resolution, causing them to incur costs in the interim. In addition, GAO found that HCFA could improve its regulatory approach to ensuring good HMO performance by adopting private-sector practices, such as: (1) requiring that HMOs undergo accreditation reviews to obtain contracts with Medicare; and (2) using information about the care provided to beneficiaries to evaluate HMO performance when making contract decisions.
Although policies concerning compensation for deployed civilians are generally comparable across agencies, we found some issues that affect the amount of compensation civilians receive—depending on such things as the agency’s pay system or the civilian’s grade/band level—and the accuracy, timeliness, and completeness of this compensation. Specifically, the six agencies included in our review provided similar types of deployment-related compensation to civilians deployed to Iraq or Afghanistan. Agency policies regarding compensation for federal employees—including deployed civilians—are subject to regulations and guidance issued either by OPM or other executive agencies, in accordance with underlying statutory personnel authorities. In some cases, the statutes and implementing regulations provided agency heads with flexibility in how they administer their compensation policies. For example, agency heads are currently authorized by statute to provide their civilians deployed to combat zones with certain benefits—such as death gratuities and leave benefits—comparable to those provided to the Foreign Service, regardless of the employee’s underlying pay system.

However, some variations in compensation available to deployed civilians result directly from the employing agency’s pay system and the employee’s pay grade/band level. For example, deployed civilians, who are often subject to extended work hours, may expect to work 10-hour days, 5 days a week, resulting in 20 hours of overtime per biweekly pay period over the course of a year-long deployment. A nonsupervisory GS-12 step 1 employee receives a different amount of compensation for those overtime hours than a nonsupervisory NSPS employee who earns an equivalent salary. Specifically, the NSPS nonsupervisory employee is compensated at a rate equivalent to 1.5 times the normal hourly rate for overtime hours, while the GS nonsupervisory employee is compensated at a rate equivalent to 1.14 times the normal hourly rate.

Additionally, deployed civilians may receive different compensation based on their deployment status. Agencies have some discretion to determine the travel status of their deployed civilians based on a variety of factors—DOD, for example, looks at factors including length of deployment, employee and agency preference, and cost. Generally, though, deployments scheduled for 180 days or less are classified as “temporary duty” assignments, whereas deployments lasting more than a year generally result in an official “change of station” assignment. Nonetheless, when civilians are to be deployed long term, agencies have some discretion to place them in either temporary duty or change of station status, subject to certain criteria. The status under which civilians deploy affects the type and amount of compensation they receive. For example, approximately 73 percent of the civilians who were deployed between January 1, 2006, and April 30, 2008, by the six agencies we reviewed were deployed in temporary duty status and retained their base salaries, including the locality pay associated with their home duty stations. Civilians deployed to Iraq or Afghanistan as a change of station do not receive locality pay, but they do receive base salary and may be eligible for a separate maintenance allowance, which varies in amount based on the number of dependents the civilian has.
The civilian’s base salary also affects the computation of certain deployment-related pays, such as danger pay and post hardship differential, as well as the computation of premium pay such as overtime. Consequently, whether or not a civilian’s base salary includes locality pay can significantly affect the total compensation to which that civilian is entitled—resulting in differences of several thousand dollars. As a result of these variations, deployed civilians at equivalent pay grades who work under the same conditions and face the same risks may receive different compensation. As mentioned previously, the Subcommittee on Oversight and Investigations, House Armed Services Committee, recommended in April 2008 that OPM develop a benefits package for all federal civilians deployed to war zones, to ensure that they receive equitable benefits. At the time of our review, however, OPM had not developed such a package or provided legislative recommendations. OPM officials stated that DOD had initiated an interagency working group to discuss compensation issues and that this group had developed some proposals for legislative changes. However, they noted that these proposals had not yet been submitted to Congress, and the proposals do not, according to DOD officials, represent a comprehensive package for all civilians deployed to war zones, as recommended by the Subcommittee.

Furthermore, compensation policies were not always implemented accurately or in a timely manner. For example, we project that approximately 40 percent of the estimated 2,100 civilians deployed from January 1, 2006, to April 30, 2008, experienced problems with compensation—such as not receiving danger pay or receiving it late—in part because they were unaware of their eligibility or did not know where to go for assistance to start and stop these deployment-related pays. In fact, officials at four agencies acknowledged that they have experienced difficulties in effectively administering deployment-related pays, in part because there is no single source delineating the various pays associated with deployment. As we previously reported concerning their military counterparts, unless deployed personnel are adequately supported in this area, they may not receive all of the compensation to which they are entitled.

Additionally, in January 2008, Congress authorized an expanded death gratuity—under the Federal Employees’ Compensation Act (FECA)—of up to $100,000 to be paid to the survivor of a deployed civilian whose death resulted from injuries incurred in connection with service with an armed force in support of a contingency operation. Congress also gave agency heads discretion to apply this death gratuity provision retroactively to any such deaths occurring on or after October 7, 2001, as a result of injuries incurred in connection with the civilian’s service with an armed force in Iraq or Afghanistan. At the time of our review, Labor—the agency responsible for the implementing regulations under FECA—had not yet issued its formal policy. Labor officials told us that, because of the recent change in administration, they could not provide us with an anticipated issue date for the final policy. Officials from the six agencies included in our review stated that they were delaying the development of policies and procedures to implement the death gratuity until after Labor issues its policy. As a result, some of these agencies had not moved forward on these provisions.
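The scale of the pay differences described above can be made concrete with a short worked example. In the following minimal sketch, the 1.5 and 1.14 overtime multipliers and the 20 overtime hours per biweekly pay period come from the discussion above; the hourly rate, base salary, locality percentage, and the 35 percent rates assumed for danger pay and post hardship differential are hypothetical figures chosen only for illustration.

    # Illustrative sketch of the compensation differences discussed above.
    # The 1.5x (NSPS) and 1.14x (capped GS) overtime multipliers and the
    # 20 overtime hours per biweekly pay period come from the report; the
    # hourly rate, base salary, locality percentage, and the 35 percent
    # rates assumed for danger pay and post hardship differential are
    # hypothetical.

    PAY_PERIODS = 26  # biweekly pay periods in a year-long deployment
    OT_HOURS = 20     # overtime hours per pay period (10-hour days, 5 days a week)

    def annual_overtime(hourly_rate: float, multiplier: float) -> float:
        """Overtime pay over a year-long deployment at a fixed multiplier."""
        return hourly_rate * multiplier * OT_HOURS * PAY_PERIODS

    hourly = 30.00  # assumed equivalent hourly rate for both employees
    print(round(annual_overtime(hourly, 1.50), 2))  # NSPS nonsupervisory: 23400.0
    print(round(annual_overtime(hourly, 1.14), 2))  # GS nonsupervisory:   17784.0

    def danger_and_hardship(base_salary: float, rate: float = 0.35) -> float:
        """Danger pay plus post hardship differential, each an assumed
        percentage of the salary on which deployment pays are computed."""
        return base_salary * rate * 2

    base = 70_000.00  # assumed annual base salary without locality pay
    with_locality = danger_and_hardship(base * 1.20)  # assumed temporary duty
    without_locality = danger_and_hardship(base)      # assumed change of station
    print(round(with_locality - without_locality, 2))  # 9800.0

Under these assumptions, the NSPS employee would earn $5,616 more in overtime over the year, and retaining locality pay would add $9,800 across the two deployment-related pays, consistent with the differences of several thousand dollars noted above.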
We therefore recommended that (1) OPM oversee an executive agency working group on compensation for deployed civilians to address any differences and, if necessary, make legislative recommendations; (2) the agencies included in our review establish ombudsman programs or, for agencies deploying small numbers of civilians, focal points to help ensure that deployed civilians receive the compensation to which they are entitled; and (3) Labor set a time frame for issuing implementing guidance for the death gratuity. We provided a copy of the draft report to the agencies in our review. With the exception of USAID, which stated that it already had an ombudsman to assist its civilians, all of the agencies generally concurred with these recommendations. USAID officials, however, did not provide any documentation to support the establishment of the ombudsman position. In the absence of such documentation, we continue to believe our recommendation has merit. Finally, the Department of Labor has since published an interim final rule implementing the $100,000 death gratuity under FECA.

Although agency policies on medical benefits are similar, we found some issues with policies related to medical treatment following deployment and with the implementation of workers’ compensation and post-deployment medical screening that affect the medical benefits of these civilians. DOD and State guidance provides for medical care of all civilians during their deployments—regardless of the employing agency. For example, DOD policies entitle all deployed civilians, while they are in theater, to the same level of medical treatment as military personnel. State policies entitle civilians serving under the authority of the Chief of Mission to treatment for routine medical needs at State facilities while they are in theater. While DOD guidance provides for care at military treatment facilities for all DOD civilians—under workers’ compensation—following their deployments, the guidance does not clearly define the “compelling circumstances” under which non-DOD civilians would be eligible for such care. Because DOD’s policy is unclear, confusion exists within DOD and other agencies regarding civilians’ eligibility for care at military treatment facilities following deployment. Furthermore, officials at several agencies were unaware that civilians from their agencies were potentially eligible for care at DOD facilities following deployment, in part because these agencies had not received guidance from DOD about this eligibility. Because some agencies are not aware of their civilians’ eligibility for care at military treatment facilities following deployment, these civilians cannot benefit from the efforts DOD has undertaken in areas such as post-traumatic stress disorder.

Moreover, civilians who deploy may also be eligible for medical benefits through workers’ compensation if Labor determines that their medical condition resulted from personal injury sustained in the performance of duty during deployment. Our review of all 188 workers’ compensation claims related to deployments to Iraq or Afghanistan that were filed with Labor between January 1, 2006, and April 30, 2008, found that Labor requested additional information in support of the claims in 125 cases, resulting in increased processing times that in some instances exceeded the department’s standard goals for processing claims.
Twenty-two percent of the respondents to our survey who had filed workers’ compensation claims stated that their agencies provided them with little or no support in completing the paperwork for their claims. Labor officials stated that applicants failed to provide adequate documentation, in part because they were unaware of the type of information they needed to provide. Furthermore, our review of Labor’s claims process indicated that Labor’s form for a traumatic injury did not specify what supporting documents applicants had to submit to substantiate a claim. Specifically, while this form states that the claimant must “provide medical evidence in support of a disability,” the type of evidence required is not specifically identified. Without clear information on what documentation to submit in support of their claims, applicants may continue to experience delays in the process. Additionally, DOD requires deploying civilians to be medically screened both before and following their deployments. However, post-deployment screenings are not always conducted, because DOD lacks standardized procedures for processing returning civilians. Approximately 21 percent of DOD civilians who responded to our survey stated that they did not complete a post-deployment health assessment. In contrast, State generally requires a medical clearance as a precondition to deployment but has no formal requirement for post-deployment screenings of civilians who deploy under its purview. Our prior work has found that documenting the medical condition of deployed civilians both before and following deployment is critical to identifying conditions that may have resulted from deployment, such as traumatic brain injury. To address these matters, we recommended that (1) DOD clarify its guidance concerning the circumstances under which civilians are entitled to treatment at military treatment facilities following deployment and formally advise other agencies that deploy civilians of its policy governing treatment at these facilities; (2) Labor revise the application materials for workers’ compensation claims to make clear what documentation applicants must submit with their claims; (3) the agencies included in our review establish ombudsman programs or, for agencies deploying small numbers of civilians, focal points to help ensure that deployed civilians get timely responses to their applications and receive the medical benefits to which they are entitled; (4) DOD establish standard procedures to ensure that returning civilians complete required post-deployment medical screenings; and (5) State develop post-deployment medical screening requirements for civilians deployed under its purview. The agencies generally concurred with these recommendations, with the exception of USAID, which stated that it already had an ombudsman to assist its civilians. USAID officials, however, did not provide any documentation to support the establishment of the ombudsman position. In the absence of such documentation, we continue to believe our recommendation has merit. While each of the agencies we reviewed was able to provide a list of deployed civilians, none of these agencies has fully implemented policies and procedures to identify and track its civilians who have deployed to Iraq and Afghanistan. DOD, for example, issued guidance and established procedures for identifying and tracking deployed civilians in 2006 but concluded in 2008 that its guidance and associated procedures were not being consistently implemented across the agency. 
In 2008 and 2009, DOD reiterated its policy requirements and again called for DOD components to comply. The other agencies we reviewed have some ability to identify deployed civilians, but they did not have any specific mechanisms designed to identify or track location-specific information on these civilians. As we have previously reported, the ability of agencies to report location-specific information on employees is necessary to enable them to identify potential exposures or other incidents related to deployment. Lack of such information may hamper these agencies’ ability to intervene quickly to address any future health issues that may result from deployments in support of contingency operations. We therefore recommended that (1) DOD establish mechanisms to ensure that its policies to identify and track deployed civilians are implemented and (2) the five other executive agencies included in our review develop policies and procedures to accurately identify and track standardized information on deployed civilians. The agencies generally concurred with these recommendations, with the exception of USAID, which stated that it already had an appropriate mechanism to track its civilians. We disagree with USAID’s position, since it does not have an agencywide system for tracking civilians, and continue to believe that our recommendation is appropriate.

Deployed civilians are a crucial resource for success in the ongoing military, stabilization, and reconstruction operations in Iraq and Afghanistan. Most of the civilians who deploy to these assignments—68 percent of those in our review—volunteered to do so, are motivated by a strong sense of patriotism, and are often exposed to the same risks as military personnel. Because these civilians are deployed from a number of executive agencies and work under a variety of pay systems, any inconsistencies in the benefits and compensation they receive could affect that volunteerism. Moreover, ongoing efforts within DOD and State to establish a cadre of deployable civilians further emphasize that the federal government recognizes the important role these federal civilians play in supporting ongoing and future contingency operations and stabilization and reconstruction efforts throughout the world. Given the importance of the missions these civilians support and the potential dangers in the environments in which they work, agencies should make every reasonable effort to ensure that the compensation and benefits packages associated with such service overseas are appropriate and comparable for civilians who take on these assignments. It is equally important that federal executive agencies that deploy civilians make every reasonable effort to ensure that these civilians receive all of the compensation and medical benefits to which they are entitled. These efforts include maintaining sufficient data to enable agencies to inform deployed civilians about any emerging health issues that might affect them.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) and other executive agencies increasingly deploy civilians in support of contingency operations in Iraq and Afghanistan. Prior GAO reports show that the use of deployed civilians has raised questions about the potential for differences in policies on compensation and medical benefits. When these civilians are deployed and serve side by side, differences in compensation or medical benefits may become more apparent and could adversely affect morale. This statement is based on GAO’s June 2009 congressionally requested report, which compared agency policies and identified any issues in policy or implementation regarding (1) compensation, (2) medical benefits, and (3) identification and tracking of deployed civilians. GAO reviewed laws, agency policies, and guidance; interviewed responsible officials at the Office of Personnel Management (OPM) and the six selected agencies, including DOD and State; reviewed workers’ compensation claims filed by deployed civilians with the Department of Labor from January 1, 2006, through April 30, 2008; and conducted a survey of deployed civilians. GAO made ten recommendations for agencies to take actions such as reviewing compensation laws and policies, establishing medical screening requirements, and creating mechanisms to assist and track deployed civilians. At the time of this testimony, the agencies were in various stages of taking action.

While policies concerning compensation for deployed civilians are generally comparable, GAO found some issues that affect the amount of compensation—depending on such things as the pay system—and the accuracy, timeliness, and completeness of this compensation. For example, two comparable civilian employees who deploy under different pay systems may receive different rates of overtime pay because this rate is set by the employee’s pay system and grade/band. While a congressional subcommittee asked OPM to develop a benefits package for all civilians deployed to war zones and recommend enabling legislation, at the time of GAO’s review, OPM had not yet done so. Also, implementation of some policies may not always be accurate or timely. For example, GAO estimates that about 40 percent of the deployed civilians in its survey reported experiencing problems with compensation, including danger pay. GAO recommended, among other things, that OPM oversee an agency working group on compensation to address differences and, if necessary, make legislative recommendations. OPM generally concurred with this recommendation.

Although agency policies on medical benefits are similar, GAO found some issues with medical care following deployment, workers’ compensation, and post-deployment medical screenings that affect the benefits of deployed civilians. Specifically, while DOD allows its treatment facilities to care for non-DOD civilians following deployment in some cases, the circumstances are not clearly defined, and some agencies were unaware of DOD’s policy. Civilians who deploy also may be eligible for benefits through workers’ compensation. GAO’s analysis of 188 such claims revealed some significant delays, resulting in part from a lack of clarity about the documentation required. Without clear information on what documents to submit, applicants may continue to experience delays. Further, while DOD requires medical screening of civilians before and following deployment, State requires screenings only before deployment.
Prior GAO work found that documenting the medical condition of deployed personnel before and following deployment was critical to identifying conditions that may have resulted from deployment. In June 2009, GAO recommended, among other things, that State establish post-deployment screening requirements and that DOD establish procedures to ensure its post-deployment screening requirements are completed. Each agency provided GAO with a list of deployed civilians, but none had fully implemented policies to identify and track these civilians. DOD, for example, had procedures to identify and track civilians but concluded that its guidance was not consistently implemented. While the other agencies had some ability to identify and track civilians, some had to manually search their systems. Thus, agencies may lack critical information on the location and movement of personnel, which may hamper their ability to intervene promptly to address emerging health issues. GAO recommended that DOD enforce its tracking requirements and that the other five agencies establish tracking procedures. DOD and four agencies concurred with the recommendations; one agency did not.
The Cleveland voucher program, officially called the Cleveland Scholarship and Tutoring Program, provides state funding to help primarily low-income children in kindergarten through the eighth grade attend private schools in Cleveland or public schools in districts adjacent to the Cleveland school district. The voucher program was implemented in the 1996–97 school year, and only private schools have participated in it. Students new to the program generally start in kindergarten through the third grade and may have previously attended a public or a private school or may never have attended school. In June 2000, the Cleveland program had about 3,400 voucher students enrolled in 52 private schools, which received about $5.2 million in publicly funded voucher payments for the 1999–2000 school year. By comparison, the Cleveland school district in 1999–2000 had about 76,000 students enrolled in its 121 schools, supported by $712 million in total revenues.

In Cleveland, actual voucher payments follow the student to the school attended, even when he or she changes schools. Voucher checks are made out to the student’s parent or guardian and require endorsement before the school can use the funds. These funds are sent to the participating schools in two payments. Prior to payment, a voucher payment report listing all current voucher students is generated for each participating school. Each school verifies this report as accurate or updates it before it is sent to the Ohio Department of Education’s School Finance Division to be processed for payment. For low-income voucher students, the voucher amount is limited to 90 percent of school tuition, up to a maximum of $2,250. For voucher students who do not come from low-income families, the voucher amount is limited to 75 percent of school tuition, up to a maximum of $1,875. Any payments sent to a voucher school are proportionately reduced if a student is not enrolled in the school for the entire period covered by the scheduled voucher payment. (These payment rules are illustrated in the sketch at the end of this section.)

About 90 percent of the Cleveland voucher schools are religious schools. The constitutionality of providing state-funded vouchers for attendance at religious schools has been challenged in the courts since the program’s inception. In December 2000, the U.S. Court of Appeals for the Sixth Circuit ruled that the program is unconstitutional because it has the effect of advancing religion and constitutes an endorsement of religion and sectarian education in violation of the first amendment. Subsequently, the court of appeals decided that the program could continue operating while interested parties seek U.S. Supreme Court review of its ruling.

Two teams have conducted research on the academic achievement of students in Cleveland’s voucher program. The first was the contract research team, from Indiana University, which the Ohio Department of Education contracted to conduct a multiyear evaluation of the program. The second team, supported by Harvard University’s Program on Education Policy and Governance (the Harvard researchers), conducted its own studies. The contract research team analyzed students’ academic achievement in school years 1996–97 and 1997–98, the first 2 years of the voucher program. The Harvard team reanalyzed the contract research team’s data for the first year, along with 1996–97 data from two additional private schools participating in the voucher program.
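To make the Cleveland payment rules concrete, the following minimal sketch computes a voucher payment under the percentage limits, dollar caps, and pro rata reduction described above. The function name and the sample tuition figures are illustrative assumptions, not part of the program’s rules.

    # Illustrative sketch of the Cleveland voucher payment rules described
    # above. The 90 percent/$2,250 and 75 percent/$1,875 limits and the
    # pro rata reduction come from the report; the tuition figures and the
    # function name are hypothetical.

    def cleveland_voucher(tuition: float, low_income: bool,
                          fraction_enrolled: float = 1.0) -> float:
        """Payment: a percentage of tuition up to a cap, reduced
        proportionately if the student is enrolled for only part of the
        period covered by the scheduled payment."""
        if low_income:
            amount = min(0.90 * tuition, 2250.00)
        else:
            amount = min(0.75 * tuition, 1875.00)
        return amount * fraction_enrolled

    # A low-income student at a school charging $2,000 tuition:
    print(cleveland_voucher(2000.00, low_income=True))              # 1800.0
    # The same student at a school charging $3,000 (the cap binds):
    print(cleveland_voucher(3000.00, low_income=True))              # 2250.0
    # Enrolled for only half of the period covered by the payment:
    print(cleveland_voucher(3000.00, True, fraction_enrolled=0.5))  # 1125.0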
The Milwaukee voucher program, officially called the Milwaukee Parental Choice Program, provides state funding exclusively for low-income children in Milwaukee to attend private schools and was first implemented in the 1990–91 school year. Wisconsin initially limited participation to nonsectarian private schools but amended the program to include religious schools in 1995. For the 1994–95 school year, 771 full-time equivalent voucher students attended 12 nonreligious schools. Following legal challenges to the 1995 program revision permitting religious school participation, the Wisconsin Supreme Court upheld the revision in 1998, and program enrollment tripled when Milwaukee voucher students began attending religious schools in the 1998–99 school year. Subsequently, the U.S. Supreme Court chose not to hear an appeal of the Wisconsin Supreme Court decision that the program did not violate the First Amendment of the United States Constitution. In school year 1998–99, nearly three-quarters of the participating schools were religious.

Currently, students new to the program may start in kindergarten through the 12th grade if, in the year prior to enrolling, they attended a Milwaukee public school; attended a Milwaukee private school in kindergarten, first, second, or third grade; or never attended school anywhere. In the 1999–2000 school year, the Milwaukee program had 7,621 voucher students enrolled in 91 schools, which received about $38.9 million in publicly funded voucher payments. The Milwaukee school district in 1999–2000 had about 105,000 students enrolled in 165 schools, supported by $917 million in total revenues.

The Wisconsin Department of Public Instruction makes voucher payments in four installments during the school year. As in Ohio’s program, the voucher check is payable to the voucher family. The Department mails the checks to the schools, where the parent or guardian endorses them over to the school. If the school cannot obtain a signature because, for example, the student is no longer enrolled, it returns the check to the Department. The school keeps the lesser of the voucher amount or an amount equal to its per-pupil operating and debt service costs, as determined by an independent financial audit. Because a school’s actual costs may be less than the maximum allowable payment, and because of other factors that may require adjustments to payments—such as audited enrollment reports—the Department makes adjustments after the completion of the school year. Schools with lower costs must return excess payments, and schools that gain students receive an additional amount.

Wisconsin has required the Department of Public Instruction and the Legislative Audit Bureau to evaluate the voucher program. The Department contracted with an independent researcher to conduct an evaluation over the first 5 years of the program. The evaluation focused on students’ academic achievement at a time when student and private school participation was less than one-tenth of its 1999–2000 level and was limited to nonreligious schools. The evaluation was terminated at the end of school year 1994–95; data on students’ characteristics have not been collected for an evaluation since then, nor has student academic achievement been evaluated.

Three teams conducted research on Milwaukee’s voucher program during its early years: (1) the contract researchers, a group of investigators affiliated with the Department of Political Science and the Robert M.
La Follette Institute of Public Affairs, University of Wisconsin–Madison; (2) the Harvard team that also conducted research on the Cleveland program; and (3) a researcher affiliated with Princeton University. All three teams used the data set on Milwaukee voucher and public school students and parents created by the contract research team. All three teams also analyzed students’ academic achievement as measured by scores on the Iowa Test of Basic Skills administered by the Milwaukee school district in school years 1990–91 to 1993–94.

In addition to the Cleveland and Milwaukee voucher programs, state-funded voucher programs operate in Florida, Maine, and Vermont. Although these programs were not an integral part of this review, some information on them is provided to help put the Cleveland and Milwaukee programs in the more complete context of publicly funded voucher programs. The Florida voucher program began operating in the 1999–2000 school year. The program provides a private school choice to students whose public schools have been judged by the state as failing. The Maine and Vermont programs have been operating for more than 100 years and provide for the private, secular education of students whose public school districts do not have sufficient school capacity. More detail on the Florida, Maine, and Vermont voucher programs is provided in appendix II.

Although not a direct sponsor of voucher programs, the federal government in the past has sponsored research into alternative educational programs, including a voucher program operated in a public school system. The National Institute of Education sponsored research on an education voucher demonstration program begun during the 1972–73 school year in six schools of the Alum Rock Union Elementary School District of San Jose, California. In this demonstration, parents could choose from among these six public schools and receive a voucher equal to the cost of the child’s education at that school. The voucher amount was paid to the chosen school when the child enrolled. After a 5-year implementation period, an evaluation found little difference in the benefits to students of the voucher and regular school programs. The U.S. Department of Education is considering funding a grant to study Florida’s school accountability system, which may include the voucher program and its effect on improving school quality.

In accordance with state laws and regulations for student and school participation, both the Cleveland and Milwaukee voucher programs target students from low-income families residing within the city or school district. Income eligibility is determined by comparing applicant family income to federal poverty guidelines. Participating private schools must be located within the city or school district, comply with state requirements for private schools—such as those covering health and safety—and randomly select students when applications exceed available slots.

In the Cleveland voucher program, an eligible student must reside within the Cleveland school district. Generally, first-time program enrollees must be in kindergarten or grades one, two, or three. Priority for a voucher award is given to students from families whose income is less than 200 percent of federal poverty guidelines. However, the state determines the number of new vouchers that will be awarded each year within the limits of the annual program funds appropriated.
Any student who has received a voucher in the preceding year may continue to receive one until the student has completed grade eight. Assuming students’ residency requirements continue to be met, school admission priority is given to students who were enrolled in the school during the preceding year and, at the school’s discretion, to siblings of these students. A student’s family income is also the key criterion for determining the monetary size of the voucher award offered each student. Students who meet the low-income definition qualify for a voucher amount equal to 90 percent of school tuition, not to exceed $2,250. This voucher amount has not changed over time. Students not meeting the low-income definition qualify for 75 percent of the tuition amount, not to exceed $1,875.

For the Milwaukee voucher program, all students must reside within the city of Milwaukee and come from families whose incomes do not exceed 175 percent of the federal poverty guidelines. In addition, in the year prior to entering the program, the student must have been enrolled in a Milwaukee public school or in kindergarten, first, second, or third grade in a Milwaukee private school, or not enrolled in any school. The number of students allowed to participate in the voucher program cannot exceed 15 percent of the public school district’s enrollment. Voucher students may attend a voucher school at no charge for tuition up to an amount equal to the lesser of the school’s per-pupil operating and debt service costs or a state-determined maximum voucher amount. For 1999–2000, the Milwaukee maximum voucher amount was set at $5,106.

Cleveland private schools participating in the voucher program must be physically located within the Cleveland school district. However, the state also allows public schools located in any school district adjacent to the Cleveland school district to participate in the voucher program; no public schools have chosen to participate. Participating private schools must be registered with the Ohio State Superintendent of Public Instruction. Registered schools must adhere to a variety of requirements, such as (1) not discriminating on the basis of race, religion, or ethnic background; (2) agreeing not to charge tuition to low-income voucher families in excess of 10 percent of the maximum voucher amount or the established school tuition, if lower; and (3) permitting any such tuition over the voucher amount, at the discretion of the parent, to be satisfied by the low-income family’s provision of in-kind contributions or services. In addition, registered schools must generally meet all of the state of Ohio’s minimum standards for nonpublic schools chartered by the state board of education—essentially the same standards, with some modifications, as those for public schools. These standards provide guidance and direction on such things as a school’s educational goals, curriculum and instruction, teacher qualifications, instructional materials and equipment, and the quantity and quality of facilities. In addition, schools’ educational programs must be evaluated at least once every 5 years in accordance with professionally recognized criteria and procedures. For the 1999–2000 school year, 51 of the 52 private schools participating in the Cleveland voucher program were chartered by the state.

Random selection of voucher students can be implemented both by the state program office and by a school that enrolls students, as the sketch and the paragraphs that follow describe.
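As a rough illustration only, the sketch below implements one plausible version of the lottery rule common to both programs: randomly fill the available spaces, giving priority to low-income applicants, as Cleveland requires. The function, field names, and numbers are hypothetical; neither state prescribes a particular implementation in the material reviewed here.

    # Hypothetical sketch of a voucher lottery: when eligible applicants
    # outnumber available spaces, seats are filled at random, with
    # low-income applicants drawn first (as in Cleveland). All names and
    # numbers are illustrative assumptions.

    import random

    def select_students(applicants, spaces, seed=0):
        """Randomly fill `spaces` seats, drawing from low-income
        applicants first, then from any remaining applicants."""
        rng = random.Random(seed)
        low_income = [a for a in applicants if a["low_income"]]
        others = [a for a in applicants if not a["low_income"]]
        selected = rng.sample(low_income, min(spaces, len(low_income)))
        remaining = spaces - len(selected)
        if remaining > 0:
            selected += rng.sample(others, min(remaining, len(others)))
        return selected

    applicants = [{"name": f"student {i}", "low_income": i % 3 != 0}
                  for i in range(30)]
    print(len(select_students(applicants, spaces=10)))  # 10

In practice, each Milwaukee school files an annual written plan describing its own selection method, as noted below, so actual lotteries may differ from this sketch.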
For example, if the number of Cleveland vouchers to be awarded to first-time voucher applicants in any school year is less than the number of eligible applicants, the state program office uses a random selection process in which low-income applicants are given priority. The director of the Cleveland voucher program stated that random selection has generally been used at some point during each year’s selection process. However, if the number of available vouchers exceeds the number of low-income applicants, applicants above the low-income threshold may be awarded the remaining vouchers. Once first-time voucher applicants have been awarded a voucher, they seek enrollment in a voucher-participating school. After enrolling voucher students who attended a voucher school during the preceding year and siblings of those students, schools must admit low-income, first-time voucher students by random selection if potential enrollees exceed the number of spaces in the school. The school is to admit such students to kindergarten, first, second, and third grades up to 20 percent of the total number of students enrolled in the school during the preceding year for those grades. The extent to which schools have used random selection for their enrollments is unknown because the voucher program office has not monitored its use.

In the Milwaukee voucher program, participating schools must be located within the city of Milwaukee. They must also be private schools as defined in Wisconsin statute, which requires them to provide at least 875 hours of instruction each school year and to have a sequentially progressive curriculum of instruction in subjects such as mathematics and reading. Schools must also meet applicable health and safety codes; meet at least one of the state performance standards, such as for academic progress or attendance; and comply with federal antidiscrimination laws. Participating schools are subject to uniform financial accounting standards and must submit an annual independent financial audit to the state. Similar to the Cleveland program, a key provision is the use of a random selection process when the number of eligible applicants exceeds the number of spaces a school has designated for students. Each school has discretion in setting the number of voucher students it will accommodate in each grade and must specify this number at the time it notifies the state of its intent to participate in the program. Each school must also submit an annual written plan describing its intended method for randomly selecting voucher students when the number of applicant voucher students exceeds the number of available spaces allocated for them. As in Cleveland, the extent to which schools have used random selection is unknown because schools are not required to report on its use. The school must accept all eligible applicants if space is available. In addition, schools cannot select students on the basis of race, religion, gender, prior achievement, or prior behavioral records. Continuing students and their siblings are exempt from the random selection requirement.

Compared to public school students, voucher students in both Cleveland and Milwaukee came from families with lower incomes that were more likely to be headed by parents who were single or not married to the person they lived with. Voucher students’ parents were also more likely to have completed at least high school than were public school students’ parents.
Some research for Milwaukee also provided reliable information on students’ academic achievement prior to their participation in a voucher program. The contract research team for Wisconsin found that voucher schools in Milwaukee were attracting lower-performing public school students, as evidenced by their prior achievement test results. We used the student characteristic data presented by the Cleveland contract research team because their data were more reliable than those of other researchers. With the exception of achievement test score data, data on Milwaukee student characteristics collected by the contract research team were less reliable, but we corroborated some of the information on public school students. Data that addressed school characteristics showed that in Cleveland, voucher schools had less-experienced teachers and smaller class sizes than public schools. No comparable school data were collected by the contract research team for the Milwaukee program. Other data indicate that the majority of participating voucher schools have been religious in Cleveland since the program’s inception, whereas the majority have been religious in Milwaukee since the 1998–99 school year (when religious schools were admitted to the program).

Student characteristics most commonly reported by the contract researchers (excluding race) were family income, the family’s living arrangement, and parents’ education. In Cleveland, these data came from a 1999 survey, while in Milwaukee the data came from annual surveys conducted between 1990 and 1994. The Milwaukee surveys went to parents of all voucher student applicants each year but only to parents of a random sample of public school students—the comparison group—in 1991. Since Wisconsin ended such surveys in 1995, the number of voucher students and participating schools has grown significantly (roughly tenfold and ninefold, respectively), potentially changing the character of the program since it was evaluated in its earlier years.

In Cleveland, average family income for voucher students was $18,750, compared to $19,814 for families of students attending public schools. These average incomes fall within the definition of low income under the Cleveland voucher program for a family of two or more members. For example, under Cleveland’s criterion for a low-income family (less than 200 percent of the federal poverty guideline), a two-person family in 1999 would have qualified with an annual income under $22,120. The research team reported that 70 percent of voucher families were headed by a single mother, compared to 62–65 percent for public school families. Despite lower incomes and a higher rate of single-mother households for voucher students, voucher students’ parents had a higher level of education than did the parents of public school students. For example, 91.6 percent of voucher student mothers had completed high school, compared to 78.1 percent of mothers of public school students. In addition, 14.2 percent of voucher mothers had a 4-year postsecondary degree, compared to 7.8 percent of public school mothers (see table 1).

In Milwaukee, the average voucher family annual income was $11,340 in the first 5 years of the program. The comparison group, 1991 public school students, had a family income that averaged $22,000 in 1991. Average voucher family incomes were less than the program’s low-income requirement for a family of two or more members.
For example, under Milwaukee's criterion for a low-income family, 175 percent of the federal poverty guideline, a two-person family in 1990 would have qualified with an annual income under $14,735. Voucher families were also more likely to be headed by a nonmarried parent (76.5 percent) than public school families (49 percent). As shown in table 2, 84.9 percent of voucher students' mothers reported at least a high school degree or General Education Development (GED) diploma, compared to 75 percent of mothers of public school students. However, fewer voucher students' fathers completed high school or a GED (73.1 percent) than did public school students' fathers (76 percent). Research indicates that Milwaukee voucher students already had low academic achievement when they entered the voucher program. During the first 5 years of the program, voucher students had lower prior achievement test results—as measured by the Iowa Test of Basic Skills, a standardized math and reading test given in first through eighth grade—than the average public school student. Only the contract research team for the Cleveland voucher program compared private school characteristics to those of public schools for overall school enrollment, numbers of teachers employed, average number of students per classroom, and average years of teacher classroom experience. These data were obtained from teacher and principal surveys conducted during the 1997–98 and the 1998–99 school years, respectively. One of the contract researchers for the Milwaukee program conducted case studies from 1991 to 1993 as the basis for comments on staffing and curriculum, but the data were limited to voucher schools, thereby precluding comparison to public school characteristics. The Cleveland data showed that private voucher schools were smaller on average than public schools in terms of student enrollments and numbers of teachers employed. For example, voucher schools had average student enrollments of 201 to 300 students compared to 401 to 500 for public schools. The average class size was somewhat smaller for voucher schools at 20.6 students compared to 23.6 for public schools. The amount of classroom experience reported by public school teachers was significantly higher than that reported by their voucher school peers (14.2 years versus 8.6 years). (See table 3.) Some information about the racial and ethnic composition of Cleveland's and Milwaukee's public school and voucher student populations is available, but whether the composition has changed as a result of the voucher programs is unclear. During school year 1998–99, well over two-thirds of the students enrolled in Cleveland's and Milwaukee's voucher programs and public schools were minority group members. Most of the minority students were African-American. The 1998–99 school year data are reliable, but examining changes in racial and ethnic composition since the voucher programs' inception is difficult for a variety of reasons. For example, data available from existing research for the first 2 years of Cleveland's program were unreliable or did not fully represent the voucher and public school student population.
Further, studies that have analyzed changes in the racial and ethnic composition of voucher and public schools in both Cleveland and Milwaukee did not examine factors other than the voucher program, such as birth rates, that may have influenced the changes. Research on Cleveland's voucher program provides information on the racial and ethnic composition of Cleveland's public school and voucher student populations in school year 1998–99, the most recent year for which reliable information is available. As shown in table 4, of Cleveland students in kindergarten through fifth grade, most of the public school students and students enrolled in the voucher program in school year 1998–99 were minority group members. However, data available for the first 2 years of the Cleveland program that would indicate whether the racial and ethnic composition of public school and voucher students has changed over the course of the Cleveland voucher program were unreliable or did not fully represent the voucher and public student populations. For example, data collected by the contracted Cleveland evaluation for voucher students were limited to third graders in school year 1996–97 and fourth graders in school year 1997–98. For the same years, the data collected for public school students were limited to classmates of students who applied for tutoring grants in the third and fourth grade. Although the evaluator reported the overall proportion of voucher students who were minority group members, noting that 60 percent were African-American over the first 3 years, he did not report the composition of other minority groups. However, a survey conducted by another research team provided racial and ethnic composition data for voucher students in school year 1996–97. This team reported that of the voucher students, 61.3 percent were African-American, 4.4 percent were Hispanic, 1.4 percent were some other minority group, 4 percent were multiracial, and 28.9 percent were white. African-American students were the majority of both Milwaukee public school and voucher students, but the proportion of African-American students in both student bodies has changed over the course of the voucher program. Research on the Milwaukee voucher program provided reliable data about Milwaukee public school students during two time periods: the beginning of the program and school year 1998–99. Of public school students, about 71 percent were minority group members in school year 1990–91, the first year of the voucher program. African-American students represented 55 percent of the total. By school year 1998–99, minority group members represented almost 80 percent of public school students and African-American students represented 61.4 percent of the total. The detailed racial and ethnic composition of Milwaukee public school students for these years, including minority subgroup composition, is shown in table 5. Data for the intervening years were not reported in the voucher program research. Somewhat more information was available on the racial and ethnic composition of Milwaukee voucher students. Table 5 shows the average racial and ethnic composition of enrolled voucher students for school years 1990–91 to 1992–93, the composition in school year 1994–95—before the program was changed to permit religious school participation—and the composition in school year 1998–99, the first school year the court allowed voucher students to attend religious schools.
These data, and the data on Milwaukee public school students, describe the racial and ethnic composition of Milwaukee students at different stages of the voucher program and indicate that some changes in the composition have occurred. For example, of voucher students, 96.5 percent were minority group members in school year 1994–95. By school year 1998–99, after religious schools were admitted to the program, 79 percent of voucher students were minority group members. However, the data do not explain why the changes occurred. Table 5 does show that, of both public school and voucher students, African-Americans were the largest minority group in all time periods. None of the contract research teams' studies addressed changes in the racial and ethnic composition of voucher and public school students over the course of the voucher program. However, three other studies of the Cleveland and Milwaukee voucher programs have examined changes in the racial composition of students at voucher and public schools but have not developed complete explanations of the changes. They reached conclusions about the voucher programs' effect on racial composition within voucher schools without considering the full range of factors that could account for changes in the composition. These studies identified the proportion of white and minority students in public schools and in voucher programs in terms of a standard for racial isolation. A school was defined as racially isolated when 90 percent or more of the enrolled students were members of a minority group or white. One study of the Cleveland voucher program identified the proportion of students attending racially isolated public schools in Cleveland and its suburbs, and in private schools participating in the voucher program. For example, at the beginning of the 1999–2000 school year, two-fifths of Cleveland public school students attended schools that had fewer than 10 percent white students, and more than three-fifths of suburban public school students attended schools in which the student body was more than 90 percent white. When the researcher combined the public schools in these metropolitan areas, he found that 60.5 percent of the students attended schools that had either more than 90 percent or fewer than 10 percent white students. On the other hand, among Cleveland's voucher students, fewer than two-fifths attended a private school that had fewer than 10 percent white students and less than one-fifth attended a private school that had more than 90 percent white students. On the basis of such comparisons, the researcher concluded that school choice helps promote integration. However, the analysis did not identify or isolate factors other than the Cleveland voucher program, such as population groups' moves into and out of the city, their birth and death rates, and students' movement among schools and school systems, that contributed to the racial and ethnic composition of Cleveland's public and private schools. Two studies of the Milwaukee voucher program examined the proportion of public school students and voucher students who attended racially isolated schools and reached conclusions about the effect of the Milwaukee voucher program on voucher students' racial isolation.
One study examined the proportion of students attending racially isolated schools in the 1998–99 school year and found that approximately 20 percent more Milwaukee public school students attended racially isolated schools than did voucher students attending 26 Catholic elementary schools. The authors concluded that the Milwaukee voucher program appeared to have increased racial and ethnic enrollment balance for students participating in the program and for students at participating private schools. However, the 26 Catholic elementary schools examined in this study were not selected randomly and represented only 41 percent of the 63 religious schools participating in the voucher program in the 1998–99 school year. The second study, which examined the proportion of Milwaukee students who attended racially isolated schools in the 1999–2000 school year, found that 50.3 percent of Milwaukee public school students attended schools that were racially isolated. Among the 86 private schools participating in the voucher program that year, students attending religiously affiliated voucher schools had a different experience than students attending voucher schools with no religious affiliation. Among the 56 religiously affiliated voucher schools, 30.1 percent of the students attended racially isolated schools. Among the 30 private voucher schools with no religious affiliation, 83.1 percent of students attended racially isolated schools. The authors concluded that the addition of religiously affiliated schools had led to a lower level of racial isolation in private schools participating in the voucher program than in Milwaukee public schools. However, neither this study nor the first study of Milwaukee students' racial isolation ruled out routinely occurring demographic changes, such as births, deaths, moves into and out of the city, or students' movement among schools and school systems, as factors contributing to the proportion of racially isolated schools they identified. Ohio and Wisconsin use different methods to fund their school voucher programs, and both states spend less on each voucher student than on each public school student. Ohio funds the Cleveland voucher program with Disadvantaged Pupil Impact Aid moneys appropriated from the state's general revenue funds and reduces the Cleveland school district's state revenues by the amount of the voucher program appropriation. Wisconsin funds its voucher program with general state aid and reduces the Milwaukee school district's state revenues by half the amount of the program cost. The full impact of these funding methods on the public schools is unknown. In the 1999–2000 school year, Ohio spent $1,832 per voucher student compared to $4,910 for each student in the Cleveland school district. For the same year, Wisconsin spent $5,106 per voucher student compared to $6,011 for each student in the Milwaukee school district. Public school students in both Cleveland and Milwaukee receive additional support from local taxes and federal sources, which results in a larger difference in per-pupil amounts between voucher and public school students than the states' figures indicate. The Cleveland voucher program is funded from the Cleveland public school district's share of state Disadvantaged Pupil Impact Aid, based on an annual appropriation determined by the Ohio legislature. For the 1999–2000 school year, the legislature appropriated $11.2 million for the Cleveland voucher program.
Based on this appropriation, the Cleveland school district's $80.5 million in Disadvantaged Pupil Impact Aid was reduced by $11.2 million to $69.3 million. In the context of the school district's revenues from all sources for 1999–2000, the $11.2 million amounted to nearly 1.6 percent of the district's $712.1 million total. Actual voucher program expenditures were $6.2 million—only 55.4 percent of what was appropriated. Voucher program expenditures are charged to a designated state account, and the Cleveland school district does not monitor the program's expenditures. School district officials stated that the district has not obtained additional property tax levies for the purpose of recovering state revenue deductions from the district's Disadvantaged Pupil Impact Aid funds. According to these officials, the last major school levy for funding school operations was passed in 1996 and provided $67 million to the district annually over a period of 5 years. The state of Wisconsin funds the Milwaukee voucher program from a separate general-purpose revenue appropriation. The state deducts the amount of the appropriation from general school aid payments to all 426 school districts statewide. Once the state determines the total amount needed to fund the voucher program for the year, it reduces the aid payable to the Milwaukee public school district by half that amount. The other half of program funding is drawn from aid authorized for the remaining 425 school districts in proportion to the total state aid to which each district is entitled. The school districts have the option of increasing property tax levies to offset reductions in general state aid related to the voucher program. According to a Milwaukee school district official, the district has generally levied taxes to the maximum extent possible under state school revenue limits. For the 1999–2000 school year, the Milwaukee school district absorbed half of the voucher program's $38.9 million cost. That amount, $19.45 million, represented about 2.1 percent of the district's $917 million in total revenues. Because no definitive studies exist, state and school district officials could not say to what extent the voucher programs negatively or positively affected the Cleveland or Milwaukee public school districts. In Cleveland, with the exception of a public accounting firm's management study touching on this issue, state and school district officials were unaware of any studies addressing the financial impact of the voucher program. Official and unofficial studies of the Milwaukee voucher program have described possible effects ranging from slightly negative to indeterminate. According to some of these studies, changing the underlying assumptions could modify the results. Such assumptions include, for example, estimates of the number of voucher students who were formerly enrolled in the public school districts and where they might have been enrolled in the absence of a voucher program. In addition, the amount of funding that the Milwaukee public school district has received from state revenues and local property tax levies has been affected by policy decisions that have not necessarily been driven by the voucher programs. For example, the Milwaukee public school district has experienced an increase in total state aid, largely because of the state's policy of funding two-thirds of certain school costs beginning in the 1996–97 school year. In Cleveland, local school revenues are not based on enrollments.
Consequently, when students leave public schools to attend private schools, the public school district retains the same amount of local revenue and thus has more local funds to spend per remaining pupil. However, in Milwaukee, the amount that may be contributed from the local tax levy is determined by the difference between the school revenue cap and state school aid, which are based on the school district's enrollment. Ohio provides less state revenue for each voucher student than for each public school student in the Cleveland school district. For example, in the 1999–2000 school year, the state spent $1,832 per voucher student on voucher payments and program administration, compared to $4,910 for each Cleveland school district student. The $4,910 per public school student paid by the state does not include the per-student amounts of $3,212 in local taxes and $745 in federal funds that were received by the Cleveland school district for the same year. Two factors may help to explain why the amount spent by the state for voucher students was only about 37 percent of the amount the state spent for public school students in Cleveland. First, the private schools participating in the program generally have low tuition. For example, the estimated average voucher amount for low-income students at 33 Catholic schools was $1,592 in 1999–2000, well below the maximum voucher amount of $2,250. Several representatives from participating religious schools stated that their schools' missions were to provide a private-school education to children in their communities, many of whom come from low-income families. The schools purposely subsidize the cost of educating all enrolled students to achieve this mission. Representatives from nonreligious schools with higher tuition (about $4,000) stated that they could afford to accommodate just a few voucher students because they must find corporate or other sponsors to subsidize the difference between the maximum voucher amount allowed and the tuition charged. Second, the maximum voucher amount ($2,250 for low-income students) established by the Ohio legislature at the beginning of the voucher program appears to have limited the program primarily to low-tuition religious schools. The U.S. Court of Appeals for the Sixth Circuit stated in December 2000 that, practically speaking, the tuition restrictions mandated by the statute limit the ability of nonreligious private schools to participate in the program, since religious schools often have lower overhead costs, supplemental income from private donations, and consequently lower tuition needs. In the 1999–2000 school year, 90 percent of the participating schools were religious and 97 percent of the voucher students attended these schools. Wisconsin also provides less state revenue for voucher students than for public school students in the Milwaukee school district. For 1999–2000, the estimated number of voucher students was 7,621; as a result, the total budgeted amount for the cost of voucher payments alone was about $38.9 million, or $5,106 per voucher student. By comparison, this per-student voucher amount is about 85 percent of the $6,011 per student in state aid received by the Milwaukee school district. The $6,011 per public school student paid by the state does not include the per-student amounts of $1,573 in local taxes and $1,073 in federal funds that were received by the Milwaukee school district for the 1999–2000 school year.
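The per-pupil comparisons above can be consolidated into a short worked calculation. This sketch simply restates the report's 1999–2000 figures; the variable names are ours, and the all-source totals assume the state, local, and federal per-pupil amounts cited above are additive.

# 1999-2000 per-pupil amounts cited above, by source of funds.
cleveland_public = {"state": 4910, "local": 3212, "federal": 745}
milwaukee_public = {"state": 6011, "local": 1573, "federal": 1073}

cleveland_voucher = 1832   # state spending per voucher student
milwaukee_voucher = 5106   # state spending per voucher student
milwaukee_total_budget = milwaukee_voucher * 7621   # about $38.9 million

# Voucher spending as a share of state per-pupil spending.
print(f"Cleveland: {cleveland_voucher / cleveland_public['state']:.0%}")  # ~37%
print(f"Milwaukee: {milwaukee_voucher / milwaukee_public['state']:.0%}")  # ~85%

# Adding local and federal support widens the gap further.
for city, public, voucher in [
    ("Cleveland", cleveland_public, cleveland_voucher),
    ("Milwaukee", milwaukee_public, milwaukee_voucher),
]:
    total = sum(public.values())
    print(f"{city}: voucher ${voucher:,} vs. all-source public ${total:,}")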
The Wisconsin Department of Public Instruction, which administers the voucher program, establishes its budget for the voucher program in two steps. First, it computes a set amount per student: the amount paid to voucher schools in the previous school year plus the per-student revenue increase provided to public school districts under current-year revenue limits. For example, the 1999–2000 per-student payment of $5,106 was based on the 1998–99 per-student payment of $4,894 plus $212, the statewide per-student increase. The second step is estimating the number of students who will participate in the voucher program. This estimate comes from participating schools' annual estimates of the number of voucher students they intend to admit in the next school year. The Department adjusts this estimate based on its experience with the accuracy of schools' projections in prior years. The contract research teams for Cleveland and Milwaukee found little or no statistically significant difference between voucher students' and public school students' achievement test scores, but other investigators found that voucher students did better in some subject areas tested. None of the findings can be considered definitive because the researchers obtained different results when they used different methods to compensate for weaknesses in the data. Most of the studies satisfied basic criteria for research quality, such as using study designs and data analysis methods that isolate the program's effect, but suffered from missing test score data and low survey response rates. For example, scores from incompatible tests limited the contracted Cleveland evaluation in the first year. In Milwaukee, the contracted evaluations had low response rates for survey data and missing test scores due to school policy changes. In addition, a substantial proportion of students left the voucher program or left the Milwaukee public school system when they were not selected for a voucher. The loss of these students made it difficult to design a rigorous evaluation. The researchers' different findings likely were due to the different study designs, comparison groups, and statistical tests they used to address these limitations. The contract research team found no statistically significant difference in the academic achievement test scores of Cleveland voucher and public school students at the end of the first year of the program, school year 1996–97, when they controlled for differences in background—but not classroom—characteristics that might affect their performance. At the end of the second year, school year 1997–98, the evaluator found that voucher students' scores in language achievement—one of six subject areas tested—were higher than those of public school students when previous academic achievement, background, and classroom characteristics were controlled. In contrast, the test scores of the voucher students in the two additional private schools, which the evaluator was able to include in the second-year analyses, were lower than those of public school students in every subject area, and the differences were statistically significant. The Harvard team's reanalysis of the contract research team's data for the first year of the program did not control completely for influences on student achievement other than the voucher program. The team used a statistical analysis method that allowed them to isolate the effect of the voucher program, but they did not include all potential influences in the analysis.
Two Harvard team analyses of Hope school students' achievement test scores in the first 2 years of the voucher program also identified changes in the scores. However, neither of these two studies ruled out any student or classroom characteristics that may have influenced the direction of those changes. Because these three studies did not meet our criteria for analyses of the effect of the voucher program, their findings are not reported here. The findings and the methodological strengths and weaknesses of the contract research team's and the Harvard team's research are described in greater detail in appendix IV. Milwaukee's contract research team concluded that there was no consistent evidence that Milwaukee's voucher program had positively or negatively affected student achievement. The team used three comparison groups and multivariate analysis methods that controlled for prior student achievement and student and family characteristics to isolate the program's effect. They adjusted the sample survey data on students' and families' background characteristics for low survey response rates, and estimated test scores in the fourth year of the program—when test score data were missing for about two-thirds of the sample—to improve the reliability of their estimates. They also examined whether the substantial proportion of students who left the voucher program or who left the Milwaukee public school system when they were not selected for a voucher was affecting their analysis of achievement of students who remained in the voucher program and in Milwaukee public schools. They concluded that losing these students made it difficult to be certain about the differences between students' scores. The Harvard team found improvements in voucher students' language and math scores. This team was the first to use a study design and multivariate analysis procedure that reproduced the Milwaukee voucher program assignment process, assuming that it was random, and to use nonselected voucher applicants as a comparison group. Under this study design, the Harvard team isolated the effect of the voucher program by controlling for factors related to students' assignment to schools. However, this design was unable to account precisely for departures from random assignment to the voucher program. In addition, the team did not fully test its assumption of random assignment by analyzing whether applicants not selected for the voucher program who left the public school system differed from the nonselected applicants who remained. To identify improvements in students' scores, the team used a statistical test that assumed any change in voucher students' achievement would be more favorable than the change in the comparison group's (in effect, a one-tailed test) and, for some results, used confidence levels that were less stringent than conventional standards. Moreover, the analyses of students who left the voucher program and the Milwaukee public school system that the contract research team conducted, and additional analyses included in the Princeton researcher's evaluation, cast doubt on whether the students remaining in the study samples over the 4 years being analyzed could be considered randomly assigned. These findings also call into question the Harvard team's findings of improvements in students' test scores. The Princeton researcher found positive effects of the Milwaukee voucher program on students' achievement in math but not in reading.
Like the Harvard team's research, the Princeton researcher's study design focused on voucher program applicants, but it did not assume that voucher recipients had been randomly selected for the voucher program. The researcher used a multivariate analysis procedure that estimated differences in achievement between voucher students and students in two comparison groups after controlling for all observed and unobserved fixed student characteristics, including background characteristics and prior achievement. She used both nonselected voucher applicants and a random sample of Milwaukee public school students as comparison groups. The Princeton researcher estimated missing test scores, allowed for the dependence of later scores on earlier ones, and analyzed whether the proportion of students who left the voucher program or who left the public school system because they were not selected for a voucher affected her estimates of student achievement. Her tests showed that there were systematic differences among the students in her analysis groups but that her statistical procedures had controlled for these differences to the extent possible with statistical methods. Her findings were consistent using either comparison group. The student achievement research we reviewed for the Milwaukee voucher program was reported in four major studies. The findings and the methodological strengths and weaknesses of these studies are described in greater detail in appendix V. From a national policy perspective, school choice has become a frequent topic of discussion as a way of delivering elementary and secondary education to the nation's youth and giving parents more control over their children's education. Although voucher programs represent a small segment of school choice options, interest in the academic achievement of voucher program students is likely to continue, and new evaluations of voucher program initiatives may be undertaken in the future. The studies we reviewed offer some useful lessons on the difficulties in achieving definitive assessments of voucher programs and of other alternative education programs targeted to low-income or disadvantaged students. First, reliance on administrative data for achievement test scores and student background information can conserve time and resources in data collection where school records are complete and the data system is automated, as in Milwaukee. However, even when complete and automated records are available, reliance on scores from school-administered tests can result in data gaps if the school district changes its testing policy, as the Milwaukee system did. On the other hand, when the evaluation team selects and administers the achievement tests, as in Cleveland, the cooperation of all schools in the study population must be negotiated. The separate analysis of Cleveland's Hope school results in the first-year evaluation, which the contract research team felt was required because the schools had not yet agreed to be tested, limited the applicability of the first-year findings. Second, the Milwaukee team's experience with survey data collection from the program's low-income families confirms that special data collection and follow-up procedures are needed to achieve survey response rates that meet minimum data quality standards when low-income households are members of the study population. For example, although the Milwaukee team sent its survey twice to voucher and public school parents, response rates were very low—from 30 to 50 percent.
Additional strategies, such as offering respondents a cash incentive and conducting several rounds of follow-up by telephone with nonrespondents, may increase response rates further. Finally, vital information about voucher program performance may be lost if adequate funding is not provided for program evaluations. For example, Wisconsin has not funded voucher student academic achievement evaluations since 1995, thereby losing data on program performance during the years when the program had grown the most. Because such school choice initiatives are of national interest, it would be useful to have more definitive research about their effects. Through its role as a sponsor of research on education programs, the Department of Education can encourage state departments of instruction and others interested in the outcomes of voucher programs to conduct additional research of sufficient quality to yield conclusive findings on emerging programs. We obtained comments on a draft of this report from Education, the Ohio Department of Education, the Wisconsin Department of Public Instruction, and the Wisconsin Legislative Audit Bureau. These entities provided several technical clarifications, which we incorporated as appropriate. In addition, the Legislative Audit Bureau questioned our description of the use of local tax levies to offset the cost of the voucher program in Milwaukee. We obtained and added clarifying information from the Milwaukee school district. We also obtained comments from the researchers whose work we assessed. Both Education and the Harvard researchers commented that we did not mention research studies on privately funded voucher programs. We anticipate initiating a review of these programs shortly. The Harvard researchers also commented that we did not mention other research on the Cleveland and Milwaukee voucher programs covering subjects such as parental satisfaction and the effect of voucher schools' competition on public schools. While we recognize that such research exists, we focused on the topics of greatest concern to our requester. Most of the researchers also provided technical comments that we incorporated as appropriate. The contracted researchers for Cleveland and Milwaukee generally agreed with the findings in the report. The Princeton researcher generally agreed with the findings but questioned our summary of the differences among the studies' findings. However, her analysis of differences focused only on the differences between her work and that of each of the other researchers, whereas our assessment included the comparisons she made as well as a comparison of the differences between the Milwaukee contract researchers' and the Harvard researchers' findings. She also pointed out that a published version of the working paper we originally analyzed better met our criteria for inclusion in the report. We reviewed and included information from this article. The Harvard researchers disagreed with our assessment of their studies and provided additional information to support their findings about the Cleveland and Milwaukee programs. After reviewing this information, we determined that the additional material they provided did not support their objections to our assessment. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter.
At that time we will send copies of this report to the Secretary of Education, appropriate congressional committees, representatives of the Ohio and Wisconsin Departments of Education, and other interested parties. If you or your staff have any questions or wish to discuss this material further, please call me or Diana Pietrowiak at (202) 512-7215. Much of the public debate about the Cleveland and Milwaukee voucher programs has concerned research findings and the quality of the research available. In developing our report, we addressed this aspect of the debate in two ways. First, in our analyses of student characteristics and racial and ethnic composition, we used background data collected for studies of the voucher programs. These studies included assessments of the racial and ethnic composition of public school and voucher student populations and studies designed to evaluate Cleveland and Milwaukee students' academic achievement. Second, we included an assessment of the research on Cleveland and Milwaukee students' academic achievement, a major outcome of interest for the voucher programs. We selected studies for these analyses that met two or more of the following criteria: The study was performed under contract to the state in which the voucher program was implemented. The study was published in a peer-reviewed journal. The study was issued under the auspices of a research institution that reviews work prior to release. The study employed quantitative data analysis to examine student academic achievement. We assessed both the quality of the data we used in our analyses of students' characteristics and racial and ethnic background and the methodology of the student academic achievement studies. Studies of the Cleveland and Milwaukee programs have used both administrative data collected and maintained by the school districts and voucher program offices and surveys conducted for the studies. Most of the studies described the completeness of the administrative data and the elements it contained, and the methods used to conduct the surveys. The criteria we used for assessing the data's quality are shown in table 6. While we recognized that the administrative data were not collected to meet research standards and that surveys of low-income families like those participating in the voucher programs often obtain low response rates, we paid particular attention to the administrative data's completeness and the surveys' response rates. When 30 percent or more of the administrative or survey data were missing, we looked for analyses showing no important difference between individuals represented in the data and those who were not included. If such an analysis had not been conducted, we did not select the data for our analyses, except for the analysis of Milwaukee voucher and public school student characteristics, because other data sources were limited. The research on voucher students' academic achievement included both evaluations of the voucher program's impact on students' performance and analyses and papers discussing methodological issues involved in conducting the research. We reviewed the methodological papers for contextual understanding, but our assessment of the research focused on the impact evaluations. Our assessment included both the quality of the data used in the evaluation and the methodological quality of the research. The criteria we used in the assessment are shown in table 7.
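As a minimal sketch of the screening logic just described, the following encodes the two-or-more selection rule and the 30-percent missing-data rule. The study records and field names are hypothetical illustrations; the actual criteria appear in tables 6 and 7.

def meets_selection_rule(study):
    # A study qualifies if it meets two or more of the four criteria
    # listed above (hypothetical boolean fields).
    criteria = [
        study["state_contract"],
        study["peer_reviewed"],
        study["research_institution_review"],
        study["quantitative_achievement_analysis"],
    ]
    return sum(criteria) >= 2

def data_usable(fraction_missing, nonresponse_analysis_ok=False):
    # Data with 30 percent or more missing are used only if an analysis
    # showed no important difference between individuals represented
    # in the data and those who were not included.
    return fraction_missing < 0.30 or nonresponse_analysis_ok

example_study = {
    "state_contract": True,
    "peer_reviewed": False,
    "research_institution_review": True,
    "quantitative_achievement_analysis": True,
}
print(meets_selection_rule(example_study))  # True (meets 3 of 4 criteria)
print(data_usable(0.45))                    # False without a nonresponse analysis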
An impact evaluation determines a program's effect on its participants by isolating the program's contribution from the effects of other influences that could have affected participant outcomes. To isolate the program's influence, an impact evaluation studies two groups: those receiving program services and a similar group not receiving program services. Researchers compare the relevant outcomes of these two groups, such as students' achievement test scores, to determine the program's effect. The criteria for study design in table 7 apply to the two types of impact evaluation used to analyze the effect of an educational program on its participants: an experimental design and a quasiexperimental design. The two designs differ primarily in the way that the comparison groups are developed. In an experimental design, the comparison group is referred to as the control group. This group is composed of students randomly selected from possible program participants, such as applicants to a voucher program. Because control group members are selected randomly, researchers can compare outcomes to determine the program's effect without using statistical controls for other factors that could have influenced the outcomes. In a quasiexperimental design, the comparison group is composed of individuals who share characteristics with program participants but who have not been randomly selected and who may or may not have sought program services. For example, applicants to a voucher program who did not receive a voucher might serve as a comparison group, because they share with voucher recipients an interest in alternative educational services. With this design, statistical controls, such as those provided by a multivariate analysis procedure, are needed to isolate the program's effect from other factors that could influence outcomes. The same data quality criteria we discussed above were used for assessing the administrative and survey data used in the impact evaluations. The criteria for data analysis in table 7 refer to the need to control for factors other than the program when program participants and comparison group members are not randomly selected. They also encompass additional analyses that may be needed when the groups were not randomly selected or when missing data may affect the reliability of the estimates of the program's effect. We obtained the data for our analyses of eligibility criteria and the funding of the voucher programs from other sources. The information on the eligibility criteria for schools and students participating in the Cleveland Scholarship and Tutoring Program and the Milwaukee Parental Choice Program came from documents issued by the program offices. We also reviewed relevant state laws and regulations. To describe the funding of the voucher programs and compare the amounts spent on voucher and public school students, we used information provided by the state departments of education and the program offices, as well as information found in program evaluations. We also examined relevant state and school district budget and financial reports. We conducted site visits to Ohio and Wisconsin and interviewed officials of the program offices, school districts, state departments of education, and several private schools to obtain their views on the financial impact of the voucher programs.
We also interviewed the contract researchers and key researchers from Harvard University's Program on Education Policy and Governance and from Princeton University. While an official evaluation of student academic achievement in the Cleveland voucher program continues, analyses of student academic achievement in the Milwaukee program are based on Milwaukee data collected before 1995, when the legislature was still funding data collection. Since student characteristic and achievement data have not been collected for the past 7 years, conclusions reached by both the contract researchers and other researchers may not be applicable to the current voucher program, which has grown tenfold in the interim.

Florida: Opportunity Scholarship Program

Objective: This program is intended to support the state constitutional requirement that the state provide students with the opportunity to obtain a high-quality education. Therefore, the program provides state tuition grants to permit students attending a failing (that is, "F"-rated) public school to attend an eligible higher-performing public school or a private school of choice.

Student Eligibility: Any student who spent the last year at a Florida public school that received an "F" rating from the state for the second time in 4 years qualifies for an Opportunity Scholarship. Also eligible are students who did not attend an "F"-rated school in the previous year but are now assigned to such a school.

Private School Eligibility: A private school must be located in Florida and may be sectarian or nonsectarian. Other requirements for private school participation include: (1) demonstrating fiscal soundness; (2) accepting scholarship students on an entirely random and religious-neutral basis without regard to the student's past academic history; (3) being subject to the instruction, curriculum, and attendance criteria adopted by an appropriate nonpublic school accrediting body; (4) employing or contracting with teachers who hold a baccalaureate or higher degree, or have at least 3 years of teaching experience in public or private schools; and (5) accepting as full tuition and fees the amount provided by the state for each student.

Maximum Student Participation: Participation is limited to the total number of students attending or assigned to qualifying "F"-rated schools for a given school year. For the 1999–2000 school year, two such schools with an approximate total population of 900 students were designated as failing. There were no new scholarships for the 2000–01 school year because, as of July 2000, no public schools had received a grade of "F" for 2 of 4 years.

Maximum Voucher Amount: The maximum voucher amount is the lesser of (1) a calculated amount equivalent to what would have been provided for the student in the district school to which he or she was assigned or (2) the amount of the private school's tuition and fees. Eligible private school fees may include book fees, lab fees, and other fees related to instruction, including transportation. The voucher maximum for 1999–2000, based on the calculated costs for the two "F" schools, was $3,353 per student for kindergarten through third grade and $3,178 per student in fourth through eighth grades.

1999–2000 Enrollment in Private Schools: The first program year was the 1999–2000 school year. In that year, 143 out of about 900 students chose not to attend their assigned, failing public school. Fifty-eight enrolled in participating private schools and 85 enrolled in other, higher-performing public schools. Total Florida enrollments for students in kindergarten through 12th grade totaled 2,381,860 for public schools and 288,248 for private schools in 1999–2000.
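The lesser-of rule that sets Florida's maximum voucher amount can be sketched as follows, treating the published 1999–2000 maxima as the calculated district amounts. The function and parameter names are ours, for illustration only.

# Calculated district amounts for the two "F" schools, 1999-2000.
CALCULATED_AMOUNT = {"K-3": 3353, "4-8": 3178}

def florida_voucher(grade_band, tuition_and_fees):
    # Award the lesser of the calculated district amount and the
    # private school's tuition and fees (per the rule above).
    return min(CALCULATED_AMOUNT[grade_band], tuition_and_fees)

print(florida_voucher("K-3", 2800))  # 2800: tuition is below the cap
print(florida_voucher("4-8", 4500))  # 3178: capped at the calculated amount

Maine and Vermont, described below, apply analogous lesser-of caps to the tuition their districts may pay.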
1999–2000 Participating Private Schools: Five schools: four religious and one nonreligious.

Status of Legal Challenges: In October 2000, the First District Court of Appeal for the State of Florida ruled that the Opportunity Scholarship Program was consistent with Article IX, Section 1 of the Florida Constitution. That provision requires the state to maintain a uniform system of free public schools. The appellate court ruling reversed a trial court decision holding that the Opportunity Scholarship Program violated Article IX, Section 1. In April 2001, the Florida Supreme Court declined to review the appellate court's ruling. The appellate court also deferred consideration of whether the scholarship program statute was unconstitutional under the religion clauses in the Florida and U.S. constitutions, concluding that the trial court must first consider these allegations.

Maine: Education Tuition Program

Objective: Districts that do not have their own schools must provide tuition to resident families for use in other schools. Students may attend a private school approved for tuition purposes, a public school in an adjoining district that accepts tuition students, or a school approved for tuition purposes in another state or country upon permission of officials of the receiving school. The "tuitioning" system has existed in some form for over 200 years but has excluded religious schools from receiving state funds since 1981. It especially benefits students living in the rural parts of the state.

Student Eligibility: Children of parents residing in a district that does not maintain elementary or secondary schools.

Private School Eligibility: To receive public funds for tuition purposes, a private school must be nonsectarian and meet other requirements for reporting, auditing, and student assessment.

Maximum Student Participation: The number of students receiving tuition to attend other schools depends upon the number of students in the districts without their own schools. The district pays tuition directly to a public school or to a private school that has accepted the child, has been selected by the child's parents, and has been approved for tuition purposes.

Maximum Voucher Amount: The tuition paid to a private elementary school cannot exceed the average per-student cost in all public elementary schools in the state for the previous year as computed by the State Education Commissioner. For private secondary schools, the tuition paid by the district cannot exceed the lesser of (1) the school's allowable expenditures divided by the number of students at the school, adjusted by certain factors, or (2) the adjusted state average public secondary per-student cost. In the 2000–2001 school year, the maximum tuition rate for publicly funded elementary students attending any private school was $4,596. For secondary students attending private schools, the amount was $5,732.

1999–2000 Enrollment in Private Schools: The state's total public school enrollment was 214,985. The number of these public school students who attended private schools with public funding was 5,614. All but 214 of the voucher students attended secondary schools. The number of privately funded students attending private schools was 10,394.
Status of Legal Challenges: On April 23, 1999, the Maine Supreme Court affirmed the judgment of the Superior Court (Bagley v. Raymond School Department) that the exclusion of religious schools from receiving state funds under Maine's education tuition program does not violate any section of the U.S. or Maine Constitution. On October 12, 1999, the U.S. Supreme Court declined to review the ruling, allowing the lower court's decision to stand.

Vermont: Tuitioning Program

Objective: A school district that does not operate its own school or jointly operate a school with another district or districts (a union school) must provide for the education of its students by paying tuition to another Vermont public school district, an out-of-state public school district, or an approved private school. Vermont has had an educational choice system since 1869 but prohibited the inclusion of religiously affiliated schools in 1961.

Student Eligibility: Students in grades kindergarten through 12 from qualified districts.

Private School Eligibility: A private school may operate and provide elementary or secondary education if it obtains state approval. It must show that it has the resources required to meet its stated objectives, including financial capacity, qualified faculty, and physical facilities and special services that comply with state and federal regulations.

Maximum Student Participation: Each school district decides how it will educate its students and thus determines the number that will attend private school. A school district that does not maintain an elementary school may pay tuition for elementary pupils at approved private nonresidential elementary schools. If it does not maintain a high school, it may pay tuition for its pupils to an approved private high school.

Maximum Voucher Amount: The tuition paid to an approved private elementary school must not exceed the lesser of (1) the average tuition of Vermont union elementary schools or (2) the tuition charged by the public elementary school attended by the greatest number of the district's pupils. For students in grades 7 and 8, the district must not pay an amount that exceeds the average tuition of Vermont union high schools for students in grades 7 and 8. For students in grades 9–12, the maximum is the average tuition of union high schools for students in grades 9–12. For the 1999–2000 school year, the allowable tuition was $6,257 for elementary pupils; $6,514 for pupils in grades 7 and 8; and $7,306 for pupils in grades 9–12.

1999–2000 Enrollment in Private Schools: Total public school enrollment was 104,559 students. A Vermont Department of Education official estimated that 2,500 publicly funded students attended five private academies (designated high schools used by districts without public high schools) and that another 900 publicly funded students were enrolled in other private schools and programs.

Status of Legal Challenges: On June 11, 1999, the Vermont Supreme Court affirmed the judgment of the Superior Court (Chittenden Town School District v. Vermont Department of Education) that providing state tuition aid for children at religious schools would violate a provision of the state constitution barring compelled support for religion. On December 13, 1999, the U.S. Supreme Court declined to hear an appeal.

The Cleveland Municipal School District provided detailed data on public school students' racial and ethnic composition for school years 1996–97 and 1999–2000. The Cleveland Scholarship and Tutoring Program Office provided similar data for 1999–2000 voucher students (see table 8).
Second and third studies: did not control for any possible reasons for voucher students' achievement other than the voucher program.

Students in third-grade public school classes containing two or more students who had applied for or were participating in the tutoring assistance component of the Cleveland Scholarship and Tutoring Program were selected as the comparison group for the study. These classes were selected for comparison with the voucher students because they included public school students whose parents, like the voucher students' parents, were motivated to apply for a supplementary educational program. Use of this comparison group was intended to take account of nonrandom selection for the voucher program. The Hope schools were established especially for the voucher program. These two schools did not permit the contract researchers to test the achievement of their students in the first voucher program year (1996–97).

In addition to those named above, Sara Edmondson, Robert Miller, and Jay Smale made key contributions to this report. Valerie Caracelli, Arthur James, Jr., Arthur Kendall, and Douglas Sloane provided important consultation on methodological issues for the academic achievement analysis.

Metcalf, Kim K., William Boone, Frances K. Stage, and others. A Comparative Evaluation of the Cleveland Scholarship and Tutoring Grant Program, Year One: 1996–97. Bloomington: School of Education, Indiana University, 1998.

Metcalf, Kim K. Evaluation of the Cleveland Scholarship and Tutoring Grant Program, 1996–1999. Bloomington: The Indiana Center for Evaluation, Indiana University, 1999.

Metcalf, Kim K., Patricia Muller, William Boone, and others. Evaluation of the Cleveland Scholarship Program: Second Year Report (1997–98). Bloomington: The Indiana Center for Evaluation, Indiana University, 1998.

Wisconsin Legislative Audit Bureau. An Evaluation: Milwaukee Parental Choice Program (00-2). Madison, Wisc.: 2000.

———. An Evaluation: Milwaukee Parental Choice Program (95-3). Madison, Wisc.: 1995.

Witte, John F. Achievement Effects of the Milwaukee Voucher Program. Paper presented at the 1997 American Economic Association Annual Meeting, New Orleans, La., Jan. 4–6, 1997.

———, Troy D. Sterr, and Christopher A. Thorn. Fifth Year Report, Milwaukee Parental Choice Program. University of Wisconsin–Madison, 1995.

———, and Christopher A. Thorn. Fourth Year Report, The Milwaukee Parental Choice Program. University of Wisconsin–Madison, 1994.

———. The Market Approach to Education: An Analysis of America's First Voucher Program. Princeton, N.J.: Princeton University Press, 2000.

———. "The Milwaukee Voucher Experiment," Educational Evaluation and Policy Analysis, Vol. 20, No. 4, Winter 1998, pp. 229–251.

Fuller, Howard L., and George A. Mitchell. The Fiscal Impact of School Choice on the Milwaukee Public Schools. Institute for the Transformation of Learning, Marquette University, March 1999.

Greene, Jay P., Paul E. Peterson, and Jiangtao Du. Effectiveness of School Choice: The Milwaukee Experiment. Cambridge, Mass.: Harvard University, 1997.

———. "School Choice in Milwaukee: A Randomized Experiment," Learning From School Choice. Washington, D.C.: Brookings Institution Press, 1998, pp. 335–356.

Greene, Jay P., and Paul E. Peterson. Methodological Issues in Evaluation Research: The Milwaukee School Choice Plan. Cambridge, Mass.: Harvard University, Aug. 29, 1996.
Rouse, Cecilia E. Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program (Working Paper 5964). Cambridge, Mass.: National Bureau of Economic Research, 1997.

———. "Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program." Quarterly Journal of Economics (May 1998), pp. 553–602.

———. "Schools and Student Achievement: More Evidence from the Milwaukee Parental Choice Program." FRBNY Economic Policy Review (March 1998), pp. 61–76.

———. School Reform in the 21st Century: A Look at the Effect of Class Size and School Vouchers on the Academic Achievement of Minority Students (Working Paper 440). Cambridge, Mass.: Princeton University and National Bureau of Economic Research, Jan. 31, 2000.

Steuerle, C. Eugene, and others, eds. Vouchers and the Provision of Public Services. Washington, D.C.: Brookings Institution Press, 2000.
This report reviews the Cleveland and Milwaukee school voucher programs, which provide money for low-income families to send their children to private schools. Both programs require participating private schools to be located within the city or the city's school district and to adhere to state standards for private schools, such as those covering health and safety. In both Cleveland and Milwaukee, voucher students were more likely than public school students to come from poorer families that were headed by a single parent. Some information about the racial and ethnic composition of Cleveland's and Milwaukee's public school and voucher students is available, but it is unclear whether the composition changed as a result of the voucher programs. Ohio and Wisconsin use different methods to provide state funds for the voucher programs and spend less on voucher students than on public school students. The Cleveland voucher program is funded with Disadvantaged Pupil Impact Aid funds up to a limit set by the Ohio Legislature. Wisconsin funds the Milwaukee voucher program with general state aid funds on the basis of the number of students participating in the program in a given year. The contracted evaluations of voucher students' academic achievement in Cleveland and Milwaukee found little or no difference in voucher and public school students' performance, but other studies found that voucher students did better in some of the subject areas tested.
The Navy delivered its first DDG 51 destroyer in April 1991, and 62 ships currently operate in the fleet. Two shipbuilders (Bath Iron Works Corporation in Bath, Maine, and Huntington Ingalls Industries in Pascagoula, Mississippi) build DDG 51 destroyers, with four separate configurations in the class (Flights I, II, IIA, and III) that reflect upgrades over the last 25 years to address the growing and changing capability demands on the Navy's surface combatants. Table 1 provides details on each DDG 51 configuration. In 2007, the Navy determined, based on its Maritime and Missile Defense of Joint Forces (MAMDJF) Analysis of Alternatives, that a larger, newly designed surface combatant ship with a very large radar was needed to address the most stressing ballistic and cruise missile threats. In response to the MAMDJF Analysis of Alternatives results, the Navy initiated development of a new cruiser, known as CG(X), and AMDR (SPY-6), an advanced, scalable radar with a physical size that can be changed as needed to respond to future threats. Subsequently, in 2008, the Navy expressed an increasing need for greater integrated air and missile defense (IAMD) capability. Noting that DDG 51 ships demonstrated better performance than DDG 1000 ships for ballistic missile defense, area air defense, and some types of anti-submarine warfare, the Navy decided to restart production of DDG 51 Flight IIA ships to counter the increasing ballistic missile threats. In January 2009, USD (AT&L) issued a memorandum stating that the Navy's plan to buy additional DDG 51 Flight IIA ships would be followed by a procurement of either DDG 1000- or DDG 51-based destroyers that could carry the SPY-6 radar. To fulfill this direction, the Navy conducted a limited study in 2009, referred to as the Radar/Hull Study, which examined existing DDG 51 and DDG 1000 designs with several different radar concepts to determine which pairing would best address IAMD needs at lower cost than the planned CG(X). Following the Radar/Hull Study, the Navy validated the need for a larger, newly designed surface combatant with a very large radar to counter the most stressing threats. However, based on the analysis of the Radar/Hull Study, the Navy decided to pursue a new DDG 51 configuration instead—now referred to as DDG 51 Flight III—that would include a new advanced, but smaller, radar and an upgraded Aegis combat system. The Navy also cancelled the CG(X) program. We found in 2012 that the Navy's decision to pursue new DDG 51 destroyers equipped with a new air and missile defense radar represented a substantial commitment that was made without a solid analysis, and that the planned oversight and visibility into the program were insufficient given the level of investment and potential risks. We also found that the Navy's plan to buy the Flight III lead ship as part of an existing multiyear procurement of Flight IIA ships was not sound due to a lack of design and cost knowledge about the Flight III ships. Multiyear contracting is a special contracting method used to acquire known requirements for up to 5 years if, among other things, a product's design is stable and the technical risk is not excessive.
Based on our findings, we recommended that (1) the Navy complete a thorough analysis of alternatives for a future surface combatant program; (2) DOD increase the level of oversight for DDG 51 Flight III by changing the program designation to acquisition category (ACAT) 1D; and (3) the planned fiscal year 2013 DDG 51 multiyear procurement request not include a Flight III ship. Subsequent to our recommendations, the DDG 51 program was elevated to ACAT 1D status—with Flight III remaining an upgrade within the overall program—but a new analysis of alternatives was not undertaken and the fiscal year 2013 DDG 51 multiyear procurement was awarded as planned, with the Navy intending for the lead Flight III ship to be part of this procurement. The two existing DDG 51-class shipbuilders—Bath Iron Works and Huntington Ingalls Industries—will complete design changes to the existing DDG 51 hull form to support Flight III configuration needs and will construct the ships. In February 2015, the Navy modified its existing design contracts with the shipbuilders to begin Flight III detail design work, with initial construction planned for 2018. The detail design process, as shown in figure 1, begins at a high level for the overall ship. As the needs for the ship are better defined, the granularity of the design for individual units, or zones, of the ship comes into focus. The shipbuilders use computer-aided design product models to design the ship. The product models generate a detail design, which allows engineers at the shipbuilders to visualize spaces and test the design. This validates elements of the design prior to construction, thereby avoiding potentially costly rework. During product modeling, the designers finalize the interfaces between zones, complete the design for ship-wide cables and pipes, and add all detail necessary to support ship construction. The Navy reviews the progress of each zone when it is 50 and 90 percent complete with product modeling. At these critical reviews, the Navy and other stakeholders assess the zone design progress and provide input to ensure that the design meets specifications. Once a zone is 50 percent complete, the shipbuilders undertake more detailed design tasks to finalize and incorporate all outstanding data from the key systems. When a zone is 90 percent complete, it is essentially considered finished with detail design, and the shipbuilder will subsequently convert the design into drawings to support construction. The Navy's AMDR program, which began development in 2013, will provide key IAMD capability for Flight III. The program includes engineering and manufacturing development of a new S-band radar—known as SPY-6—that is being executed by the Raytheon Company to provide volume search capability for air and ballistic missile defense. The radar is expected to have a sensitivity for long-range detection and engagement of advanced threats of at least "SPY+15," that is, at least 15 decibels more sensitive than the current SPY-1D(V) radar used on existing DDG 51 ships. In addition, the contractor is developing a radar suite controller to provide radar resource management and coordination, and to interface with the Aegis combat system upgrades for Flight III. The Navy is leveraging an existing X-band radar—SPQ-9B—to provide horizon and surface search capabilities, as well as navigation and periscope detection and discrimination functions, for the majority of currently planned Flight III ships.
The Navy intends to develop a new X-band radar with improved capabilities that will be installed on later Flight III ships. Figure 2 depicts a notional employment of the S-band and X-band radars on a Flight III ship. Along with AMDR development, the Navy is working with the prime contractor for the Aegis combat system—Lockheed Martin—to upgrade the system for Flight III ships. Aegis, which integrates ship sensors and weapons systems to engage anti-ship missile threats, has been providing the Navy with some form of surface defense capability for decades. Over that time, the system has been regularly upgraded, with Aegis Advanced Capability Builds (ACB) providing new, expanded capabilities. The most recently completed build—ACB 12—provides initial integrated air and missile defense capability for DDG 51 Flight IIA ships. ACB 16 is currently in development and is expected to provide, among other things, new electronic warfare capability and expanded missile capability options. Flight III will incorporate ACB 20, a combat system upgrade that includes support for the SPY-6 radar. As with any DOD weapon system, test and evaluation activities are an integral part of developing and producing DDG 51 Flight III, AMDR, and Aegis systems. Test activities provide knowledge of system capabilities and limitations as the systems mature and are eventually delivered to the warfighter. Developmental testing is intended to provide feedback on the progress of a system's design process and its combat capability as it advances toward initial production or deployment. For Flight III systems, developmental testing occurs at contractor or government land-based test sites and will eventually expand to include testing of systems on board the ship after installation—known as shipboard testing. Operational test and evaluation is intended to assess a weapon system's capability in a realistic environment when employed in combat conditions against simulated enemies. During this testing, the ship is exposed to as many operational scenarios as practical to reveal the weapon system's capability under stress. The Navy's operational test agency plans and executes operational testing for DDG 51 and AMDR, as well as other selected programs, such as Aegis. The Director, Operational Test and Evaluation (DOT&E) within the Office of the Secretary of Defense coordinates, monitors, and reviews operational test and evaluation for major defense acquisition programs. DOT&E's statutory responsibilities include (1) approving test and evaluation master plans and operational test plans for systems subject to its oversight, (2) analyzing the results of the operational test and evaluation conducted for such systems to determine whether the testing performed was adequate and whether the results confirm operational effectiveness and suitability for combat, and (3) reporting the evaluation results of operational testing to the Secretary of Defense and congressional defense committees. The Navy's AMDR program is progressing largely as planned toward final developmental testing of the SPY-6 radar. An extensive technology development phase for the new radar helped increase the maturity of critical technologies, thereby reducing risk prior to beginning the program. Barring any setbacks during final developmental testing, the Navy plans to make an initial production decision for the SPY-6 radar in September 2017 and deliver the first radar to the shipyard for installation on the lead Flight III ship in 2020.
In contrast, the Navy is still defining requirements for the Aegis combat system ACB 20 upgrades for Flight III and will not begin development until 2018. The Aegis development schedule appears ambitious when compared to previous combat system iterations and presents risks for shipboard testing. Further, integrating the Aegis combat system with the SPY-6 radar requires a significant amount of development and testing, relying on concurrent land-based and shipboard testing, in a relatively short period of time to meet the Navy's target date of 2023 for initial operational capability of Flight III ships. Although the Navy is making a concerted effort to reduce risks associated with integrating the radar and combat system, the benefits of these efforts will be largely unknown until the start of combat system development and testing in 2018. The Navy must also complete an integrated test strategy and receive approval from DOT&E. After a lengthy debate with DOT&E over the need for an unmanned self-defense test ship equipped with Aegis and SPY-6 in initial operational test and evaluation plans, the Navy has begun to budget for the test ship at the direction of the Secretary of Defense. Figure 3 provides an overview of planned and completed activities for SPY-6, the Aegis upgrade, and Flight III, including the anticipated time frame for installing the radar and combat system upgrade onto the ship and conducting operational testing. Prior to the start of the AMDR program, a 2-year technology development phase from 2010 to 2012 helped mature critical technologies required for the SPY-6 radar and reduce technical risk for the program. Table 2 describes the AMDR program's four critical technologies for SPY-6 and their development status. In 2012, we found that two of the technologies—digital beamforming and transmit/receive modules—had challenges to overcome in order to achieve the required capabilities. At that time, the ability to use gallium nitride-based semiconductors, which provide higher power and efficiency than previously used materials, was untested on a radar the size of SPY-6. The Navy has since demonstrated each of the critical technologies using prototypes during the technology development phase, including a full-scale, single-face SPY-6 radar array engineering development model to demonstrate radar capability. According to Raytheon, performance of this SPY-6 engineering development model has exceeded requirements, demonstrating SPY+17 decibels (greater than 50 times the sensitivity of the SPY-1D(V) radar currently being fielded on DDG 51 Flight IIA ships). More recently, the prime contractor experienced some challenges with digital beamforming and distributed receiver/exciter technologies that the Navy, the Defense Contract Management Agency, and Raytheon indicated have been, or are being, addressed. As reported by the Defense Contract Management Agency, Raytheon and its subcontractors significantly underestimated the design, development, and test efforts required for these technologies to meet their performance requirements, leading to some cost growth but no delay to the start of the final developmental test phase in summer 2016. Final developmental testing will include testing the SPY-6 engineering development model in a maritime environment for the first time at the Navy's land-based Advanced Radar Detection Laboratory in Hawaii, including live tracking of air, surface, and ballistic missile targets.
These tests will help validate previous modeling and simulation tests conducted at the prime contractor's facilities. Final developmental testing is expected to be completed in time to inform an AMDR program low-rate initial production decision by USD (AT&L) in September 2017. The first SPY-6 radar system is scheduled to be delivered to the shipyard constructing the lead Flight III ship in 2020; this radar will be used in shipboard testing with Aegis ACB 20 in the lead-up to Flight III initial operational capability, which is planned for 2023. The Aegis combat system upgrade—ACB 20—planned for DDG 51 Flight III will require significant changes to the version of Aegis currently fielded on DDG 51 Flight IIA ships in order to introduce new and expanded IAMD capabilities. Requirements for ACB 20 are not expected to be fully defined until the program completes its System Requirements Review planned for August 2016, but the Navy has established plans to field ACB 20 capabilities in two phases. The first phase—known as Phase 0—is intended to meet baseline anti-air, anti-surface, and anti-submarine warfare capability requirements for Flight III initial operational capability. The Navy has indicated that the first three Flight III ships will include Phase 0 capabilities. Improved ballistic missile defense and electronic warfare systems are expected to be introduced on Flight III ships as part of ACB 20 Phase 1 beginning in fiscal year 2025. The most extensive ACB 20 changes involve the replacement of the legacy SPY-1D(V) radar with the new SPY-6 radar. According to Lockheed Martin, ACB 20 must include an expanded and modified interface between the SPY-6 radar and the Aegis combat system in order to handle the significant increase in data generated by the new radar. In general, the interface changes are intended to ensure data are packaged to take advantage of the radar's capabilities and effectively provide operators with information to support IAMD needs. Table 3 outlines the Aegis combat system changes that are needed to interface with the SPY-6 radar. We found in 2012 that the Navy eliminated integration of SPY-6 from ACB 16 plans, effectively deferring integration activities to the ACB 20 upgrade. We concluded that this plan would leave little margin for addressing any problems with the radar's ability to communicate with the combat system before Flight III's initial operational capability in 2023. Since that time, the Navy and Lockheed Martin have established an ambitious schedule for ACB 20 development. The schedule is optimistic, particularly due to ACB 20's interdependencies with the ACB 16 capabilities that are still being developed. For example, recent changes to ACB 16—which will provide the base capability for ACB 20—may affect the Navy's ability to achieve the ACB 20 schedule. Specifically, the Navy added requirements to ACB 16 that resulted in a 184 percent increase in new and modified software code. As a result, ACB 16 software development will take longer than planned and will overlap with ACB 20 development. This introduces the potential for software deficiencies discovered during ACB 16 development to negatively affect the ACB 20 development schedule. In addition, our comparison of the ACB 20 schedule to the schedule for the most recently fielded Aegis build—ACB 12—indicates that ACB 20 has a significantly shorter development timeline.
ACB 12, which included a significant capability upgrade to enable limited IAMD for the first time on DDG 51-class ships, required substantially more development time than the Navy has planned for ACB 20. We found in 2012 that the Navy experienced several setbacks during ACB 12 development and testing, including challenges with coordinating combat system and ballistic missile defense development and testing, as well as an underestimation of the time and effort required to develop and integrate the signal processor with the radar, which led to schedule delays and cost growth. Figure 4 compares the planned timeline from the System Requirements Review—a review to ensure readiness to proceed into initial system development—to certification of the build's capabilities for ACB 12, ACB 16, and ACB 20 Phase 0. To execute this aggressive ACB 20 development plan, the Navy plans to test the fully developed ACB 20 Phase 0 for the first time after testing begins on the lead Flight III ship at the end of 2020, adding risk to the program. This decision stems, in part, from the Navy's late response to a July 2014 requirements directive to redefine ACB 20 core capabilities, contributing to a 17-month delay for the ACB 20 requirements review. Conducting initial tests of the fully developed ACB 20 Phase 0 after ship installation introduces concurrency between the shipboard and land-based test schedules, reducing opportunities to make less costly fixes to defects discovered during land-based testing prior to installation of the system on the ship. As we have previously found with other shipbuilding programs, concurrency introduces the potential for additional unanticipated costs if concurrent land-based testing identifies needed modifications to shipboard systems after installation. The Navy's planned approach for testing the full ACB 20 Phase 0 capability is a departure from its approach for ACB 12, which demonstrated its final capability 4 months before installation on its first DDG 51-class ship. ACB 12 also benefited from installation and extensive testing on an in-service DDG 51-class ship for an additional 30 months prior to installation on a new-construction ship in September 2015, an approach that is not planned for ACB 20. Navy and prime contractor officials told us that several actions have been taken that they expect will ensure ACB 20 development can be executed under its notably compressed timeline.
For example, changes have been made to the Aegis development approach in an effort to correct the underlying causes of some past issues or to take advantage of process efficiencies, such as: scheduling ACB 20 software development to begin in January 2018, a few months after the initial production decision for SPY-6, to allow time for key radar technologies to mature and the radar design to stabilize, minimizing the risk of beginning Aegis combat system development with insufficient radar knowledge; coordinating with the Missile Defense Agency, including a single program manager at the prime contractor to oversee the Navy and Missile Defense Agency efforts, along with joint reviews and an integrated test strategy between the two organizations for Flight III activities; and using some Agile software development methods, an iterative approach in which a series of smaller software increments are developed and delivered in shorter time frames, with the goal of improving quality, generating earlier insight into development progress and any potential issues, and reducing defects and rework. Navy officials emphasized that ACB 20's schedule was not compressed based on any projected efficiencies from Agile use, though Agile may help reduce defect discovery once the Aegis combat system is installed on the lead Flight III ship. In addition to these programmatic changes, the Navy and the prime contractors for SPY-6 and ACB 20 are making a concerted effort to reduce integration risk through the use of radar and combat system prototypes. Because the full ACB 20 Phase 0 capability and its integration with SPY-6 will not be tested until after installation on the ship, the Navy and the prime contractor are counting on land-based prototype testing to reduce risk. As previously shown in figure 3, ACB 20 land-based testing is scheduled to begin in 2019 and will be used to verify combat system performance. This testing will be done prior to certification of the full ACB 20 Phase 0 system in February 2023, as will integrated testing of limited ACB 20 software, combat system hardware, and the SPY-6 engineering development model at multiple land-based test sites, along with modeling and simulation tests. The Aegis combat system prototype being developed for use with the SPY-6 engineering development model is expected to reduce risk by enabling testing that can identify interface needs and provide developmental test results in support of the SPY-6 initial production decision. According to Lockheed Martin representatives, the Aegis prototype will model approximately 44 percent of the eventual interface between ACB 20 and SPY-6, including the most complex elements described earlier in table 3. However, the full extent to which software used with the combat system prototype can be reused for ACB 20 will not be known until integration testing of the full ACB 20 Phase 0 and SPY-6 is conducted in 2021. Recent actions taken by DOD indicate that the department supports the use of an Aegis- and SPY-6-equipped unmanned self-defense test ship. Operational test and evaluation plans for DDG 51 Flight III, SPY-6, and ACB 20 have been a source of disagreement between the Navy and DOT&E since at least early 2013. Specifically, the Navy and DOT&E disagree about the need for an unmanned self-defense test ship equipped with SPY-6 and the Aegis combat system to demonstrate Flight III self-defense capabilities through operational testing.
DOT&E has asserted that an upgraded unmanned self-defense test ship is needed to help demonstrate the end-to-end performance of Flight III systems—from initial SPY-6 radar detection of a target, such as an anti-ship cruise missile, through target interception by an Evolved Sea Sparrow Missile launched from the ship. As statutorily required, DOT&E assessed the Navy's proposed test and evaluation master plans for the AMDR and Aegis programs in May 2013 and August 2013, respectively, determining that neither plan was adequate for operational testing because they did not provide for operationally realistic testing of Flight III's self-defense capability. DOT&E continues to assert this position. Specifically, DOT&E has stated that the Navy's plan to use a manned ship for testing cannot realistically demonstrate the performance of the integrated Flight III combat system against anti-ship cruise missile stream raids—a series of targets approaching the ship from the same bearing—in the self-defense zone because of range safety restrictions. According to DOT&E, it is the practice for all other warships to use an unmanned self-defense test ship for their operational test programs. Use of an unmanned self-defense test ship equipped with SPY-6 and the Aegis combat system would allow a safety offset much closer to the ship (less than 400 feet) and would permit the targets to conduct realistic maneuvers, ensuring that operationally realistic stream raid effects are present and making the test adequate. The Navy has asserted that the end-to-end testing scenarios identified by DOT&E for operationally testing Flight III self-defense capabilities can be accomplished on a manned Flight III ship. Navy officials also stated that a robust test approach that includes testing at land-based test sites, on the currently configured self-defense test ship, and on a manned Flight III ship can provide sufficient information to support their test needs and accredit the modeling and simulation used in testing. Furthermore, the Navy's position is that using an unmanned self-defense test ship equipped with Aegis and SPY-6 for Flight III operational testing would only minimally increase knowledge of operational performance beyond what can be achieved without its use. Navy officials emphasized that (1) land-based testing is expected to provide nearly all data required to accredit the Aegis modeling and simulation capability, (2) the Evolved Sea Sparrow Missile Block 2—a key element of Flight III's self-defense capabilities—will be tested on DDG 51 Flight IIA ships using ACB 16, and (3) live-fire end-to-end testing of Flight III systems—within the bounds of range safety restrictions—will be completed using a manned ship to provide data on operational capability. Several factors have contributed to the different conclusions reached by the Navy and DOT&E on the need for, and value of using, a test ship equipped with the Flight III combat system to meet operational testing needs. Table 4 explains the key factors that we identified as having contributed to the different assessments of the need for a test ship. Recent actions taken by DOD indicate that the department is moving in the direction of supporting the use of the unmanned self-defense test ship.
First, a December 2014 DOD resource management decision supporting the President's budget for fiscal year 2016 directed the Office of Cost Assessment and Program Evaluation (CAPE) within the Office of the Secretary of Defense to conduct a study of test ship options that would satisfy DDG 51 Flight III self-defense operational testing, including an assessment of the risks and benefits, cost estimates for each option, and a recommended course of action. The study, completed by CAPE in 2015, found that the lowest-risk option was to equip the Navy's existing self-defense test ship, the USS Paul F. Foster, with Flight III combat systems—at an estimated cost of about $350 million—to support operational test and evaluation. The study recommended that the Navy and DOT&E collaborate to develop an integrated test plan to determine the number of air targets and test missiles needed to support developmental testing and operational testing for key Flight III-related self-defense systems. Second, following the study of self-defense test ship options, a February 2016 DOD resource management decision supporting the President's budget for fiscal year 2017 directed the Navy to adjust funds within existing resources—$175 million total across fiscal years 2019 through 2021—to procure long-lead items in support of an Aegis- and SPY-6-equipped self-defense test ship. The Navy's subsequent fiscal year 2017 President's budget submission includes funding in this amount for equipment associated with the self-defense test ship starting in 2019. Third, as recommended in the 2015 self-defense test ship study, the Secretary of Defense directed the Navy to work with DOT&E to develop an integrated test strategy for the Flight III, AMDR, Aegis Modernization, and Evolved Sea Sparrow Missile Block 2 programs, and to document that strategy in a test and evaluation master plan or plans by July 29, 2016. Officials from the Navy and DOT&E both questioned whether an integrated test strategy could be completed as directed, given the significant differences between the two sides. However, DOT&E officials stated in April 2016 that the Navy's integrated warfare systems program executive office had begun working on an integrated test strategy to examine the ship's anti-ship cruise missile self-defense and integrated air and missile defense capabilities in an effort to meet the intent of the July 2016 deadline. Although it appears progress is being made in support of the Secretary's direction, the Navy has not yet fully responded to it. If the integrated test strategy being developed by the Navy does not include the use of an unmanned self-defense test ship as directed in DOD's recent resource management decision, then DOT&E will not approve the Navy's operational test plan. If the plan is not approved, a resolution is not likely to be achieved until fiscal year 2019, when the Navy would need to begin buying SPY-6 and Aegis-related long-lead items for the unmanned self-defense test ship to maintain its plan for Flight III initial operational test and evaluation and initial operational capability in 2023. Integrating the SPY-6 radar with the DDG 51 hull form will require significant changes to the existing ship's hull, mechanical, and electrical systems. The Navy plans to limit the use of new technologies and introduce some Flight III equipment on Flight IIA ships first in order to reduce the program's technical risk.
Flight III design is complex, and the tightly packed existing design of the DDG 51-class ship presents additional challenges for Flight III ship design and construction. The Navy recognizes the need to mature and complete all phases of Flight III design before construction begins in spring 2018. However, its Flight III design schedule is ambitious—considering the amount and complexity of the remaining design work—and the shipbuilders will have a significant amount of design work to complete in a relatively short amount of time. Flight III configuration changes are driven primarily by the need to integrate the SPY-6 radar with the existing DDG 51-class hull form. These changes are complex because the SPY-6 radar and its supporting equipment must be integrated into the densely constructed DDG 51-class ship. According to the Navy, DDG 51-class ships were selected as the platform for the SPY-6 radar because the hull form involves relatively low overall risk: it is already integrated with the Aegis weapon system architecture and is a proven ship design. However, integrating the SPY-6 radar will require extensive changes to the ship's hull, mechanical, and electrical systems. Figure 5 illustrates key changes that will be introduced to the ship as part of the Flight III configuration in order to accommodate SPY-6. For example, the ship's deckhouse must be modified because the SPY-6 radar is considerably deeper and heavier than the legacy DDG 51 radar. In particular, the positioning of the SPY-6 radar arrays high on the deckhouse has a significant impact on the ship's estimated weight and center of gravity. As part of the preliminary design, the Navy introduced plans to widen the ship's stern by up to 4 feet on each side, allowing the ship to carry more weight to accommodate SPY-6 and restore available weight service life allowance. Flight III's hull will be reinforced with steel to lower the ship's center of gravity and counteract the radar's additional weight. Significant upgrades to the ship's electric plant and air conditioning systems also are required to support SPY-6 radar operations. These upgrades involve the use of generators, power conversion modules, transformers, power distribution equipment, and high-efficiency air conditioning units that are new to the DDG 51 class of ships. The Navy also plans to introduce other Flight III changes that are not related to SPY-6 integration. Table 5 describes changes related to SPY-6 and other upgrades planned for Flight III. The Navy has taken several steps to reduce technical risk in the Flight III design, including limiting the use of new technologies. New electric and air conditioning plant technologies planned for Flight III ships are in use on ships that are currently—or soon will be—in the fleet. For example, Flight III's higher-capacity generators and power conversion modules are derived from generators being used on the Zumwalt-class destroyers. Additionally, the Navy plans to retain a substantial amount of the existing electric plant design, changing the design only where necessary to provide increased power to operate the SPY-6 radar. The Navy noted that this approach should minimize design impacts and testing requirements for the electric plant. Navy officials acknowledged, however, that the new Flight III systems are evolutions of existing technologies and may require some modifications.
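The stability concern noted above can be illustrated with the standard naval architecture moment balance for the vertical center of gravity. This is a textbook relationship offered for exposition only; the report does not provide the Flight III weight data needed to compute actual values.

\[
KG_{\text{new}} \;=\; \frac{W \cdot KG_{\text{old}} \;+\; \sum_i w_i \, kg_i}{W \;+\; \sum_i w_i}
\]

Here, W is the ship's original displacement, KG_old is its original vertical center of gravity, and each w_i is an added weight located at height kg_i above the keel. Because the SPY-6 arrays are heavy and sit high on the deckhouse (kg_i well above KG_old), they raise KG_new and reduce stability margin; steel added low in the hull (kg_i below KG_old) pulls the center of gravity back down. This is why the design pairs the radar installation with hull reinforcement and a widened stern, which adds buoyancy and helps restore weight service life allowance.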
To further reduce risk, the Navy plans to introduce some technologies required for Flight III on earlier Flight IIA ships. For example, an Aegis hardware upgrade and a new power conversion system will be introduced on DDG 51 Flight IIA ships beginning with DDG 121, which is scheduled for delivery in July 2020. In addition, the density of the existing DDG 51-class ship design presents challenges for Flight III ship design and construction. As we have previously found with shipbuilding, density—the extent to which ships have equipment, piping, and other hardware tightly packed within ship spaces—affects design complexity and cost. Density can complicate the design of the ship, as equipment will need to be rearranged to fit new items. Construction costs can increase because of the inefficiencies caused by working in spaces that are difficult to access. Although in this case the two shipbuilders have extensive experience in building the DDG 51 hull form, Navy officials acknowledged the significant effort required to integrate the SPY-6 radar on the ship and the space and power constraints it poses for adding new systems. Table 6 describes how ship density contributes to challenges in designing and constructing Flight III ships. The Flight III upgrade requires extensive changes to the DDG 51 design. Navy officials estimate that approximately 45 percent of the ship's design drawings will need to be changed. The shipbuilders estimate that 72 of 90 ship design zones will also require revisions. At the same time, however, Navy program officials have stated that the design work associated with the Flight III upgrade is no more complicated than previous DDG 51 upgrades. They noted, for example, that the number of drawing changes for Flight III is fewer than for the Flight IIA upgrade. While this is true based on current estimates, the Flight III estimate is a projection and may increase once the final design is complete. Moreover, the Flight III design is projected to require nearly 1 million design hours to incorporate the changes, an increase the Navy attributed, at least in part, to additional quality assurance and design reviews to accommodate stricter government oversight than with previous upgrades. The projected design hours for Flight III are notably more than what was required for previous upgrades, as seen in figure 6. The Navy recognizes the need to mature and complete all phases of Flight III design before construction begins, currently projected for spring 2018. Our prior work on shipbuilding has identified this practice as a key factor in better ensuring that ships are delivered on time, within planned costs, and with planned capabilities. The Navy has completed most of the two initial phases of ship design (preliminary and contract design), as shown in figure 7. These design efforts are aimed at the production of technical data packages, preliminary drawings, and ship specifications needed for detail design and construction. In February 2015, the Navy modified its existing design contracts with the two DDG 51-class shipbuilders—Bath Iron Works and Huntington Ingalls Industries—to begin Flight III detail design work, which includes three-dimensional product modeling of the ship's individual zones, also referred to as zone design. The shipbuilders began Flight III zone design activities for their respective ship designs in October 2015, using a computer-aided design product model to make changes to the zones that make up the design of the ship.
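To make the zone design milestones described earlier concrete, the following minimal Python sketch models zones advancing through the 50 and 90 percent product-modeling reviews. It is purely illustrative: the class names, progress values, and review logic are assumptions for exposition, not the shipbuilders' actual computer-aided design or scheduling tooling.

import itertools

REVIEW_GATES = (50, 90)  # Navy review points, in percent complete


class Zone:
    """One ship design zone tracked through product modeling."""

    def __init__(self, name):
        self.name = name
        self.percent_complete = 0
        self.reviews_held = []

    def update_progress(self, percent):
        """Record modeling progress; return any review gates newly reached."""
        self.percent_complete = percent
        newly_due = [gate for gate in REVIEW_GATES
                     if percent >= gate and gate not in self.reviews_held]
        self.reviews_held.extend(newly_due)
        return newly_due


# Per the report, 72 of the 90 zones require Flight III design changes.
zones = [Zone("zone-%02d" % i) for i in range(1, 73)]

# Hypothetical snapshot: three zones have passed their 50 percent review,
# mirroring the status one shipbuilder reported as of April 2016.
for zone in itertools.islice(zones, 3):
    zone.update_progress(55)

past_50 = sum(1 for z in zones if 50 in z.reviews_held)
print("zones past 50 percent review: %d of %d" % (past_50, len(zones)))

In this toy model, a zone "finishes" detail design once it clears the 90 percent gate, after which (per the report) the shipbuilder converts the design into construction drawings.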
Shortly before beginning zone design, the shipbuilders revised their design approach in an effort to better manage and complete the activities. The Navy had originally planned to split zone design between the two shipbuilders, requiring both shipbuilders to ensure that their designs were compatible with one another. According to the shipbuilders, a change was made so that each shipbuilder will now complete its own design for the ships built at its respective yard. The Navy's design schedule is ambitious, considering the amount and complexity of the remaining design work. For example, as of April 2016, one shipbuilder stated that it had completed product modeling of 7 percent of the ship's zones and had held a 50 percent milestone review for three of the 72 zones that require design changes as part of the Flight III upgrade. The lead shipbuilder plans to complete about 25 percent of zone design by October 2016. Flight III detail design work is planned to be completed by December 2017, as directed by the Navy. The shipbuilders are scheduled to complete detail design about 3 months earlier than they originally planned, which provides more time between design completion and the start of ship construction. However, under the current schedule, the shipbuilders will have a significant amount of design work to complete in a relatively short amount of time. Moreover, one shipbuilder will not begin zone design for the five zones requiring the most significant changes until December 2016, leaving less time to discover and address any design problems in some of the most complex areas of the ship. The shipbuilders will also face challenges as they enter the more difficult product modeling phases, when the details of the design must be finalized to start ship construction. If the Navy purchases the first Flight III ship in fiscal year 2016 as planned by issuing a series of modifications to its existing construction contracts, it will do so without sufficient acquisition and design knowledge. As of May 2016, the Navy was still in the process of updating key acquisition documents with Flight III information, including a revised cost estimate, and had not released a request for proposals for construction of the lead Flight III ship design. In addition, because the Navy will have a significant amount of Flight III zone design work remaining at the end of fiscal year 2016, any procurement decisions will not be informed by a complete understanding of the Flight III design. Also, while the Navy did not update its anticipated cost savings under the current (fiscal year 2013-2017) multiyear procurement to reflect the addition of Flight III ships, doing so would provide Congress a more accurate savings estimate, as well as improved information to support future multiyear procurement savings estimates. Further, in February 2017, when the Navy plans to request authority from Congress to award new multiyear procurement contracts for 10 Flight III ships in fiscal years 2018-2022, it will not be positioned to meet the criteria necessary to support the request. For example, the Navy would be required to preliminarily find by February 2017 that the Flight III design is stable, although the shipbuilders will not complete detail design until December 2017.
Finally, while the Flight III upgrade is being managed as a continuation of the longstanding DDG 51 program, the Navy is completing many of the activities that are required for new acquisition programs, including the establishment of a new acquisition program baseline. However, information on the Flight III upgrade is not planned to be presented to Congress in Selected Acquisition Reports as a separate major subprogram of the DDG 51 class of ships, which will reduce decision makers' insight into its cost and schedule performance. To construct the lead Flight III ship and the next two follow-on ships, the Navy intends to modify its existing DDG 51 multiyear procurement contracts with Bath Iron Works and Huntington Ingalls Industries. In 2013, the Navy awarded multiyear procurement contracts to these shipbuilders to construct a total of 10 DDG 51 ships from fiscal years 2013 through 2017. The Navy plans to modify these existing contracts, which are currently priced for Flight IIA ship construction, through a series of 17 design changes—also called engineering change proposals—to introduce the Flight III upgrades on up to three ships. A new target cost will be established for each Flight III ship to reflect the yet-to-be-determined cost of the design changes. Figure 8 illustrates how the Navy plans to modify the existing multiyear procurement contracts to convert Flight IIA ships to Flight III ships. The Navy plans to issue the necessary modifications for the lead Flight III ship in fiscal year 2016 and to do the same for the two additional Flight III ships in fiscal year 2017. The procurement approach for the lead ship recently changed due to additional funding provided by Congress. Specifically, Congress provided the Navy with an additional $1 billion in construction funding for fiscal year 2016 to procure an additional DDG 51 ship. Of note, however, the $1 billion is not sufficient to procure a complete ship in either the Flight IIA or Flight III configuration. The Chief of Naval Operations included $433 million on the Navy's fiscal year 2017 unfunded priorities list provided to Congress to fully fund the additional ship. If the funding is approved, the total number of new ships would increase from 10 to 11 over the multiyear contract period. The Navy originally planned to introduce the lead Flight III ship as one of the two ships procured under the multiyear contracts in fiscal year 2016. However, according to the Navy, the additional ship in fiscal year 2016 is now anticipated to become the lead Flight III ship, although the acquisition strategy has not been determined. A procurement contract for this additional ship has not been awarded, and it is not currently included in the existing multiyear procurement contracts. Table 7 provides details of how the additional fiscal year 2016 funds affect the Navy's multiyear procurement contracting strategy. In addition, the Navy no longer plans to introduce limited competition between the two shipbuilders for construction of the lead Flight III ship. The Navy's original acquisition strategy for the first three Flight III ships included limited competition between the two shipbuilders for the Flight III procurement. As part of this strategy, both shipbuilders would submit proposals for the additional work associated with the Flight III changes.
The shipbuilder that submitted the lowest proposal for the work would have received a higher percentage of target profit and would have been awarded the contract modifications to build the lead Flight III ship in fiscal year 2016 and one of the fiscal year 2017 ships; the other shipbuilder would build one fiscal year 2017 Flight III ship. In April 2016, the Navy issued a pre-solicitation notice stating that it now intends to issue a request for proposals to Bath Iron Works for the lead Flight III ship, which would be a sole-source award for the lead ship with no competition between the shipbuilders for profit. The extent to which the Navy plans to introduce limited competition into the Flight III modifications for the two fiscal year 2017 ships, and how such competition would be structured, remains uncertain. As of May 2016, the Navy had not demonstrated sufficient knowledge regarding its Flight III acquisition approach to modify the current multiyear procurement contracts to introduce these upgrades. In a June 2014 Flight III Acquisition Decision Memorandum, USD (AT&L)—the decision authority for the DDG 51 program—approved a plan to support a fiscal year 2016 program review of the Flight III upgrade prior to modifying ship construction contracts. Under this plan, the Navy is required to update its acquisition program baseline and test and evaluation master plan, among other documents, with Flight III-specific information. According to officials from the DDG 51 program and the Office of the Secretary of Defense, as of May 2016, the Navy was still in the process of updating these documents. The 2014 plan also requires the Navy to ensure that the Flight III program is fully funded in the Future Years Defense Plan and that CAPE assess the Navy's Flight III cost estimate. A prior Navy cost estimate completed in 2014 was not based on knowledge of the current Flight III baseline, making it difficult to use the estimate as support for construction award decisions. CAPE officials stated that they began working with the Naval Center for Cost Analysis in November 2015 to ensure that the Navy cost estimate being developed in response to the 2014 direction incorporates data on all of the relevant factors that will influence the cost of Flight III ships. These factors include historical DDG 51 ship construction hours, maintenance cost trends, and shipyard labor cost trends, among others. According to a CAPE official, as of May 2016, the Navy had yet to provide a revised cost estimate to be assessed; thus, the estimate was not expected to be completed until the summer of 2016 at the earliest. Until this estimate is finalized and assessed by CAPE, the Navy will not have an independent perspective on its Flight III costs and, pursuant to USD (AT&L)'s 2014 requirements, cannot move forward with a contract for Flight III ships. The Navy originally scheduled a program review in March 2016 to approve its plans to award the lead Flight III ship, but the review was postponed because key contract-related activities had not been accomplished. Specifically, the Navy is required to release requests for proposals to modify the fiscal years 2016 and 2017 Flight III multiyear procurement and to evaluate the shipbuilders' proposals prior to this review. As of May 2016, the Navy had yet to release these requests for proposals.
Until the Navy complies with the documentation requirements established by USD (AT&L) in June 2014, releases the requests for proposals, and receives and evaluates the shipbuilders' proposals, it will not have achieved sufficient knowledge about its acquisition approach to make an informed decision to proceed with the Flight III modifications to the existing multiyear procurement contracts. Even if the Navy fulfills its documentation requirements, procuring the lead Flight III ship in fiscal year 2016 as currently planned increases cost risk for the lead ship because cost estimates will be based on limited detail design knowledge. The lead shipbuilder expects to have about 75 percent of zone design work remaining at the end of fiscal year 2016 and, as a result, procurement activities—including shipbuilder proposal development, Navy completion of construction cost estimates, and finalization of the target cost for constructing the lead Flight III ship—will not be informed by a more complete understanding of the Flight III design. Our prior work has found that cost estimates become more certain as a program progresses, as costs are better understood and program risks are identified. According to both shipbuilders, waiting until fiscal year 2017 to procure the lead Flight III ship would allow the Flight III design to further mature, which would provide greater confidence in their understanding of the Flight III design changes and how these changes will affect ship construction costs. By completing more detail design activities prior to procuring a Flight III ship, the Navy—and both shipbuilders—will be better positioned for Flight III procurement and construction. One shipbuilder also noted that waiting until fiscal year 2017 to procure the lead Flight III ship would enable the Navy to coordinate government-furnished equipment delivery schedules with suppliers to support the shipyard production need dates. Congress authorized procurement of up to 10 DDG 51 ships under the Navy's current multiyear procurement for fiscal years 2013 through 2017; the Navy's estimated cost savings of $1.54 billion did not take into account the differing costs between Flight IIA and Flight III ships. With the Navy planning for up to three of the ships to be in the Flight III configuration, some of the projected cost savings will be offset by the additional costs associated with Flight III ship construction. The Navy updated the estimated savings to $2.35 billion in September 2014 for 10 Flight IIA ships based on (1) additional savings achieved through DOD's Better Buying Power principles, (2) the exercise of an option for a tenth DDG 51 ship, and (3) lower cost estimates for the purchase of ship equipment. According to DDG 51 program officials, the Navy did not provide a revised estimate of savings based on the Flight III changes. While it is not known at this point whether the Navy will still achieve cost savings with the addition of Flight III ships to the fiscal year 2013-2017 multiyear procurement, the cost increases associated with the modifications for the Flight III upgrade will reduce the extent of the savings. The multiyear procurement statute does not require that existing savings estimates be updated to include costs associated with design changes, but doing so for the Flight III changes would provide improved transparency of costs for Congress and the taxpayers. It would also help inform a more realistic basis for estimating future multiyear procurement savings.
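The arithmetic behind an updated savings estimate is straightforward. The expression below is a sketch of the adjustment described above, with the Flight III cost deltas left symbolic because, per the report, they have not yet been negotiated.

\[
S_{\text{net}} \;=\; S_{2014} \;-\; \sum_{i=1}^{n}\bigl(C_i^{\text{III}} - C_i^{\text{IIA}}\bigr), \qquad n \le 3,
\]

where S_2014 is the $2.35 billion September 2014 savings estimate for 10 Flight IIA ships, and each C_i^III minus C_i^IIA is the yet-to-be-determined engineering change cost of converting ship i from the Flight IIA configuration, under which the contracts were priced, to the Flight III configuration. Reporting S_net rather than S_2014 is the transparency improvement the report describes.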
Once the current DDG 51 multiyear procurement ends in 2017, the Navy plans to award new multiyear procurement contracts to both shipyards, covering fiscal years 2018-2022, for the construction of the next 10 Flight III ships. In order to request authority from Congress to use a multiyear procurement contract to procure Flight III ships, the Navy must preliminarily find that several criteria will be met. However, based on our analysis, the Navy is not likely to be positioned to meet all of the criteria in time to seek authority to award the Flight III multiyear contracts in fiscal year 2018. This request for multiyear contracting authority would have to be submitted with the fiscal year 2018 President's budget request, scheduled to be released in February 2017. Table 8 shows the statutory criteria for requesting authority to use a multiyear procurement contract and the extent to which the Navy will be positioned to preliminarily find that they would be met by February 2017. For example, to request authority to use a multiyear contract, the Navy will have to preliminarily find that the design of the Flight III ships is stable. As we previously stated, the most complicated aspects of Flight III design work are ongoing, and detail design will not be complete until December 2017—well after the President's budget request is submitted to Congress. In addition, the Navy plans to begin construction of the lead Flight III ship in spring 2018. Until the Navy begins construction of the lead Flight III ship, it will not have sufficient knowledge to demonstrate a realistic construction estimate or cost savings because there will be no prior cost or construction history for the Flight III upgrade. Further, if new multiyear procurement contracts are awarded at the start of fiscal year 2018, technical risk for Flight III systems—such as the Aegis upgrades—will remain and ship design stability will not yet be achieved, because software coding for the Aegis combat system upgrade will only have just begun. Although the Navy has previously used multiyear procurement contracts for DDG 51 ships, it has typically first demonstrated production confidence by building ships in the corresponding configuration before employing a multiyear procurement approach. For example, the Navy built 10 Flight IIA ships before entering into a multiyear procurement for them. If the Navy proceeds with a multiyear procurement strategy for Flight III ships beginning in fiscal year 2018, it will be asking Congress to commit to procuring nearly half of the planned Flight III ships with an incomplete understanding of cost and, effectively, no Flight III construction history to support the decision. The Flight III upgrade represents a significant resource investment for the Navy, with more than $50 billion over the next 10 years devoted to designing and constructing 22 Flight III ships. Despite the magnitude of these costs and the degree of changes to the ship, DOD is not treating it as a new acquisition program. Instead, as permitted under law and regulation, the Flight III upgrade is being managed as a continuation of the existing DDG 51 program, which is currently designated as an ACAT 1D program.
Two key decision reviews have been held or are planned for Flight III: one in June 2014 to approve the beginning of detail design, and one to review the readiness of the program to proceed with Flight III ship construction, which was originally scheduled for March 2016 but has not yet occurred. The Navy is still conducting some activities for the Flight III upgrade that are commensurate with what is required of a new acquisition program, even though the upgrade is being managed as part of the existing DDG 51 program. Milestone B is normally the formal initiation of a DOD acquisition program, at which, for example, the acquisition program baseline—a document that establishes a program's business case—is approved by the program's decision authority. As part of the tailored plan for Flight III approved by USD (AT&L) in June 2014, the Navy is completing some, but not all, of the fundamental activities that are required at Milestone B. This includes development of a Flight III cost estimate that will be assessed by CAPE, which is being done in lieu of the independent cost estimate that would be typical for a new major defense acquisition program. Table 9 shows the degree to which the Navy is completing the fundamental activities for the Flight III upgrade that are required for new programs at Milestone B. Further, while the Navy held a Milestone B-like review for Flight III and is going to establish a new acquisition program baseline, Flight III is not a distinct acquisition program or major subprogram, which has implications for reporting requirements related to cost and schedule performance. In particular, since the Flight III upgrade is part of the existing DDG 51 program, certain oversight mechanisms that are generally set in motion after passing through Milestone B—such as reporting Nunn-McCurdy breaches of unit cost growth thresholds and periodic reporting of the program's cost, schedule, and performance progress—do not apply to Flight III separately from the overall program. Flight III performance measures do not have to be broken out in the DDG 51 program Selected Acquisition Report—a report submitted by DOD that provides Congress with information used to perform oversight functions—which diminishes transparency and encumbers oversight efforts. For example, the DDG 51 program's December 2015 Selected Acquisition Report did not include schedule estimates for any Flight III events involving the SPY-6 radar. Additionally, the average procurement unit cost of approximately $1.19 billion per DDG 51 ship reported in 2015 is significantly less than the average procurement unit cost currently anticipated for Flight III ships because it blends the costs of all DDG 51 ships, including the less expensive earlier flights. Without distinct Flight III information, decision makers will not be able to distinguish cost growth associated with the overall DDG 51 program baseline from Flight III cost growth, which may limit the effectiveness of oversight mechanisms, such as Nunn-McCurdy unit cost thresholds. Further, since the Navy is not reporting key events for Flight III as part of the overall DDG 51 program, Congress and the Office of the Secretary of Defense will not be made aware of any changes to Flight III's schedule via this standard reporting mechanism for acquisition programs. USD (AT&L) has the authority to designate major subprograms within major defense acquisition programs, like the DDG 51 program.
DOD’s guidance states that establishing a major subprogram may be advisable when increments or blocks of capability are acquired in a sequential manner. In the case of the DDG 51 program, designating the Flight III upgrade as a major subprogram would allow for oversight of the upgrade separate from the overall DDG 51 program. For example, Nunn-McCurdy breaches could be tracked and reported separately. Treating the upgrade as a major subprogram would also offer the ability to separately baseline and track cost (including unit cost), schedule, and performance for Flight III within the overall DDG 51 Selected Acquisition Report. This more granular level of reporting would provide Congress and the Office of the Secretary of Defense with greater visibility into the cost, schedule, and performance of the Flight III upgrade. In the future, as the Navy begins assessing solutions for the next surface combatant ship, it will need to make important decisions about evolving threats and the IAMD capabilities necessary to combat those threats. The Navy expects Flight III ships to meet key operational performance requirements in 2023, with full Flight III capabilities delivered in 2027. While Flight III ships will increase the fleet’s IAMD capabilities, they will not provide the level of capability that the Navy previously identified as necessary to address the more stressing IAMD threats. The ship’s limited weight and stability service life allowance resulting from Flight III’s design changes will also constrain the Navy’s ability to add capabilities in the future without removing existing equipment or making significant structural changes to the ship. The Navy is also considering the extent to which Flight III destroyers may be used instead of Navy cruisers to provide air and missile defense for carrier strike groups. In 2016, the Navy began a capabilities-based assessment to identify capability gaps and potential solutions for the next surface combatant ship, which, according to the Navy’s annual long-range shipbuilding plan, will be introduced in 2030. The Navy plans to meet key operational performance requirements for Flight III initial operational capability in 2023, as outlined in the Navy’s DDG 51 Flight III Capability Development Document, but is using an incremental approach to deliver the full capability planned for the ships. The first three Flight III ships will not include all of the Navy’s planned capabilities, and full capability for a Flight III ship is expected in 2027. The incremental approach is tied to the delivery of X-band radar and Aegis combat system capabilities. For the X-band radar, the Navy changed its original Flight III plans. Specifically, the Navy intended to develop two new radars—an X-band and an S-band—under the AMDR program to support Flight III ships. However, in 2012 the Navy altered its plans, with the AMDR program reduced to a new S-band radar development effort that would be paired with the existing SPQ-9B X-band radar for the first 12 Flight III ships. This decision helped reduce the risk associated with conducting parallel radar development efforts, but also delayed the timeline for the improved X-band capability planned for Flight III until the 13th ship, anticipated to be delivered in 2027. According to Navy officials, there are no plans to retrofit the first 12 Flight III ships with the new radar once it is available.
The Navy has not yet begun planning for the new X-band radar program, and initial budgeting activities are not expected until at least 2018, with the new radar expected to be part of the Flight III baseline in fiscal year 2022. The first three Flight III ships will include Advanced Capability Build (ACB) 20 core capabilities, with a second phase of capability improvements intended to be provided beginning with the fourth ship. Figure 9 illustrates the Navy’s planned approach for introducing additional capability to Flight III ships. Ultimately, the DDG 51 Flight III cannot provide the SPY+30 capability needed to address the threats identified in the 2007 MAMDJF analysis because the SPY-6 radar, which is as large as can be accommodated by the Flight III configuration, is not able to achieve this capability. The MAMDJF analysis stated that a large radar was needed on a surface combatant to counter the most stressing ballistic and cruise missile threats expected in the 2024 to 2030 time frame. In 2009, the Navy’s Radar/Hull Study looked at ways to leverage existing Navy destroyer designs to address less stressing threats in the near term at less cost. Raytheon representatives stated that the SPY-6 radar’s performance in testing shows it provides SPY+17 capability, exceeding the SPY+15 requirement for Flight III and providing greater performance than existing radars. DDG 51 Flight III ships with the SPY-6 radar are expected to deliver the capability necessary to counter the near-term threats identified in the Radar/Hull Study. Navy officials affirmed that the SPY-6 radar, if already available to the fleet, would help combat current threats. Navy officials also agreed that the threats identified in the 2007 Analysis of Alternatives remain valid. The actual threat environment when the first Flight III ships are delivered is more likely to reflect the threats outlined in the MAMDJF Analysis of Alternatives, as opposed to the less stressing threats outlined in the Radar/Hull Study. As shown in figure 10, the time frame for the threat environment assumed by the Radar/Hull Study will have passed by the time the lead Flight III ship is delivered to the fleet; at that point, the more stressing threat environment outlined in the MAMDJF Analysis of Alternatives will be imminent. Under the Navy’s acquisition approach, six of the Flight III ships planned for the fiscal year 2018-2022 multiyear procurement, and over three-fourths of the 22 total planned ships, will be outpaced by the threat environment identified in the MAMDJF Analysis of Alternatives. To account for the gap between the anticipated radar capability need and what SPY-6 can provide, the Navy may consider other maritime platforms that can accommodate a larger-scale version of SPY-6 or the use of radars on multiple ships. For example, as we found in 2012, the Navy altered its concept of the number of ships that will be operating in an IAMD environment in an effort to address the gap that exists between the 2007 Analysis of Alternatives’ stated need and the expected SPY-6 capability. Specifically, rather than one or a small number of ships conducting IAMD alone and independently managing the most taxing threat environments without support, the Navy has envisioned multiple ships that can operate in concert with different ground- and space-based sensor assets to provide cueing for SPY-6 when targets are in the battlespace.
The cueing would mean that the ship could be told by off-board sensors where to look for a target, allowing for earlier detection and a larger coverage area. According to Navy requirements officers, the Navy is examining this concept—referred to as sensor netting—to augment radar capability, but the viability of this operational concept has yet to be proven. The MAMDJF Analysis of Alternatives had originally excluded DDG 51-class ships from consideration as the platform for the SPY-6 radar due, in part, to minimal opportunity for growth and limited service life. Weight and center of gravity service life allowance limitations, in particular, affected the Navy’s decisions about Flight III capabilities from the outset. Specifically, the SPY-6 radar was sized to provide the largest radar feasible for the Flight III configuration without requiring major structural changes to the hull form and design. A larger ship could have taken advantage of the scalability of the SPY-6 radar by installing a larger radar that would provide the Navy with increased capability. Thus, for any future capability upgrades to Flight III related to the radar or other systems, the Navy will have to consider significant changes to the DDG 51 hull form. Navy officials stated that adding a new section (called a plug) to the middle of the existing hull form is one option by which the Navy could achieve the additional square footage necessary to accommodate a larger radar. However, the Navy has never executed a plug for a complex, large surface combatant ship, and the associated design effort would likely be complicated and costly. The Navy’s weight estimate for Flight III ships has remained relatively stable throughout design, with overall weight growth of 159 tons since 2012. Navy officials acknowledged that the addition of the SPY-6 radar consumed a significant amount of the ship’s vertical center of gravity service life allowance. Navy weight and vertical center of gravity allowances enable future changes to the ships, such as adding equipment, and allow for reasonable growth during the ship’s service life without unacceptable impacts. The Naval Sea Systems Command’s architecture standards for surface combatants call for an allowance of 10 percent of ship weight and 1 foot of vertical center of gravity. According to program officials, the Navy accepts that Flight III will have a smaller available service life allowance margin because DDG 51-class ships are inherently dense by design. As figure 11 shows, according to Navy estimates, Flight III ships will be essentially right at the service life allowance standard for weight and well below the vertical center of gravity standard, even with the planned service life allowance improvements included as part of Flight III design. According to Navy requirements officers, Flight III’s upgrade potential will require trade-offs with the currently planned systems, and the Navy has already identified several other Flight III capability limitations as a result of DDG 51’s hull size. For example, the Navy is unsure how the addition of the future X-band radar would affect the Flight III ship’s center of gravity. In 2012, a Navy technical study on Flight III found that the addition of a new X-band radar would most likely require additional electric and cooling capacity beyond what is being introduced as part of the Flight III configuration, which would necessitate the addition of another generator and air conditioning plant and create subsequent equipment arrangement challenges.
Navy officials stated that, based on their improved understanding of the Flight III design, they now expect that there may be enough electric capacity to forgo the need for an additional generator. Similarly, the Navy is planning to begin a study to determine if an upgraded electronic warfare system—included in the initial Flight III concept—can be accommodated within the ship’s existing constraints. With the pending retirement of the CG 47 Ticonderoga-class cruisers and no new cruiser currently being developed, the Navy has expressed concern about a destroyer supporting the Navy commander’s role in providing air and missile defense for a carrier strike group. Specifically, an air warfare commander (AWC), who is typically the commanding officer of a Navy cruiser within a carrier strike group, is responsible for defense against air and missile threats and requires crew and command, control, communications, and computer resources to fulfill this role. While destroyers and cruisers both use the Aegis combat system and can accommodate AWC staff, the Navy has noted that the cruisers were built to support an AWC and are the most capable ships for fulfilling this role. Further, the Navy found through analysis of a Flight III technical feasibility study that the Flight III design does not have an increased capacity to readily enable the functionality required by a major warfare commander. A former Commander of Naval Surface Forces identified some notable differences for meeting AWC responsibilities on the different ships, including: The cruisers are commanded by a captain and have a more senior staff on the ship, with more individuals dedicated to the planning and execution of the air defense mission for the carrier strike group. By contrast, the destroyers are commanded by a commander with a less experienced, though capable, staff that will typically operate in a support role. If the AWC role were to transition permanently to the destroyers, additional training and expertise would be required for the staff. In the second year of its analysis of DDG 51 Flight III technical feasibility, the Navy estimated that for the AWC role to be executed on a Flight III, personnel would need to be increased to fill 15-18 additional positions. The total number needed depends on ballistic missile defense capability requirements. Unlike destroyers, the cruisers have radar array and transmitter redundancies that help avoid losing radar capability if the ship is damaged in combat. The cruisers also have a greater capacity—about 25 percent more than a Flight IIA—for launching surface-to-air missiles in support of the air defense mission. The cruisers have increased command-and-control capability over the guided-missile destroyers. This includes greater radio and satellite communication suites than a destroyer, as well as extra space for AWC staff—20 consoles in the combat information center compared to 16 on a DDG 51. Navy requirements and DDG 51 program officials stated there are no current plans to have Flight III ships permanently replace the cruisers with respect to AWC operations. The Navy included a requirement for AWC equipment and crew accommodations in the Flight III upgrade. According to Navy officials, the equipment and accommodations will provide enhanced ballistic missile defense capability and can provide temporary AWC capability; however, Flight III ships do not meet the longevity requirement for AWC operations, making their use as a one-for-one replacement for the cruiser less viable.
The AWC requirement for Flight III ultimately is an effort to reduce—but not eliminate—the capability gap created by the upcoming cruiser retirements. The Navy is currently conducting a capabilities-based assessment for future surface combatants, which will assess capability shortfalls and risks for surface combatant forces in the mid-21st century. According to Navy officials, this assessment will take into account the findings and gaps identified in the MAMDJF Analysis of Alternatives. The assessment is intended to provide a better understanding of the capability challenges that will result from the retirements of cruiser, littoral combat, and DDG 51 Flight IIA ships in the coming decades, but will not identify potential solutions to address those challenges. In addition to this ongoing assessment, the Navy identified plans in its fiscal year 2016 annual long-range shipbuilding plan, submitted to Congress, for a future surface combatant ship, referred to as DDG(X), with the procurement of 37 ships to begin in 2030. This was a change from the Navy’s 2012 annual long-range plan, which included a future DDG 51 Flight IV, with the procurement of 22 ships to begin in 2032. The Navy is in the early stages of its planned investment of more than $50 billion over the next 10 years to design and construct 22 DDG 51 Flight III destroyers. While the Navy has made some good decisions in support of DDG 51 Flight III, including taking an incremental approach to developing and delivering new radar and Aegis combat system capabilities, several challenges in the design, development, integration, and testing of the radar, the upgraded combat system, and the ship itself will need to be overcome going forward. The Navy has implemented a number of practices to reduce program risk. However, the Navy is still defining requirements for an upgraded Aegis combat system, which must be successfully developed, integrated, and tested with the SPY-6 radar under a relatively compressed schedule that carries increased risk in order to meet Flight III’s schedule needs. Further, substantial design changes remain before the Navy will have a sufficient understanding of the resources required to support ship construction. Nevertheless, the Navy intends to ask Congress to commit to the initial ships and the succeeding multiyear procurement beginning in fiscal year 2018 with limited design and cost information in hand. This approach portends future risk, which amplifies the need for improved oversight mechanisms to facilitate greater transparency of Flight III’s cost, schedule, and performance. The considerable cost of the Flight III, AMDR, and Aegis programs, as well as the challenges the Navy faces in working to effectively synchronize their schedules, emphasizes the need to ensure a knowledge-based contracting approach and adequate program oversight. Many unknowns remain with regard to the cost and design of Flight III. In particular, the Navy’s plan to issue the lead Flight III ship construction modifications with limited design knowledge puts the government at greater risk that the contract modifications may not represent the true cost of implementing the changes during construction. A realistic assessment of Flight III costs gained through completing more of the ship design prior to procuring the lead ship would put the government in a better negotiating position—which is particularly important given that the lead ship is anticipated to be awarded on a sole-source basis.
Further, the Navy’s estimate of $2.35 billion in cost savings that it expects to achieve through the fiscal year 2013-2017 multiyear procurement has not been updated to reflect the additional costs to design and construct the Flight III ships. A more accurate assessment of the estimated cost savings for this current multiyear procurement would increase transparency into the expected cost savings. It would also provide valuable insight into expected savings for the next planned multiyear procurement of Flight III ships. The timing of the Navy’s request for authority for the next procurement is also a matter of concern. The Navy’s plan to request, in February 2017, multiyear procurement authority for fiscal years 2018-2022 means it would ask Congress to commit to procuring nearly half of the planned Flight III ships with an incomplete understanding of cost and, effectively, no Flight III construction history to support the decision. Although the department responded to our 2012 recommendation to improve program oversight by elevating the program’s milestone decision authority, Flight III’s status as a new configuration within the existing DDG 51 program, as opposed to its own acquisition program or a major subprogram, reduces congressional insight into cost and schedule plans and performance. Greater transparency of Flight III performance against cost and schedule goals, for example, in standard Selected Acquisition Reports to Congress on the DDG 51 class, would assist DOD and Congress in performing their oversight responsibilities. This oversight continues to be important, as the Navy still has risks to overcome in achieving the intended capabilities of Flight III ships. Greater transparency could also increase awareness of how any future Navy decisions to add capabilities to Flight III will affect the program, such as those related to the cost and schedule plans for the future X-band radar or plans to upgrade electronic warfare systems for later ships. To ensure a more accurate estimate of the expected cost savings under the fiscal year 2013-2017 multiyear procurement, Congress should consider requiring the Navy to update its estimate of savings, which currently reflects only Flight IIA ships, to increase transparency into costs and savings for Congress and the taxpayers, as well as provide improved information to support future multiyear procurement savings estimates. We recommend that the Secretary of Defense take the following three actions: To ensure the department and the shipbuilder have sufficient knowledge of the Flight III design and anticipated costs when making decisions on the award of the lead ship, we recommend that the Secretary of Defense direct the Secretary of the Navy to: Delay the procurement of the lead Flight III ship until detail design is sufficiently complete to allow the government to have a more thorough understanding of the costs and risks associated with Flight III ship construction.
To ensure sufficient knowledge of Flight III design and enable some Flight III construction history to inform cost expectations for future multiyear procurement decisions, we also recommend that the Secretary of Defense direct the Secretary of the Navy to: Refrain from seeking multiyear procurement authority from Congress for Flight III ships, as currently planned for 2018, until the Navy is able to preliminarily find, relying on DDG 51 Flight III data, that the Flight III configuration will meet the criteria for seeking multiyear procurement authority, such as a stable design and realistic cost estimates. To better support DDG 51 Flight III oversight, we recommend that the Secretary of Defense: Designate the Flight III configuration as a major subprogram of the DDG 51 program in order to increase the transparency, via Selected Acquisition Reports, of Flight III cost, schedule, and performance baselines within the broader context of the DDG 51 program. We provided a draft of this report to DOD for review and comment. Its written comments are reprinted in appendix II of this report. DOD partially concurred with our three recommendations. With regard to our recommendation to delay procurement of the lead Flight III ship until more detail design information is available, DOD acknowledged the importance of a thorough understanding of the costs and risks prior to making procurement decisions but does not believe the procurement should be delayed. We continue to believe that waiting until at least fiscal year 2017 to procure the lead Flight III ship would provide additional time to develop the detail design for Flight III and, in turn, support a more refined understanding of design changes and their implications for ship construction and costs prior to making significant contractual commitments. As noted in our report, both shipbuilders support this delay. Additionally, the Flight III program has yet to finalize its request for proposals for the lead ship and receive a shipbuilder response, both of which are required prior to the planned Defense Acquisition Board review—which was postponed indefinitely earlier this year—and are needed in order to proceed with the procurement of the lead Flight III ship. The positive aspects of delaying the lead ship procurement, combined with the reality that the department will be challenged to accomplish all of its requisite activities to procure the first Flight III ship before the end of fiscal year 2016, support lead ship procurement based on improved design knowledge in fiscal year 2017. Regarding our recommendation on the next planned multiyear procurement for DDG 51 Flight III ships, the department agreed that the criteria for seeking multiyear procurement authority must be met but disagreed that it should refrain from seeking that authority based on the current state of information available on the Flight III configuration. As we have emphasized, the Navy is unlikely to meet all of the criteria for requesting multiyear procurement authority using data from Flight III—particularly as they relate to cost and design stability—in time to seek authority to award the Flight III multiyear contracts planned for fiscal year 2018. Flight III detail design will be nearly a year away from completion when the President’s budget request for fiscal year 2018 would need to be submitted for such a procurement.
Further, construction of the first Flight III ship will only be about to begin, meaning there will be no Flight III construction history to inform estimates of ship costs or of the savings from the use of multiyear procurement. We believe the Navy’s Flight III multiyear procurement strategy lacks sufficient knowledge of design and cost and poses significant risk to the government. This includes the risk of Congress committing to procure nearly half of the planned Flight III ships without adequate information to support such a decision. Finally, while the department agreed that visibility into Flight III cost, schedule, and performance is important for oversight, and noted planned activities to provide such visibility, DOD does not plan to designate Flight III as a major subprogram. Instead, the department intends to continue reporting on DDG 51-class ships as a single major program in the Selected Acquisition Reports, as it has done through previous Flight upgrades. DOD stated that the major impediment to implementing our recommendation is the difficulty in allocating research, development, test, and evaluation costs for the Aegis weapon system. Although the Flight III information that the department stated it intends to provide in next fiscal year’s budget documentation and future Selected Acquisition Reports may help support oversight activities, we believe that designating Flight III as a major subprogram would enhance Flight III oversight efforts and is fitting for an acquisition that is expected to cost more than $50 billion over the next decade. We acknowledge the challenge noted by the department regarding the allocation of costs for the Aegis weapon system. However, the Navy has demonstrated the ability to provide sufficient Aegis funding information to support reporting on specific Aegis advanced capability builds that are designated for specific DDG 51 ships. For example, the fiscal year 2016 President’s budget submission outlines funding for different builds, including ACB 20, which is being developed for Flight III. We understand that some elements of Aegis cost may be more difficult to associate with Flight III because of software components shared across different baselines of the system. The Navy could communicate any limitations of the information as part of reporting Aegis cost information for Flight III. We continue to believe that the improved transparency that would be achieved by formally recognizing Flight III as a major subprogram would be beneficial to Congress and to the taxpayers. DOD also separately provided technical comments on our draft report. We incorporated the comments as appropriate, such as by providing additional context in the report. In doing so, we found that the findings and message of our report remained the same. In a few cases, the department’s suggestions or deletions were not supported by the preponderance of evidence or were based on a difference of opinion rather than fact. In those instances, we did not make the suggested changes. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix III. This report evaluates the Navy’s planned acquisition strategies for the DDG 51-class Flight III ships and the Air and Missile Defense Radar (AMDR) programs. Specifically, we assessed (1) the status of the Navy’s efforts to develop, test, and integrate the SPY-6 radar and Aegis combat system in support of DDG 51 Flight III, including plans for operational testing; (2) challenges, if any, associated with the Navy’s plans to design and construct Flight III ships; (3) the Flight III acquisition approach and oversight activities, including reporting on cost, schedule, and performance; and (4) the capabilities that Flight III ships are expected to provide and the extent to which these capabilities fulfill the Navy’s existing and future surface combatant needs. To assess the status of the Navy’s effort to develop, test, and integrate the SPY-6 radar and Aegis combat system in support of DDG 51 Flight III, we reviewed program briefings and schedules, results from recent test and design reviews, and other Navy, Department of Defense (DOD), and contractor documentation to assess the cost, schedule, and performance risks of the AMDR and Aegis combat system programs. We assessed the maturity of the technologies that make up AMDR to determine remaining risks to their development. We reviewed the acquisition program baseline, selected acquisition reports, and Defense Contract Management Agency assessments to determine the cost risk that exists within the program. We assessed the progress and existing risks for the SPY-6 radar and identified integration challenges with Aegis. We reviewed the results of the current Aegis testing and schedules for software development, including the Navy’s plans for the Aegis iteration that will support DDG 51 Flight III. To corroborate documentary evidence and gather additional information in support of our review, we met with officials from the Navy’s Program Executive Office (PEO) Integrated Warfare Systems (IWS) 1.0 and 2.0, which manage the Aegis and AMDR programs, respectively, and the Missile Defense Agency. Additionally, we met with representatives from Raytheon and Lockheed Martin, the prime contractors for the SPY-6 radar and Aegis combat system, respectively, to discuss the development efforts, test plans, and initial integration efforts for each capability. We met with officials from the Defense Contract Management Agency to discuss Raytheon’s SPY-6 radar development activities and performance. We also met with officials from the Navy’s office of the Deputy Assistant Secretary of the Navy (Test and Evaluation) and relevant PEOs, as well as DOD’s offices of the Director, Operational Test and Evaluation (DOT&E), Deputy Assistant Secretary of Defense for Developmental Test and Evaluation, and Cost Assessment and Program Evaluation (CAPE) to discuss the use of a self-defense test ship for operational testing. This included discussion of how the Flight III integrated air and missile defense systems—particularly the SPY-6 radar, Aegis, and Evolved Sea Sparrow Missile systems—could be effectively tested to demonstrate the ship’s self-defense capabilities. We reviewed the Navy’s planned test approach and the technical aspects of the approach that have been the subject of disagreement between the Navy and DOT&E regarding the use of an unmanned self-defense test ship.
We assessed the fundamental differences between the two positions, including the costs associated with the use of a self-defense test ship for operational testing. In addition to the contents of this report, we are also issuing a classified annex—Arleigh Burke Destroyers: Classified Annex to GAO-16-613, Delaying Procurement of DDG 51 Flight III Ships Would Allow Time to Increase Design Knowledge (GAO-16-846C)—which contains supplemental information on the self-defense test ship issue for Flight III. To determine what challenges, if any, are associated with the Navy’s approach to designing and constructing Flight III ships, we reviewed Navy and contractor documents that address the technologies being introduced as part of Flight III, including program schedules and briefings, test reports, and design progress reports. We compared Flight III design changes—including the number, type, and location of those changes—to Navy and contractor estimates and to previous DDG 51-class upgrades to assess the complexity of the Flight III design. We evaluated Navy and contractor documents outlining schedule parameters for DDG 51 Flight III ships, including budget submissions, contracts, cost estimates, reports to Congress, and program schedules and briefings. We analyzed the extent to which these parameters have changed over time for Flight III and compared them with our prior work on shipbuilding best practices. To assess the Flight III acquisition approach and oversight activities, including reporting on cost, schedule, and performance, we reviewed the acquisition strategy and other key documents, including DOD memorandums and reports to Congress, which outlined the Navy’s acquisition approach for Flight III. We compared the Navy’s acquisition strategy against the documentation and requirements typically necessary for a new acquisition program based on DOD acquisition guidance. We reviewed Navy program briefings, reports to Congress, and testimony statements to identify how the Flight III acquisition strategy has changed over time and the extent to which the Navy has completed key activities that are part of its acquisition approach, including updating documents and holding program reviews. We also assessed the Flight III contracting strategy by comparing the Navy’s knowledge of Flight III design and construction to the statutory criteria required for requesting authorization to use a multiyear contract. To further corroborate documentary evidence and gather additional information in support of our review, we conducted interviews with relevant Navy officials responsible for managing the design and construction of DDG 51 Flight III ships, such as those within PEO Ships, the DDG 51 program office, the Electric Ships program office, PEO IWS, Naval Sea Systems Command’s Naval Systems Engineering, and the Supervisor of Shipbuilding, Conversion, and Repair. We also met with representatives from the lead and follow shipyards—Bath Iron Works Corporation and Huntington Ingalls Industries—to understand their role in Flight III design and development. To understand Flight III cost considerations, we interviewed CAPE officials about cost estimation activities for the program.
To assess what capabilities Flight III ships are expected to provide and the extent to which these capabilities fulfill the Navy’s existing and future surface combatant needs, we compared the Navy’s 2009 Radar/Hull Study—which was the main tool the Navy used to identify the DDG 51 Flight III as the platform for AMDR—with the Navy’s Maritime Air and Missile Defense of Joint Forces (MAMDJF) Analysis of Alternatives, a 2007 Navy study related to ballistic missile defense and integrated air and missile defense. We reviewed the Capability Development Documents for both DDG 51 Flight III and AMDR and other Navy documentation to determine the capabilities that the Navy had originally planned to include as part of the Flight III configuration. We compared these capabilities against those that are currently expected to be delivered as part of the first three Flight III ships. We assessed the extent to which the planned Flight III and AMDR capabilities fulfill requirements for surface combatants based on the requirements stated in the MAMDJF analysis. We also reviewed the potential for Flight III to fulfill air and missile defense requirements that are currently the responsibility of the Navy’s cruiser fleet. We examined ship weight reports and other Navy, DOD, and contractor documentation to analyze Flight III’s service life allowance and determine the extent to which future upgrades can be introduced onto the ship. To further corroborate documentary evidence and gather additional information to support our review, we met with officials from the office of the Chief of Naval Operations to discuss the status of the Navy’s current and any future studies related to surface combatants and integrated air and missile defense capabilities. We also met with officials from PEO Ships, PEO IWS, and the Joint Staff. We conducted this performance audit from July 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Michele Mackin, (202) 512-4841 or [email protected]. In addition to the contact above, Diana Moldafsky, Assistant Director; Pedro Almoguera; Laura Greifner; Laura Jezewski; C. James Madar; Sean Merrill; Garrett Riba; Roxanna Sun; James Tallon; Hai Tran; and Alyssa Weir made key contributions to this report.
Over the next 10 years, the Navy plans to spend more than $50 billion to design and procure 22 Flight III destroyers, an upgrade from Flight IIA ships. Flight III ships will include the new SPY-6 radar system and upgrades to the Aegis combat system, which provides ballistic missile defense. Using a multiyear procurement (MYP) approach requires the Navy to seek authority from Congress. House Report 114-102 included a provision for GAO to examine the Navy's plans for the DDG 51 Flight III ships and the Air and Missile Defense Radar (AMDR). This report assesses (1) the status of efforts to develop, test, and integrate SPY-6 and Aegis in support of Flight III; (2) challenges, if any, associated with the Navy's plans to design and construct Flight III ships; and (3) the Flight III acquisition approach and oversight activities, among other issues. GAO reviewed key acquisition documents and met with Navy officials, other Department of Defense (DOD) officials, and contractors. The AMDR program's SPY-6 radar is progressing largely as planned, but extensive development and testing remains. Testing of the integrated SPY-6 and full baseline Aegis combat system upgrade—beginning in late 2020—will be crucial for demonstrating readiness to deliver improved air and missile defense capabilities to the first DDG 51 Flight III ship in 2023. After a lengthy debate between the Navy and DOD's Director of Operational Test and Evaluation, the Secretary of Defense directed the Navy to fund unmanned self-defense test ship upgrades for Flight III operational testing, but work remains to finalize a test strategy. Flight III ship design and construction will be complex—primarily due to changes needed to incorporate SPY-6 onto the ship, as shown in the figure. The Navy has not demonstrated sufficient acquisition and design knowledge regarding its Flight III procurement approach, and opportunities exist to enhance oversight. If the Navy procures the lead Flight III ship in fiscal year (FY) 2016 as planned, limited detail design knowledge will be available to inform the procurement. In addition, the Navy's anticipated cost savings under the FY 2013-2017 Flight IIA MYP do not reflect the planned addition of Flight III ships. While the Navy did not update its cost savings estimate with Flight III information, doing so would increase transparency and could help inform expected savings under the next MYP. The Navy plans to request authority to award new Flight III MYP contracts (FY 2018-2022) in February 2017. The Navy will thus be asking Congress for authority to procure nearly half of the Flight III ships before it is able to meet the criteria required to seek that authority. For example, detail design will not be complete, and costs will not be informed by any Flight III construction history. Finally, Flight III cost and schedule performance is not distinguished from that of the overall DDG 51 ship class in annual reports to Congress. Establishing Flight III as a major subprogram would improve reporting and offer greater performance insight. Congress should consider requiring an update of estimated savings for the current DDG 51 MYP to reflect the addition of Flight III ships. The Navy should delay procurement of the lead Flight III ship and refrain from seeking authority for an MYP contract until it can meet the criteria required for seeking this authority. DOD should also designate Flight III as a major subprogram to improve oversight. DOD partially concurred with all three recommendations but is not planning to take any new actions to address them.
GAO continues to believe the recommendations are valid.
The Military Health Services System (MHSS), with an annual cost of over $15 billion, has the dual mission of providing medical care to the military forces during war or conflict and to military dependents and retirees. The MHSS consists of over 90 deployable combat hospitals that are solely devoted to the wartime mission. In addition, over 600 medical treatment facilities, such as medical centers, community hospitals, and clinics, are available worldwide to care for wartime casualties, but also provide peacetime care to active duty dependents and retirees. The system employs over 184,000 military personnel and civilians, with an additional 91,000 medical personnel in the National Guard and Selected Reserve. In the post-Cold War era, personnel downsizing and constrained budgets focused attention on DOD’s need to determine the appropriate size and mix of its medical force. In 1991, the Congress required DOD to reassess its medical personnel requirements based on a post-Cold War scenario. Specifically, section 733 of the National Defense Authorization Act for Fiscal Years 1992 and 1993 (P.L. 102-190, December 5, 1991) required, among other things, that DOD determine the size and composition of the military medical system needed to support U.S. forces during a war or other conflict and identify ways of improving the cost-effectiveness of medical care delivered during peacetime. In April 1994, DOD completed the required study, known as the “733 study.” Although the study included all types of medical personnel, it used physicians to illustrate key points. It estimated that about 50 percent of the 12,600 active duty physicians projected for fiscal year 1999 were needed to treat casualties emanating from two nearly simultaneous major regional conflicts (MRC). When reserve forces were included, the study showed that the 19,100 physicians projected for fiscal year 1999 could be reduced by 24 percent. In March 1995, we testified that the 733 study results were credible and that its methodology was reasonable. However, we noted that the study’s results differed from the war plans prepared by the commanders in chief (CINC) for the two anticipated conflicts, due mainly to different warfighting and casualty assumptions. Following the 733 study, each service used its own model to determine wartime medical personnel requirements. Using these models, the services estimated that their wartime medical personnel requirements were nearly as high as the personnel levels projected for fiscal year 1999—offsetting most of the reductions suggested in the 733 study. Over the past several years, the services have maintained essentially the same number of active duty physicians, even though active duty end strengths have dropped considerably. The Navy developed a model known as the Total Health Care Support Readiness Requirement to correct what it viewed as inaccuracies in the 733 study. The Air Force also developed a model patterned closely after the Navy’s. In their models, the Navy and the Air Force used the medical personnel levels from the 733 study as their wartime baseline and then identified adjustments which, in their view, were needed to more accurately represent the personnel required to treat combat casualties and to maintain operational readiness and training. Using these models, the Navy and the Air Force, in the summer of 1995, identified wartime active duty medical personnel requirements that supported 99 percent and 86 percent, respectively, of their fiscal year 1999 projections.
The Army also developed a model, called the Total Army Medical Department Personnel Structure Model (TAPSM), to determine the medical personnel required to meet the demands of the two-MRC strategy. TAPSM differed from the Navy’s and the Air Force’s models in that the Army continued using its Total Army Analysis (TAA) process to estimate the baseline wartime requirements, whereas the Navy and the Air Force used the 733 estimates as their baseline. Building on the baseline obtained from TAA, the Army used TAPSM to determine the additional medical personnel needed for medical readiness, such as rotation and training. In the summer of 1995, the Army’s process identified wartime active duty medical personnel requirements that were 104 percent of the Army’s fiscal year 1999 projections. Major differences between the results of the service models and the 733 study occurred because the services made different assumptions about the personnel needed for medical readiness. These readiness requirements are intended to ensure that, at any point in time, DOD has enough personnel to care for deployed forces. Specifically, these readiness-related requirements support continuous training of medical personnel and a medical cadre in the United States that can replace or relieve deployed personnel as needed. While the 733 study made some provision for such requirements, the services’ estimates assume that a much higher number of medical personnel are needed for such training and rotation. The services’ estimates of wartime requirements support a medical force projection that does not decrease nearly as much as the active duty force. Responding to changes in the national military strategy, DOD projects that by 1999 the active duty force will be reduced by one-third from 1987 levels. At the same time, the services are projecting reductions of 16 percent in total active duty medical personnel and 4 percent in active duty physicians. The services’ modeling techniques for estimating medical personnel requirements appear reasonable. While we found some differences between the models, each determined requirements for similar categories of personnel. However, the models’ results depend largely on the values of the input data and assumptions. We assessed the services’ modeling techniques by comparing the attributes of each model to the methodology used in the 733 study, which we had previously concluded was reasonable. We found that the services’ modeling techniques were consistent with the 733 study in that they used (1) current defense planning guidance for two MRCs, (2) DOD-approved policies for evacuating casualties from the theater, and (3) casualty projections. Also like the 733 study, the services’ techniques included active duty and reserve personnel working in hospital and nonhospital functions, those working in graduate medical education programs, and those needed for rotation to overseas installations. However, as described previously, the services assumed more medical personnel would be needed for training and rotation associated with medical readiness. These assumptions, not the modeling techniques, accounted for a major difference between the results of the 733 study and the services’ models. The 733 study concluded that about 50 percent of the active duty physicians projected for fiscal year 1999 were not needed to meet wartime medical readiness requirements, while the services’ models supported a need for 96 percent of the fiscal year 1999 active duty physicians.
DOD’s current study of wartime medical personnel requirements, when completed, will present another analysis to compare with the services’ modeling techniques. This analysis could reveal methodological or other differences not currently identified. In the services’ medical personnel requirements processes, the demand for care emanating from the two-MRC strategy is translated into the number of hospital beds required. This demand is based on the number of anticipated casualties without regard to whether the beds will be staffed by active duty or reserve component medical personnel. The allocation between active and reserve components is made by analyzing when casualties are projected to occur during the conflicts and comparing that requirement with information on how soon active and reserve medical units can arrive in the theater. If high numbers of casualties in a theater are anticipated to occur early in a conflict, more active duty medical personnel will likely be required to provide medical care because active duty medical units generally can deploy more quickly than reserve units. Conversely, if high numbers of casualties do not occur until later in the conflict, the need for active duty medical personnel diminishes and more requirements can be met by reserve forces. DOD’s current study of medical requirements will examine the appropriateness of the mix between active duty and reserve medical forces. The outcome of this study will have important ramifications for sizing the medical components of each service and the number of medical personnel to remain on active duty status. If, for example, the study assumes that medical forces will be needed sooner than assumed in the 733 study, most, if not all, of the reductions in active duty medical personnel estimated in the original study could be nullified. On the other hand, if medical forces are assumed to deploy later, more reductions in active duty medical personnel could be made. DOD is currently updating its 733 study using a process intended to replace the individual service models for determining wartime medical personnel requirements. The update was directed by the Deputy Secretary of Defense, in August 1995, to respond to the continuing debate over the estimates for wartime medical personnel. The update is being led by the Director of DOD’s Office of Program Analysis and Evaluation, which also conducted the original 733 study, under the general direction of a steering group of representatives from several offices. The update will result in a new estimate of wartime medical demands derived from updated planning scenarios and force deployment projections. In an effort to arrive at one set of DOD requirements, the 733 update working groups have been attempting to reach agreement on the underlying assumptions with the key parties within DOD. However, the study’s planned March 1996 completion has been delayed because of disagreements over some assumptions, such as the population-at-risk and casualty rates. DOD officials have not provided a firm date for completing the study, but they believe they are making progress in reaching agreement on input assumptions. They also believe such an agreement will establish a unified process for determining DOD-wide wartime medical demands. After the wartime demand is established, the 733 update is expected to use a model to estimate the medical personnel needed to meet the demand.
DOD officials believe that, in the future, this model—the DOD Medical Sizing Model—will be used to determine total wartime medical personnel levels. According to DOD officials, if agreement is reached on the model and the assumptions to be used, wartime medical requirements will no longer be determined by the individual service models. We reviewed documents, reports, and legislation relevant to military medical staffing trends; each service’s medical staffing model; the DOD Medical Sizing Model; and the 733 update study. We interviewed officials from the Office of the Assistant Secretary of Defense for Health Affairs; DOD’s Office of Program Analysis and Evaluation; the Joint Staff; the Offices of the Surgeons General of the Army, the Navy, and the Air Force; the Office of Reserve Affairs; and the U.S. Army Concepts Analysis Agency in the Washington, D.C., area. We also interviewed officials from the U.S. Central Command, Tampa, Florida; the U.S. Transportation Command, Scott Air Force Base, Illinois; and the Army Medical Command, San Antonio, Texas. In assessing the reasonableness of the services’ modeling techniques, we compared the attributes of each model with the 733 study. We obtained information from each service on the model formats, the underlying assumptions, and the types and sources of information used in developing the models. We met with the service representatives responsible for developing and using the models to gain an understanding of how each model worked. We did not attempt an in-depth validation of the accuracy of each model; rather, we reviewed the models to see if their methodologies were generally consistent with the 733 study. We initially concentrated on how each model developed the active duty medical personnel requirements from the total wartime bed requirements. We also compared the services’ modeling techniques with each other. We intended to compare each service’s input values (rates) for such factors as wounded-in-action, conflict intensities, conflict durations, and disease and non-battle injuries with similar rates depicted in the CINC war plans and with the updated casualty rates being developed subsequent to the 733 study. However, before we started this phase, DOD decided to develop, as part of the 733 update, a single DOD-wide model for determining medical staffing requirements. Since the update is still ongoing, we are at this time unable to fully assess the reasonableness of the data inputs and assumptions, the appropriateness of the active/reserve component split, and the degree to which DOD integrates the medical requirements of the three services. We conducted our review from June 1995 to June 1996 in accordance with generally accepted government auditing standards. In oral comments, DOD fully concurred with this report’s findings and conclusions. We are sending copies of this report to other interested congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant, U.S. Marine Corps; and the Director, Office of Management and Budget. We will also send copies to others on request. If you or your staff have any questions about this report, please call me on (202) 512-5140. Major contributors to this report are listed in appendix I. Jeffrey A. Kans and Cary B. Russell.
Pursuant to a legislative requirement, GAO studied the reasonableness of the models each military service uses to determine appropriate wartime medical personnel force levels, focusing on the models' results, their methodologies, and their inclusion of active duty and reserve medical personnel. GAO found that: (1) in 1995, each service used its own model to determine wartime medical personnel requirements instead of adopting the results of the Department of Defense's (DOD) "733 study," which, among other things, sought to determine the size and composition of the military medical system needed to support U.S. forces during a war or other conflict; (2) taken together, the services' models offset nearly all of the reductions estimated in the 733 study, supporting, instead, a need for about 96 percent of the active duty physicians projected for fiscal year (FY) 1999; (3) much of this difference resulted because the services assumed that significantly more people were needed for training and maintaining personnel to relieve deployed medical forces; (4) given these results, DOD has not planned significant reductions in future medical forces; (5) by comparison, the overall DOD active duty end strengths are expected to decline by twice the rate of decline in medical forces from FY 1987 to FY 1999; (6) the modeling techniques the services used to determine medical requirements appear reasonable; (7) however, the results of the models depend largely on the values of the input data and assumptions used; (8) although their techniques differed in some ways, the services appropriately considered factors, such as current defense planning guidance, DOD policies for evacuating patients from the theater, and casualty projections; (9) the service models also included requirements for both active duty and reserve medical personnel; (10) at the time of GAO's review, the services had done more detailed analyses of the active duty requirements than of the reserve portion; (11) given the dichotomy between the results of the service models and the 733 study, in August 1995, the Deputy Secretary of Defense directed that the 733 study be updated and improved; (12) this ongoing study is intended to form the basis for a single DOD position on wartime medical demands and associated personnel; (13) as such, it is to resolve differences in the key assumptions that drive medical force requirements; (14) while the study was to be completed by March 1996, DOD has encountered difficulty in reaching agreement over some assumptions, such as the population-at-risk and casualty rates, and thus, the study has been delayed; and (15) the 733 update is using a unified DOD sizing model, which will supplant individual service models.
Mr. Chairman and Members of the Committee: I am pleased to be here today to discuss some of the possible implications of legislative proposals that would give new personnel flexibility to the Internal Revenue Service (IRS). With federal agencies now called upon to improve customer service and deliver better results to the American people while limiting costs, the need for a well-managed, well-qualified, and highly motivated workforce has never been greater. Therefore, it is not surprising that recent discussions have centered on the amount of flexibility federal agencies should have in hiring and managing their employees. With regard to the personnel flexibility proposals for IRS, I would like to make three points on the basis of our prior body of work in the human resource management area: First, because the proposals generally provide a broad outline for managing IRS employees, but not the details, it is difficult to predict to what extent the new provisions will help IRS improve its performance and overcome past problems. Second, the proposals, focusing as they do on customer service and on aligning employees' performance with the agency's mission, goals, and objectives, are in keeping with broad trends in the public and private sector that we have identified in our previous work. At IRS or any federal agency, the degree of commitment by top management will determine whether this new focus can be sustained. Third, federal agencies such as IRS need the flexibility to tailor their personnel approaches to best meet the demands of their missions. Along with this need for flexibility, there is a need to maintain oversight and accountability mechanisms that will ensure that agencies adhere to the statutorily required merit principles, such as maintaining high standards of integrity, conduct, and concern for the public interest, and to other national goals, such as veterans' preference. One way to balance flexibility with accountability would be to authorize the new flexibilities for a limited period of time. This would give IRS the opportunity to include effective planning and evaluative mechanisms in the test and would allow Congress to consider the effects of IRS' personnel changes before deciding whether they should be made permanent. We have examined two bills that would give IRS new flexibilities in managing its workforce: H.R. 2676, which passed the House of Representatives in November 1997, and S. 1174, which has been referred to the Senate Committee on Finance. The bills are similar in that both would give IRS additional flexibilities relating to performance management, staffing, and the development of demonstration projects. The Senate bill also includes classification and pay flexibilities ("broad-banding") and a provision for "critical pay authority" to help recruit and retain employees in highly skilled, high-level technical and professional positions. The new flexibilities in performance management, staffing, and pay would be granted permanently, while those initiatives IRS might develop under the bills' demonstration authorities would be subject to testing before being made permanent. The legislative proposals in H.R. 2676 outline a performance management approach for IRS that would include all IRS employees, with the exception of the IRS Oversight Board, the IRS Commissioner, and the IRS Chief Counsel.
The new performance management system would appear to cover Senior Executive Service (SES) members and non-SES employees alike, require that goals and objectives established through IRS organizational performance planning be linked to individual or group performance and used to make performance distinctions among employees or groups of employees, require performance appraisals to have at least two performance rating levels at fully successful or above, allow awards of up to 50 percent of salary for a small number of employees who report directly to the IRS Commissioner, and allow for employee awards based on documented financial savings. It would also require periodic performance evaluations to determine whether employees are meeting all applicable retention standards, and would use the results of employees' performance evaluations as a basis for adjustments in pay and other appropriate personnel actions. These new flexibilities would give IRS an opportunity to address some of its long-standing challenges, which include attracting and retaining the talent necessary to modernize its management practices and bring its technology and administrative systems up-to-date. The provisions may also help IRS focus its employees on the agency's fundamental responsibility for collecting the proper amount of taxes while, at the same time, providing courteous service to those who must pay the taxes. The details of the new performance management approach are left to the Commissioner, who is charged with developing a plan for the new system within 1 year. Leaving the details to the Commissioner is of course entirely consistent with the bills' approach of granting IRS somewhat greater flexibility to tailor its personnel management to the agency's particular needs. Until the Commissioner develops that plan, acting in accordance with both the new legislation and those provisions of Title 5 to which IRS would remain subject, and has some experience in implementing the new flexibilities, there is no way to predict just how helpful the new flexibilities may be in improving IRS' actual performance. To the extent that the performance management, staffing, and pay flexibility provisions, as implemented, contribute to improved IRS performance, they not only will be worth retaining in IRS, but also may be worthy of emulation elsewhere in the federal government. If certain provisions do not improve performance, or perhaps unexpectedly detract from performance or have other undesirable consequences, it may be useful to have a means of identifying these problems and pulling the plug if necessary. Under these circumstances, one useful alternative to permanently authorizing the performance management, staffing, and pay flexibility provisions might be found in the legislation itself. H.R. 2676 would allow the Commissioner to carry out demonstration projects without the screening and approval currently required under the Office of Personnel Management's (OPM) demonstration project authority. The time-limited projects as currently authorized in the bill could be conducted for such purposes as improving personnel management, providing increased individual accountability, and eliminating obstacles to dealing with poor performers. An alternative might be to add the performance management, staffing, and pay flexibility provisions to the authorized activities included in the proposed demonstration authority.
Including all of the authorized flexibilities under the demonstration authority would give IRS a chance to see just how well its new approaches work when put into action. IRS would have the opportunity to shape personnel approaches outside those currently available and to develop an evaluative mechanism to gather data on how well they work. Congress would have the opportunity to consider the effects of the new approaches before deciding if they should be made permanent. This option would also provide information that other agencies could use to assess whether similar changes might improve their personnel systems. In our contacts with human resource management experts from public- and private-sector organizations both here and abroad, we have found that successful organizations recognize the importance of organizational mission, vision, and culture as a means of focusing their workforce on the job at hand. At IRS, that job includes more than simply collecting taxes. For example, as Congress is now emphasizing to IRS, it includes fair treatment of the taxpayers as well. According to the House Ways and Means Committee report on H.R. 2676, the new proposals for personnel management at IRS would be aimed at establishing a balanced system of measures that would ensure that taxpayer satisfaction—i.e., customer service—is paramount throughout all IRS functions. For example, while giving IRS greater flexibility in distributing cash awards to employees, H.R. 2676 specifies that awards will not be based solely on tax enforcement results. This is consistent with our belief that IRS employees’ performance should be assessed using a balanced set of indicators. Therefore, we believe H.R. 2676 appropriately gives IRS the opportunity to factor in other measures, such as customer service results and employee behavior. The Committee report also said that the proposed legislation would refocus the IRS personnel system on the agency’s overall mission and on how each employee’s performance relates to that mission. Across government, some of the agencies now implementing the Government Performance and Results Act (known as GPRA or the Results Act) are engaged in similar efforts, aligning the performance expectations of each level of their organizations, and ultimately of each employee, with the agencies’ missions and strategic goals. The Results Act itself was based on principles and best practices established by successful private-sector organizations and by governments at the state and local level and abroad. The challenge for federal agencies such as IRS is to make these principles work for the federal government as well. Some federal agencies that have tried to align employee performance with agency missions and goals have noted the conceptual challenges involved in becoming more results-oriented. For example, when we reviewed the experiences of five regulatory agencies affected by the President’s March 1995 directive to measure agency and employee performance in terms of results, we found that some of the agencies were further along than others. Officials at the five agencies cited some barriers, mostly involving the need to clarify their missions and establish results-oriented goals and measures, that made creating results-oriented performance standards for employees more difficult. 
For example, at IRS, one of the five agencies we reviewed, officials said it was difficult to measure the impact that IRS taxpayer education and outreach efforts would have on the agency's goal of increasing voluntary tax compliance rates. To a significant extent, meeting the challenge of more effectively aligning employees' performance with organizational missions and goals will be an effort that succeeds or fails through its implementation. Nothing in current personnel law or regulation prohibits agencies from establishing goals or objectives for employees that are based on organizational goals, communicating these goals and objectives to the employees, and using these goals or objectives to make performance distinctions for purposes of adjustments in pay and other personnel actions. Still, while many agencies implementing the Results Act have tried to do these things, others have not. Some that have tried have found that the challenges involved are not so much a matter of restrictive personnel rules as of instilling in their managers and other employees a new understanding of their agencies' missions and goals and of what, for each employee, constitutes successful performance. Success at IRS will likewise require a long-term commitment by top management to the new performance management approach, to changing IRS' organizational culture to support it, and to holding all employees accountable for fulfilling IRS' commitment to the taxpayers. Both H.R. 2676 and S. 1174 also require that, before any flexibilities are exercised, management and the employee unions enter into a written agreement. This provision underscores the need for a shared commitment to improving performance at every level of the agency. It also underscores the importance of maintaining good working relationships between management and all employees. The proposals for new personnel flexibility at IRS are part of a broader set of proposals to restructure the agency and improve its performance. In facing new pressures to perform, IRS is not alone. In recent years, changes in social, economic, and technological conditions put new pressures on both public and private sector organizations, which had to deal with calls for better performance and growing demands for more responsive customer service, even as resources were becoming harder to come by. Many of these organizations have looked hard at their human resource management approaches, found them outmoded or too confining, and turned to new ways of operating. The new human resource management model that many of these organizations have chosen is more decentralized, more directly focused on mission accomplishment, and set up more to establish guiding principles than to prescribe detailed rules and procedures. Under this model, an organization adopts its human resource management practices because they support the organization's needs and mission, rather than because they conform with practices that have been adopted elsewhere. For example, recent legislation establishing performance-based organizations (PBO) includes personnel features that lie outside the structure of Title 5. The proposals for IRS we are discussing today are part of this general trend. In our previous work, we have recognized that to manage effectively for results, agencies need the flexibility to manage according to their needs and missions. Under the Results Act, managers are expected to be given greater flexibility to manage, but also to be held more accountable for results.
We have also found that, over the years, Title 5 has evolved to give federal agencies more flexibility than they once had—and often, more than they realize—to tailor their personnel approaches to their missions and needs. But we also know that the federal government has traditionally wanted certain principles to hold true for all its employees. The merit principles and certain other national goals, such as veterans’ preference, remain generally applicable to employees of all agencies. In fact, both H.R. 2676 and S. 1174, while giving new personnel flexibilities to IRS beyond those already available to it under Title 5, would specifically require that the agency continue to conform to the merit principles and other national goals. The question is, what sort of oversight is appropriate as agencies such as IRS gain additional personnel flexibilities outside the traditional purview of Title 5? The current civil service system is already highly decentralized, and current oversight is by no means uniform. What is commonly thought of as the “civil service”—the federal civilian workforce subject to all the provisions of Title 5 and overseen by OPM—comprises just more than half of all federal civil servants. Technically, this segment is known as the “competitive service,” which operates under the federal merit system. Other federal civilian employees are employed in agencies or other federal entities—such as government corporations (like the Tennessee Valley Authority) and quasi-governmental organizations (like the U.S. Postal Service)—that operate outside Title 5 or are statutorily excepted from parts of it. These workers, while all members of the civil service, are in the “excepted service” and are covered by a variety of alternative merit systems. One of Congress’ reasons for establishing alternative merit systems for some federal organizations was to give them a measure of freedom from the rules governing the competitive service under Title 5. Concerns over the constraints imposed by Title 5 have led to proposals such as those already accepted or pending regarding FAA, FBI, DOD, and IRS—proposals that could lead to an even more decentralized civil service. To the extent that agencies such as these gain flexibilities outside of Title 5, Congress will need to know whether, in planning and implementing their new approaches, these agencies continue to adhere to the merit principles and other national goals. However, the proposals for IRS do not make OPM’s role in this regard entirely clear. Congress has options of clarifying OPM’s role or taking a more direct hand itself in overseeing IRS’ new personnel practices. In closing, the proposals in H.R. 2676 and S. 1174 have been developed to provide IRS exceptions from various Title 5 personnel requirements that IRS believes impede its ability to accomplish its mission. In order to take full advantage of the lessons that implementation will yield, Congress may find it appropriate to incorporate all of the flexibilities into the demonstration authority provisions of the bills. With appropriate evaluative mechanisms included, this would allow for an informed judgment as to whether these flexibilities should be made permanently available to IRS as well as whether they possibly should be extended to other agencies. 
In addition, the bills’ provisions encouraging IRS to align its employees’ performance with IRS’ mission and goals are consistent with other public- and private-sector organizational trends that have been given congressional endorsement through the passage of the Results Act. However, success in achieving this alignment will require a culture change in IRS driven by a long-term managerial commitment. Finally, the granting of personnel flexibilities to federal agencies raises important issues as to the extent to which, or the mechanisms whereby, Congress or OPM will oversee these agencies to ensure their continued compliance with the merit principles and other national goals that undergird all federal employment. This concludes my prepared statement, Mr. Chairman. I would be pleased to answer any questions you or other Members of the Committee may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed the possible implications of proposed legislation that would give new personnel flexibility to the Internal Revenue Service (IRS). GAO noted that: (1) it examined two bills that would give IRS new flexibilities in managing its workforce: H.R. 2676 and S. 1174; (2) the bills are similar in that both would give IRS additional flexibilities relating to performance management, staffing, and the development of demonstration projects; (3) until the Commissioner of IRS develops an implementation plan, acting in accordance with both the new legislation and those provisions of Title 5 to which IRS would remain subject, and has some experience in implementing the new flexibilities, there is no way to predict just how helpful the new flexibilities may be in improving IRS' actual performance; (4) GAO believes that H.R. 2676 appropriately gives IRS the opportunity to factor in other measures, such as customer service results and employee behavior; (5) the proposals for new personnel flexibility at IRS are a part of a broader set of proposals to restructure the agency and improve its performance; (6) GAO has recognized that to manage effectively for results, agencies need the flexibility to manage according to their needs and mission; (7) GAO also found that, over the years, Title 5 has evolved to give federal agencies more flexibility than they once had--and often more than they realize--to tailor their personnel approaches to their missions and needs; (8) the merit principles and certain other national goals, such as veterans' preference, remain generally applicable to employees of all agencies; (9) both H.R. 2676 and S. 1174, while giving new personnel flexibilities to IRS beyond those already available to it under Title 5, would specifically require that the agency continue to conform to the merit principles and other national goals; (10) the proposals in H.R. 2676 and S. 1174 have been developed to provide IRS exceptions from various Title 5 personnel requirements that IRS believes impede its ability to accomplish its mission; (11) the bills' provisions encouraging IRS to align its employees' performance with IRS' mission and goals are consistent with other public- and private-sector organizational trends that have been given congressional endorsement through the passage of the Government Performance and Results Act; and (12) these proposals do not make clear the Office of Personnel Management's role in ensuring IRS' continued compliance with the merit principles.
An election is the act or process by which citizens cast a vote to select an individual for an office. Although an election is a single event, an election system involves the integration of the people, processes, and technology that are generally associated with the preparation and administration of an election. The basic goals of election systems in the United States are to enable every eligible citizen who wishes to vote to cast a single ballot in private and have the votes on that ballot counted accurately. Administering an election is a year-round activity that generally consists of the following:
Voter registration--This includes local election officials registering eligible voters and maintaining voter registration lists, including updates to registrants' information and deletions of the names of registrants who are no longer eligible to vote.
Absentee and early voting--This type of voting allows eligible persons to vote in person or by mail before election day.
The conduct of an election--This aspect of election administration includes preparation before election day, such as local election officials arranging for polling places, recruiting and training poll workers, designing ballots, and preparing voting equipment for use in casting and tabulating votes; and election day activities, such as opening and closing polling places and assisting voters to cast votes.
Vote counting--This includes election officials tabulating the cast ballots; determining whether and how to count ballots that cannot be read by the vote counting equipment; certifying the final vote counts; and performing recounts, if required.
As shown in figure 3, each stage of an election involves people and technology. Under its various constitutional authorities, Congress has passed legislation regarding the administration of both federal and state elections, including voter registration, absentee voting, accessibility provisions for the elderly and handicapped, and prohibitions against discriminatory practices. Congress enacted the National Voter Registration Act of 1993 (NVRA), commonly known as the "Motor Voter" Act, to establish registration procedures designed to "increase the number of eligible citizens who register to vote in elections for Federal office," without compromising "the integrity of the electoral process" or the maintenance of "accurate and current voter registration rolls." NVRA expanded the number of locations and opportunities for citizens to apply to register. For example, under NVRA, citizens are to be able to apply to register (1) when applying for or renewing a driver's license; (2) at various state agencies, such as public assistance centers; or (3) by mailing a national voter registration application to a designated election official. NVRA also establishes requirements to ensure that state programs to identify and remove from voter registration rolls the names of individuals who are no longer eligible to vote are uniform, nondiscriminatory, and do not exclude a voter from the rolls solely because of his or her failure to vote. Finally, NVRA requires that the Federal Election Commission (FEC) submit to Congress a biennial report with recommendations assessing the impact of the NVRA on the administration of elections for federal office during the preceding 2-year period.
The Uniformed and Overseas Citizens Absentee Voting Act of 1986 (UOCAVA) requires that states permit the following categories of citizens to apply to register and vote by absentee voting in federal elections: (1) members of the uniformed services living overseas, (2) all other citizens living overseas, and (3) uniformed services voters and their dependents in the United States who are living outside of their voting jurisdiction. In addition, the Voting Accessibility for the Elderly and Handicapped Act of 1984 requires, with some exceptions, election jurisdictions to provide alternate means of casting a ballot (e.g., absentee and early voting) for all elections in which election day polling places are not accessible to people with disabilities. Congress, however, has been most active with respect to enacting prohibitions against discriminatory voting practices. For example, the Voting Rights Act of 1965 codifies and effectuates the Fifteenth Amendment's guarantee that no person shall be denied the right to vote on account of race or color. Subsequent amendments to the Act expanded it to include protections for members of language minority groups, as well as other matters regarding voting registration and procedures. States regulate the election process, including, for example, ballot access, registration procedures, absentee voting requirements, establishment of voting places, provision of election day workers, and counting and certification of the vote. As described by the Supreme Court, "the [S]tates have evolved comprehensive, and in many respects complex, election codes regulating in most substantial ways, with respect to both federal and state elections, the time, place, and manner of holding primary and general elections, the registration of voters, and the selection and qualification of candidates." In fact, the U.S. election system comprises 51 somewhat distinct election systems—those of the 50 states and the District of Columbia. However, although election policy and procedures are legislated primarily at the state level, states typically have decentralized this process so that the details of administering elections are carried out at the city or county levels, and voting is done at the local level. At the federal level, no agency bears direct responsibility for election administration. However, in 1975, Congress created FEC to administer and enforce the Federal Election Campaign Act. To carry out this role, FEC discloses campaign finance information; enforces provisions of the law, such as limits and prohibitions on contributions; and oversees the public funding of presidential elections. FEC's Office of Election Administration (OEA) serves as a national clearinghouse for information regarding the administration of federal elections. As such, OEA assists state and local election officials by developing voluntary voting system standards, responding to inquiries, publishing research on election issues, and conducting workshops on matters related to election administration. The administrative structure and authority given to those responsible for elections vary from state to state. The majority of states vest election authority in a secretary of state (or other state cabinet-level official) who is elected for a term of 2 to 4 years. The approval of voting equipment for use in a state may be a responsibility of the secretary of state or another entity, such as a State Board of Elections.
State officials usually provide information services and technical support to local election jurisdictions but seldom participate in the day-to-day administration of an election. Local election jurisdictions, such as counties, cities, townships, and villages, conduct elections, including federal and state contests. Although some states bear some election costs, it is local jurisdictions that pay for elections and provide the officials who conduct the elections. Local election administration officials may be elected, appointed, or professional employees. State or local regulations determine who functions as the chief elections official. Elections may be conducted by county or town clerks, registrars, election boards, bureaus, or commissions, or some combination thereof. The election administration official may have extensive or little experience and training in running elections. Local jurisdictions administer elections within the framework of state laws and regulations that provide for differing degrees of local control over how elections are conducted, including voting equipment to be used, ballot design, and voter identification requirements at polling places. One of the responsibilities of state and/or local election officials is to recruit, train, assign, and compensate permanent and temporary personnel. These personnel may include voting equipment operators, voter registrars, absentee ballot clerks, polling place workers, and election day phone bank operators. Depending on the jurisdiction, these workers could be part-time or full-time, appointed or elected, paid workers or unpaid volunteers. Some election workers support election administration activities during the year, and others work only on election day. For the November 2000 election, about 1.4 million poll workers staffed precincts across the country on election day. The size of local election jurisdictions varies enormously, from a few hundred voters in some rural counties to Los Angeles County, whose total of registered voters exceeds that of 41 states. For the purposes of voting, election authorities subdivide local election jurisdictions into precincts, which range in size from a few hundred to more than a thousand people. Voters are assigned to a specific precinct where they are to vote on election day. All voters in a precinct vote at one place, such as a school or other public facility. For the November 2000 election, there were about 186,000 precincts in about 10,000 local election jurisdictions. However, precincts may be combined in a single polling place. For example, voters from a few precincts in a small jurisdiction may vote in a single location, such as the town high school. Voting technologies are tools for accommodating the millions of voters in our nation's approximately 10,000 local election jurisdictions. These tools can be as simple as a pencil, paper, and a box, or as sophisticated as computer-based touchscreens—and one day, perhaps, Web-based applications running on personal computers. To be fully understood, all these technologies need to be examined in relation to the people who participate in elections (both voters and election workers) and the processes that govern their interaction with each other and with the technologies. To integrate the functions associated with readying vote casting and tallying equipment for a given election with other election management functions, jurisdictions can use election management systems.
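As a rough illustration of the election management systems just mentioned (and described further below), the following is a minimal Python sketch of the kind of data such a system might hold and how it might lay out a ballot for one precinct. The class names, fields, and sample contest are hypothetical assumptions for illustration, not any vendor's actual design.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contest:
    office: str
    candidates: List[str]
    vote_for: int = 1          # supports vote-for-no-more-than-N contests

@dataclass
class Precinct:
    name: str
    contests: List[Contest] = field(default_factory=list)

def generate_ballot(precinct: Precinct) -> str:
    """Lay out a simple text ballot for one precinct. A real election
    management system would also produce the data that programs the
    vote casting and tallying equipment."""
    lines = [f"Official Ballot -- Precinct {precinct.name}"]
    for contest in precinct.contests:
        lines.append(f"{contest.office} (vote for no more than {contest.vote_for}):")
        lines.extend(f"  [ ] {name}" for name in contest.candidates)
    return "\n".join(lines)

precinct = Precinct("12-A", [Contest("County Clerk", ["Candidate A", "Candidate B"])])
print(generate_ballot(precinct))
```

A production system would, of course, also generate the data that programs the casting and tallying equipment and manage functions such as absentee ballot processing.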
The methods by which votes are cast and counted in the United States today can be placed into five categories; the latter four methods employ varying degrees of technology. The five methods are paper ballot, lever machine, punch card, optical scan, and direct recording electronic (DRE). Table 1 shows the percentage of jurisdictions, precincts, and registered voters who used the different voting methods. The paper ballot and lever machines have been used in the United States for more than a century, and versions of the other three methods have been used for 20 to 40 years. For paper ballots, the vote count is done by hand; lever machines keep a mechanical count. The three newer methods (punch card, optical scan, and DRE) depend on computers to tally votes. In three of the five methods (paper ballot, punch card, and optical scan), voters use paper to cast their votes. In the other two methods (lever machine and DRE), voters manipulate the equipment. Each method possesses a unique history and set of characteristics. When these are overlaid with the evolution and composition of the more than 10,000 local election jurisdictions in the United States, the result is much diversity across the nation in the technology used to conduct elections and how it is used. The paper ballot, sometimes referred to as the Australian ballot, was first used in the United States in 1889 and is still used in some jurisdictions today. Paper ballots, which are generally uniform in size, thickness, and color, list the names of the candidates and the issues to be voted on. Voters generally complete their ballots in the privacy of a voting booth, recording their choices by placing marks in boxes corresponding to the candidates' names and the issues. After making their choices, voters drop the ballots into sealed ballot boxes. Election officials gather the sealed boxes and transfer them to a central location, where the ballots are manually counted and tabulated. Figure 4 shows an example of a paper ballot. In 1892, the lever voting machine, known then as the Myers Automatic Booth, was first used in the United States. By 1930, lever machines were used in almost all major cities, and by the 1960s, over half the nation's votes were cast and counted on lever machines. During this time, lever machines helped alleviate concerns about vote fraud and manipulation that were common with paper ballots. Unlike paper ballots, however, lever machines do not provide individual records of each vote. Lever machines are mechanical, with a "ballot" composed of a rectangular array of levers, which can be physically arranged either horizontally or vertically. Adjacent levers in each row are placed about one inch apart, and the rows of levers are spaced 2 to 3 inches apart. Printed strips listing the candidates and issues are placed next to each lever. Because the ballot is limited to the size of the front of the lever machine, it is difficult to accommodate multiple languages. When using a lever machine, voters first close a privacy curtain, using a long handle attached to the machine. They vote by pulling down those levers next to the candidates or issues of their choice. Making a particular selection prevents any other selection in that contest (unless it is a vote-for-no-more-than-N contest, in which case no more than N levers would be selectable). Overvoting is prevented by the interlocking of the appropriate mechanical levers in the machine before the election. Write-in votes are recorded on a paper roll within the lever machine.
The voter opens the write-in slot by moving the lever to the appropriate position and then writes in his or her choice on the exposed paper above the office name. Once this occurs, the machine locks and will no longer allow the voter to vote for another candidate listed on the ballot for that particular contest. After voting, the voter once again moves the handle, which simultaneously opens the privacy curtain, records the vote, and resets the levers. Figure 5 shows a lever machine. Votes are tallied by mechanical counters, which are attached to each lever. These counters rotate after the voter moves the handle to open the privacy curtain. The counters are composed of three gears—units, tens, and hundreds. Each vote causes the units gear to make one tenth of a turn. After 10 votes, the units gear returns to 0, and the tens gear advances to 1, indicating 10 votes. Similarly, after 100 votes, the tens gear returns to 0, and the hundreds gear advances to 1, indicating 100 votes. (A short sketch of this carrying logic appears at the end of this passage.) At the close of the election, election officials tally the votes by reading the counting mechanism totals on each lever voting machine. Some machines can also print a paper copy of the totals. The design of the lever machine does not allow for a recount of individual voter records. Therefore, if the machine malfunctions and a gear fails to turn, no record exists from which a proper tally can be determined. Mechanical lever machines are no longer manufactured. As a result, maintaining lever machines is becoming more challenging, and some jurisdictions have turned to "cannibalizing" machines to get needed parts. The punch card was invented by Herman Hollerith and was first used to help tabulate data for the 1890 U.S. Census. In the 1960s, this technology was first applied to vote casting and tallying. In 1964, Fulton and De Kalb counties in Georgia, Lane County in Oregon, and San Joaquin and Monterey counties in California were the first jurisdictions to use punch cards and computer tally machines in a federal election. Punch card voting equipment generally consists of a ballot, a vote recording device (this device holds the ballot in place and allows the voter to punch holes in it), a privacy booth, and a computerized tabulation device. There are two basic types of punch card devices: Votomatic and Datavote. The Votomatic relies on machine-readable cards that contain 228, 312, or 456 prescored numbered boxes representing ballot choices. The corresponding ballot choices are indicated to the voter in a booklet attached to the vote recording device, with the appropriate places to punch indicated for each candidate and ballot choice. To vote, the voter inserts the ballot into the vote-recording device and uses a stylus to punch out the appropriate prescored boxes. Votomatic punch card voting presents certain challenges because the ballot must be properly aligned in the vote-recording device for the holes in the ballot card to be punched all the way through. Incomplete punches are not uncommon, so that the rectangular scrap (the "chad") punched by the stylus may cling to the hole in the card and create what is referred to as a "hanging chad." Hanging chads can cause tabulation machines to read votes incorrectly and can make it difficult to determine voter intent in a recount or contested election. Voters cannot easily review a completed ballot, because the ballot lacks candidate or issue information, having only hole numbers.
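Returning briefly to the lever machine's counting mechanism, here is the short sketch promised above: a minimal Python model of the odometer-style, three-gear counter. It illustrates only the carrying behavior described in the text, not any particular machine.

```python
class LeverCounter:
    """Minimal model of the three-gear mechanical counter attached to one
    lever: each gear shows a digit from 0 to 9, and a full revolution of
    one gear carries into the next, like a mechanical odometer."""

    def __init__(self):
        self.units = 0
        self.tens = 0
        self.hundreds = 0

    def record_vote(self):
        self.units += 1
        if self.units == 10:            # full revolution of the units gear
            self.units = 0
            self.tens += 1              # carry: the tens gear advances one step
            if self.tens == 10:
                self.tens = 0
                self.hundreds += 1      # carry into the hundreds gear
                                        # (a real counter would roll over at 999)

    def total(self):
        return 100 * self.hundreds + 10 * self.tens + self.units

counter = LeverCounter()
for _ in range(10):
    counter.record_vote()
print(counter.units, counter.tens, counter.hundreds)  # 0 1 0, i.e., 10 votes
```

Because the counter is the only record of the votes, a gear that fails to advance silently loses them; this is why, as noted above, no recount of individual voter records is possible.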
For write-ins, Votomatic voters must write the candidate's name on a separate piece of paper and attach it to the ballot. Figure 6 shows a Votomatic vote recording device and a Votomatic ballot. The Datavote also relies on a machine-readable card, but unlike the Votomatic, the names of the candidates and issues are printed on the card itself, eliminating the need for a ballot booklet. The ballots are not prescored, except for those used for absentee voting. The voter uses a stapler-like punching device to punch a hole corresponding to each candidate and issue. Spaces for write-in candidates are generally placed on the ballot. Because the candidates' names are printed on Datavote punch card ballots, each voter may require multiple ballot cards in elections that have a large number of candidates and issues. (Figure 7 shows a Datavote ballot.) For both the Votomatic and Datavote, software is used to program each vote tabulation machine to correctly assign each vote read into the computer to the proper contest and candidate or issue. Generally, the software is used to identify the particular contests in each precinct, assign punch card positions to each candidate, and configure any special options, such as straight party voting and vote-for-no-more-than-N contests. In addition, vote-tally software is often used to tally the vote totals from one or more vote tabulation machines. For both types of punch cards, jurisdictions can count the ballots either at the polling place or at a central location. In a polling place count, either the voters or election officials put the ballot cards into the vote tabulators. In a central count, voters drop ballots into sealed boxes, and the sealed boxes are transferred to a central location after the polls close. At the central location, ballots are run through the vote tabulators. In either case, the tabulator counts the ballots by reading the holes in the ballots. Generally, central-count tabulators are higher speed machines, allowing more ballots to be counted in less time than precinct-based machines can. Both precinct-count and central-count tabulators store votes on electronic storage media. These media can be removed manually, or the vote data can be transferred via cable communication. Figure 8 shows punch card tabulation machines. Optical scan technology has been used for decades for such tasks as scoring standardized tests, but it was not applied to voting until the 1980s. An optical scan voting system consists of computer-readable ballots, appropriate marking devices, privacy booths, and a computerized tabulation machine. The ballot can vary in size and lists the names of the candidates and the issues. Voters record their choices using an appropriate writing instrument to fill in boxes or ovals, or to complete an arrow next to the candidate's name or the issue. The ballot includes a space for write-ins to be placed directly on the ballot. Figure 9 shows an optical scan ballot. Like punch card software, the software for optical scan equipment is used to program the tabulation equipment to correctly assign each vote read into the computer to the proper contest and candidate or issue (i.e., to assign the location of valid marks on the ballot to the proper candidate or issue). In addition to identifying the particular contests and the candidates in each contest, the software is also used to configure any special options, such as straight party voting and vote-for-no-more-than-N contests.
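To make the tabulation programming just described more concrete, here is a minimal sketch of how such software might map punch-card hole numbers (or optical-scan mark positions) to contests and candidates and then tally a batch of ballots. The positions, contest names, and vote-for limits are hypothetical, and real tabulation software is far more elaborate.

```python
# Hypothetical ballot definition: before the election, each punch-card hole
# number (or optical-scan mark position) is assigned to a contest and a
# candidate. The positions, names, and limits below are made up.
BALLOT_DEFINITION = {
    3: ("President", "Candidate A"),
    4: ("President", "Candidate B"),
    7: ("U.S. Senator", "Candidate C"),
    8: ("U.S. Senator", "Candidate D"),
}
VOTE_FOR_LIMITS = {"President": 1, "U.S. Senator": 1}  # vote-for-no-more-than-N

def tabulate(ballots):
    """Tally ballots, where each ballot is the set of positions the voter
    punched or marked. A contest with more marks than its limit is an
    overvote and records no vote; one with no marks is an undervote."""
    totals = {}
    for positions in ballots:
        marks_by_contest = {}
        for pos in positions:
            if pos in BALLOT_DEFINITION:
                contest, candidate = BALLOT_DEFINITION[pos]
                marks_by_contest.setdefault(contest, []).append(candidate)
        for contest, chosen in marks_by_contest.items():
            if len(chosen) > VOTE_FOR_LIMITS[contest]:
                continue                       # overvote: no vote counted
            for candidate in chosen:
                key = (contest, candidate)
                totals[key] = totals.get(key, 0) + 1
    return totals

# The second ballot overvotes the presidential contest, so only its
# Senate mark is counted.
print(tabulate([{3, 7}, {3, 4, 8}]))
```

The overvote handling in this sketch anticipates the precinct-count behavior described next: a contest with more marks than its limit records no vote, and a contest with no marks is simply an undervote.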
Precinct-based optical scanners can also be programmed to detect and/or reject overvotes and undervotes (where the voter does not vote for all contests and/or issues on the ballot). In addition, similar to punch cards, optical scan systems often use vote-tally software to tally the vote totals from one or more vote tabulation machines. Like punch cards, optical scan ballots are counted by being run through computerized tabulation equipment, in this case, optical-mark-recognition equipment. This equipment counts the ballots by sensing or reading the marks on the ballot. Ballots can be counted in the polling place or in a central location. If ballots are counted at the polling place, voters or election officials put the ballots into the tabulation equipment. In this case, either vote tallies can be captured in removable storage media that can be taken from the voting equipment and transported to a central tally location, or they can be electronically transmitted from the polling place to the central tally location. If ballots are centrally counted, voters drop ballots into sealed boxes, and election officials transfer the sealed boxes to the central location after the polls close, at which time election officials run the ballots through the tabulation equipment. Election officials can program precinct-based optical scan equipment to detect and reject overvotes and undervotes, which allows voters to fix their mistakes before leaving the polling place. However, if voters are unwilling or unable to correct their ballots, a poll worker can manually override the program and accept the ballot, even though it has been overvoted or undervoted. If ballots are tabulated centrally, voters do not have the opportunity to correct mistakes that may have been made. Precinct-count optical scan equipment sits on a ballot box with two compartments for scanned ballots—one for accepted ballots (i.e., those that are properly filled out) and one for rejected ballots (i.e., blank ballots, ballots with write-ins, or those accepted because of a forced override). In addition, an auxiliary compartment in the ballot box is used for storing ballots if an emergency arises (e.g., loss of power or machine failure) that prevents the ballots from being scanned. Figure 10 shows precinct- and central-count optical scan tabulators. First introduced in the 1970s, DRE equipment is an electronic implementation of the old lever machines. DREs come in two basic types, pushbutton or touchscreen, the pushbutton being the older and more widely used of the two. The two types of DREs vary considerably in appearance. Pushbutton DREs are larger and heavier than touchscreens. Figure 11 shows DRE pushbutton and touchscreen voting machines. Pushbutton and touchscreen DREs also differ significantly in the way they present ballots to the voter. With the DRE pushbutton, all ballot information is presented on a single "full-face" ballot. For example, a ballot may have 50 buttons on a 3 by 3 foot ballot, with a candidate or issue next to each button. In contrast, touchscreen DREs display the ballot information on an electronic display screen. For both pushbutton and touchscreen DREs, the ballot information is programmed onto an electronic storage medium, which is then uploaded to the machine. For touchscreens, ballot information can be displayed in color and can incorporate pictures of the candidates. Because a touchscreen's display is much smaller than a pushbutton machine's full-face ballot, voters who use touchscreens must page through the ballot information.
Both touchscreen and pushbutton DREs can accommodate multilingual ballots; however, because the ballot is limited to the size of the screen, pushbutton machines can generally display no more than two languages. Despite the differences, the two types of DREs have some similarities, such as how the voter interacts with the voting equipment. For pushbuttons, voters press a button next to the candidate or issue, which then lights up to indicate the selection. Similarly, voters using touchscreen DREs make their selections by touching the screen next to the candidate or issue, which is then highlighted. When voters are finished making their selections on a touchscreen or a pushbutton DRE, they cast their votes by pressing a final "vote" button or screen. Both types of DREs allow voters to write in candidates. While most DREs allow voters to type write-ins on a keyboard, some pushbutton DREs require voters to write the name on paper tape that is part of the voting equipment. Unlike punch card and optical scan voting equipment, DREs do not use paper ballots. However, they do retain permanent electronic images of all the ballots, which can be stored on various media, including internal hard-disk drives, flash cards, or memory cartridges. These ballot images, which can be printed, can be used for auditing and recounts. Like punch card and optical scan devices, DREs require the use of software to program the various ballot styles and tally the votes, which is generally done through the use of memory cartridges or other media. The software is used to generate ballots for each precinct within the voting jurisdiction, which includes defining the ballot layout, identifying the contests in each precinct, and assigning candidates to contests. The software is also used to configure any special options, such as straight party voting and vote-for-no-more-than-N contests. In addition, for pushbutton DREs, the software assigns the buttons to particular candidates and, for touchscreens, the software defines the size and location on the screen where the voter makes the selection. Vote-tally software is often used to tally the vote totals from one or more DREs. DREs also offer various configurations for tallying the votes. Some contain removable storage media that can be taken from the voting equipment and transported to a central location to be tallied. Others can be configured to electronically transmit the vote totals from the polling place to a central tally location. Because all DREs are programmable, they offer various options that are not as easily supplied by other voting methods. For example, they do not allow overvotes. In addition, voters can change their selections before hitting the final button to cast their votes. DRE touchscreens offer the most flexibility because they can present numerous screens of data; for example, they allow unlimited multilingual ballots, unlike pushbutton DREs. They can also offer a "review" feature (i.e., requiring voters to review each page of the ballot before pressing the button to cast the vote) and various visual enhancements (such as color highlighting of ballot choices, candidate pictures, etc.). Each type of voting equipment performs critical vote casting and tallying functions. However, before the equipment can be used in any given election to perform these functions, election officials must program the equipment to accommodate the unique characteristics of that election.
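As a rough sketch of the DRE behavior described above (selections that can be changed freely until a final "vote" control is pressed, with overvotes made impossible by construction), consider the following; the interface and names are illustrative assumptions only, not any manufacturer's design.

```python
class DreContest:
    """Sketch of one contest on a DRE: selections toggle freely until the
    voter presses the final 'vote' control, and the machine never accepts
    more selections than the vote-for-N limit, so overvotes cannot occur."""

    def __init__(self, candidates, vote_for=1):
        self.candidates = candidates
        self.vote_for = vote_for
        self.selected = set()

    def touch(self, candidate):
        """Toggle a candidate, as a touch or button press would."""
        if candidate in self.selected:
            self.selected.remove(candidate)        # voter changed his or her mind
        elif len(self.selected) < self.vote_for:
            self.selected.add(candidate)           # accepted, up to the limit
        # A selection beyond the limit is simply not accepted.

    def cast(self):
        """The final 'vote' control records a permanent ballot image."""
        return frozenset(self.selected)

contest = DreContest(["Candidate A", "Candidate B"])
contest.touch("Candidate A")
contest.touch("Candidate B")   # not accepted: the vote-for-1 limit is reached
contest.touch("Candidate A")   # deselected
contest.touch("Candidate B")   # now accepted
ballot_image = contest.cast()  # frozenset({'Candidate B'})
```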
For example, regardless of the voting equipment used, election officials must prepare a ballot that is unique to that election and, depending on the voting equipment, program the equipment to present the ballot to the voter and/or read the ballot as voted. Election management systems integrate the functions associated with readying vote casting and tallying equipment for a given election with other election management functions. Election management systems run on jurisdictions' existing personal computers or vendor-provided election management system computer platforms. In brief, election management systems (hardware and software) generally consist of one or more interactive databases containing information about a jurisdiction's precincts, the election contests, the candidates, and the issues being decided. These election management systems can be used to design and generate various ballots. Election management systems also allow jurisdictions to program their vote casting and tallying equipment to properly assign each vote to the proper contest and candidate. These systems also can centrally tally and generate reports on election progress and results. Some election management systems offer more sophisticated capabilities, such as managing the absentee ballot process. For example, some systems have the capability to automate the massive ballot mailings and recording of returns and support barcoding and imaging for ballot application signature verification. To describe elections in the United States, we reviewed reports by FEC and others, including the reports of the various national and state election reform commissions as they were completed. To obtain examples of the various stages of an election and any associated challenges, we had to get information from the level of government responsible for administering elections, that is, from the local election jurisdictions, which in most states involved counties. To get this information about the November 2000 election, we used a mail survey that is generalizable to 90 percent of the U.S. population, and a telephone survey that is generalizable nationwide. We also interviewed local election officials. To describe selected statutory requirements in the 50 states and the District of Columbia for voter registration, absentee and provisional balloting, and recounts, we reviewed state and D.C. statutes. We also conducted a survey of D.C. and state election directors, and reviewed information from the National Conference of State Legislatures on state election requirements and recent amendments to those requirements. To identify the types of voting methods used on November 7, 2000, and the distribution of these methods among local election jurisdictions and their precincts, we used several sources of information, including two databases—one for counties and one for subcounty minor civil divisions (MCDs) in the New England states—from Election Data Services, Inc., a private company that collects election-related data from state and local jurisdictions. We then used several methods to validate the data in the databases. We also checked state Web sites, such as those of the Secretaries of State, and compared any data on voting methods from these sources to those in Election Data Services, Inc.'s database for the respective states.
To assess the characteristics of different types of voting equipment, we reviewed available studies, interviewed voting equipment vendors, reviewed vendor documentation on their equipment, used data from our mail survey of local election jurisdictions and data from our survey of state election directors, and interviewed election officials from our 27 judgmentally selected local election jurisdictions. Two of these jurisdictions had recently used new voting equipment in the November 2000 election, and one had purchased new equipment for delivery in 2001. To identify new voting equipment, we surveyed vendors and reviewed vendor publications, attended vendor marketing events and conferences, and researched periodicals and vendor Web sites. To estimate the potential cost of replacing existing voting equipment in the United States, we developed data on the distribution of voting equipment in the United States—among the states, counties within the states, and precincts within each county. For the cost of purchasing optical scan or DRE equipment, we used data obtained from voting equipment vendors. Our estimates generally include only the cost to purchase the equipment and do not contain software costs associated with the equipment to support a specific election and to perform related election management functions, which generally varied by the size of the jurisdiction that purchased the equipment. Because of the wide variation in the ways jurisdictions handle operation and maintenance (e.g., in-house or by a contract), our estimates do not include operations and maintenance costs. The cost of software and other items could substantially increase the actual cost of purchasing new voting equipment. To identify and describe issues associated with the use of the Internet for vote casting and tabulation, we interviewed vendors, reviewed vendor publications, attended vendor marketing events, and researched periodicals and vendor Web sites. We did not independently validate vendor-provided information. To identify Internet voting options and issues, we reviewed relevant recent studies, researched publications and material, and assessed preliminary Internet voting pilot reports. We also interviewed recognized experts from various institutions—academia, professional associations, and voting industry—that are familiar with issues surrounding Internet voting. In addition, we interviewed Internet voting equipment vendors that were involved in conducting these Internet voting pilots. We did our work between March 2001 and September 2001 in Washington, D.C.; Atlanta; Los Angeles; Dallas; Norfolk; San Francisco; and 27 local election jurisdictions in accordance with generally accepted government auditing standards. Appendix I contains additional detail on our objectives, scope, and methodology. The November 2000 election resulted in widespread concerns about voter registration in the United States. Headlines and reports have questioned the mechanics and effectiveness of voter registration by highlighting accounts of individuals who thought they were registered being turned away from polling places on election day, the fraudulent use of the names of dead people to cast additional votes, and jurisdictions incorrectly removing the names of eligible voters from voter registration lists. For purposes of this report, voter registration includes the processes, people, and technology involved in registering eligible voters and in compiling and maintaining accurate and complete voter registration lists. 
List maintenance is performed by election officials and consists of updating registrants' information and deleting the names of registrants who are no longer eligible to vote. This chapter discusses (1) state requirements to vote, (2) applying to register to vote, (3) compiling voter registration lists, and (4) voter registration list maintenance.
Voter Eligibility Requirements Varied From State to State
Registration Was a Prerequisite to Vote in All States but One
Although the federal government has enacted legislation that affects registration procedures, registering to vote is not a federal requirement. Instead, registration is one of several potential requirements, in addition to citizenship, age, and residency, that states may require citizens to meet to be eligible to vote. Although voter eligibility requirements varied from state to state, registration was a prerequisite to vote in nearly all jurisdictions in the United States. However, because of differences in state voter eligibility requirements, citizens with the same qualifications were eligible to vote in some states but not in others. The 50 states and the District of Columbia are empowered by the U.S. Constitution to establish voter eligibility requirements within their jurisdictions. At a minimum, every state and the District of Columbia required that a voter be at least 18 years of age, a U.S. citizen, and a resident of the state or the District. In addition, most states limited voter eligibility on the basis of criminal status and mental competency, although the specifics of these limitations varied. Based on our review of information developed by the Justice Department, 48 states and the District of Columbia prohibited individuals from voting while incarcerated for a felony conviction but varied in their provisions for restoring voting rights after the incarceration period. Thirty-eight states and the District of Columbia provided for automatic restoration of voting rights. In 12 of these states and the District of Columbia, restoration occurred after the individual's release from incarceration. In the other 26 states, restoration occurred after the individual completed his or her sentence, including any term of probation or parole. Ten states did not provide for automatic restoration of voting rights. In these states, individuals could seek restoration of voting rights through pardon procedures established by the state (e.g., gubernatorial pardons). In a few states, individuals convicted of specific offenses permanently lost the right to vote. Maryland, Missouri, and Tennessee permanently disenfranchised those convicted of certain voting-related crimes, such as buying or selling votes. Tennessee also permanently disenfranchised those convicted of treason, rape, or murder. In Delaware, individuals convicted of murder, manslaughter, felony sexual offenses, or certain public corruption offenses permanently lost the right to vote. The majority of states and the District of Columbia also prohibited individuals who were mentally incompetent from voting. Nearly all of these states and the District of Columbia required a judicial determination of incompetence to disqualify a citizen from voting. For example, in Texas, those who were judged by a court to be mentally incompetent were ineligible to vote. In Oklahoma, individuals judged to be incapacitated could not vote, and those judged to be partially incapacitated also could not vote, if so stated in the court order.
A few states, such as Delaware, did not require a judicial determination of incompetence, but simply disqualified individuals who were mentally incompetent from voting. Registration was a prerequisite to vote in nearly all jurisdictions. In the United States, citizens were responsible for applying to register to vote. For the November 2000 election, FEC reported that nearly 168 million people, or about 82 percent of the voting age population, were registered to vote. All states except North Dakota required citizens to apply to register and be registered with the appropriate local election official before they could vote in an election. Because of North Dakota's rural character, voting in its 53 counties occurred in numerous relatively small precincts, which are the areas covered by a polling place. According to North Dakota officials, the establishment of small precincts was intended to ensure that election boards knew the voters who came to the polls and could easily determine if an individual should not be voting in the precinct. In the November 2000 election, North Dakota voters in 696 precincts cast 292,249 ballots, representing about 62 percent of the voting age population.

Citizens Could Apply to Register to Vote in Many Ways

Citizens Learned About the Registration Process Through Different Means

Officials Faced Challenges in Processing Applications

Officials Had Concerns About Applications Submitted at Motor Vehicle Authorities

Registering to vote appeared to be a simple step in the election system: generally, a qualified citizen provided basic personal information, such as name and address, to an election official and was able to vote in all subsequent elections. But applying to register and being registered were not synonymous. A citizen became a registered voter only after his or her application was received, processed, and confirmed by an election official. We found that citizens could apply to register to vote and could learn about the registration process in numerous ways, and that election officials faced challenges in processing these applications, especially applications received from motor vehicle authorities. Citizens had numerous opportunities to apply to register to vote. Figure 12 shows several of these opportunities, such as applying at a local election office or at a motor vehicle authority, or obtaining and mailing an application to a local election official. These and other examples of how citizens were able to apply to register are illustrated by the situations we found in our visits to local election jurisdictions (cities, counties, and townships). In most of the jurisdictions we visited, individuals were able to apply in person to register at (1) their local election office, (2) a motor vehicle authority, and (3) various other agencies, such as public assistance agencies, or via voter registration drives run by political parties or other organizations.

Applying Through Local Election Offices

To apply at a local election office, individuals completed an accepted state registration application and submitted it to their local election official. Some local election officials we visited also provided registration services outside of their offices, such as at schools or other community events. For example, officials at some jurisdictions told us they visited high schools to provide eligible students with voter education, registration forms, and assistance.
Officials in some jurisdictions said they held registration events at local malls, county open houses, libraries, county fairs, and other community programs. In one medium-sized jurisdiction, 600 deputy registrars were trained to register citizens at various events and within their communities and civic organizations. Finally, citizens in one large jurisdiction we visited were able to apply to register at a mobile voter registration van (shown in figure 13).

Applying at a Motor Vehicle Authority

In most states, citizens could apply to register to vote at a motor vehicle authority under NVRA, which is widely known as the Motor Voter Act. There were variations in how NVRA was implemented and how citizens were able to apply to register at motor vehicle authorities in the jurisdictions we visited. National data from FEC and the Census Bureau indicated that the use of motor voter programs increased over the past 4 years. In states covered by NVRA, applications received through motor vehicle authorities rose from 33 percent of all registration applications in 1995 through 1996 to 38 percent in 1999 through 2000. Similarly, we estimate that at least one-third of people who registered in 2000 reported doing so when obtaining or renewing a driver's license, up from 1996 levels. The jurisdictions we visited varied in their implementation of motor voter programs. In many of these jurisdictions, election officials told us that motor vehicle authority staff were to offer to assist individuals who were obtaining or renewing a driver's license or other form of identification in applying to register to vote. In other jurisdictions, we were told that the voter registration assistance provided by the motor vehicle authority consisted of making voter registration applications available on a table. However, in one small jurisdiction we visited, an election office employee was available at the motor vehicle authority to provide individuals with registration information and assistance. The procedure for applying to register to vote at motor vehicle authorities also varied across the jurisdictions we visited. For example, in some jurisdictions, a citizen applied to register by completing a voter registration section of the driver's license application. In others, we were told that the voter registration application was printed using information from the motor vehicle authority database and was provided to the applicant for verification, confirmation of citizenship, and signature. Two jurisdictions in the same state provided voter registration terminals at motor vehicle authorities where applicants could complete their voter registration form and obtain a copy of the transaction.

Applying at Other Agencies and Locations

Finally, citizens could apply in person to register to vote at several state agencies and locations, or through other organizations. NVRA requires states to provide citizens with the opportunity to apply to register at public assistance agencies; state-funded disability service offices; armed forces recruitment offices; and state-designated agencies, such as public libraries, public schools, or marriage license bureaus. The number of voter registration applications submitted at NVRA-designated agencies decreased during the past 4 years.
According to FEC, from 1999 through 2000, voter registration applications received at these agencies and locations accounted for less than 8 percent of the total, a decrease from 1995 through 1996, when 11 percent of applications had been submitted at these agencies. In a very large jurisdiction we visited, local election officials reported a substantial decline in the number of registration applications received from social service agencies, from 24,878 applications in 1996 to 1,309 in 2000. Officials in that jurisdiction noted that "when the program was initially instituted, there was widespread interest both from potential voters as well as from agency personnel." The officials suggested possible reasons for the decline in applications, including that the majority of social service clients were repeat clients, and thus already registered, or that some clients were no longer using social services because they had been placed in jobs. Citizens could also apply to register to vote in person through other organizations. We estimate that in November 2000, at least 16 percent of respondents completed an application at a registration drive, which included political rallies, someone coming to their door, or registration drives at a mall, market, fair, or public library. Officials in some jurisdictions we visited noted that political parties were a major source of voter registration applications in their jurisdiction. In addition to applying to register in person, citizens could apply by obtaining, completing, and mailing a voter registration application to the appropriate election official. According to FEC, during 1999-2000, 31 percent of total registration applications submitted in the states covered by NVRA were submitted by mail. In the jurisdictions we visited, we found a variety of ways for citizens to obtain applications and multiple forms for citizens to use.

Sources for Voter Registration Applications

Within most jurisdictions we visited, registration applications generally were available at many places, including state and local election offices, public libraries, post offices, and schools. In one very large jurisdiction, registration applications were available at over 1,200 locations. Other jurisdictions we visited included registration information and applications in the local telephone book or in state tax packets. Some states and jurisdictions provided citizens the opportunity to download or request registration application forms over the Internet. Many of the states and jurisdictions we visited included on their Web sites registration application forms that could be downloaded and used for registering, while others included a form for requesting a registration application. Still others allowed citizens to complete and electronically submit an application form on the state's Web site. The state election office then mailed the completed form to the applicant, who signed it and mailed it back to the office. The applicant was not officially registered until election officials accepted the signed form. In November 2000, U.S. citizens could use over 50 different forms to apply to register to vote. For example, some states used more than one form, having a standard state application as well as a separate form for NVRA-designated agencies. In addition, citizens could apply to register using the National Mail Voter Registration Form and the Federal Post Card Application (FPCA).
The National Mail Voter Registration Form was developed by FEC to allow citizens to register to vote from anywhere in the United States. NVRA required states to accept and use the National Mail Voter Registration Form in addition to their own state application form. According to FEC, as of June 2001, 26 states accepted paper reproductions of the form. U.S. citizens serving with the military or working overseas, and their dependents, were allowed to register to vote by mail using the FPCA (shown in figure 14). This form allowed an applicant to simultaneously register to vote and request an absentee ballot. In some states, those who used the FPCA were not placed on the state's permanent registration list. Instead, their registrations were valid for only 1 year, after which they were required to reregister in order to be eligible to vote. We found variation in the application forms available to apply to register to vote. At the jurisdictions we visited, the most common information requested on applications was full name, address, and signature. Most jurisdictions also requested date of birth, while others requested social security number, gender, race, and/or place of birth. Some registration applications requested more or less information from an applicant than was required to register to vote within the particular jurisdiction. On some forms, information not required to register to vote was clearly indicated as optional; on other forms it was not. As a result, a completed application might be accepted in some states but not in others. Examples of differences in the applications included the following:

According to FEC, as of June 2001, seven states required applicants to provide their full social security number, and two required the last four digits of the number. Twenty others only requested that applicants provide the number (17 the full number and 3 the last four digits). The National Mail Voter Registration Form did not provide a specific space for applicants to provide their social security number, but the FPCA did.

The application forms in several of the jurisdictions we visited requested that the applicant provide more information than was required to register, such as gender and telephone number. Application forms in some of these jurisdictions stated that identifying gender or providing a telephone number was optional; others did not. The FPCA had spaces for applicants to indicate their gender, but not their telephone number. The National Mail Voter Registration Form did not include a space for applicants to provide gender, and indicated that providing a telephone number was optional.

The application forms for some states and jurisdictions asked applicants to identify their race or ethnic group and their place of birth. Both the FPCA and the National Mail Voter Registration Form had spaces for an applicant to identify race, but neither form had a space to indicate place of birth.

Figures 15 and 16 show voter registration forms from jurisdictions we visited. Informing citizens about the registration process was important, given the various ways people could apply to register, the numerous forms they could complete, and the different information required to complete the applications. On the basis of our mail survey, we estimated that 14 percent (plus or minus 4 percent) of jurisdictions nationwide actively sought comments or suggestions from voters about voter registration. The jurisdictions we visited differed in the emphasis they placed on voter education.
Officials at some jurisdictions told us they offered little in the way of registration education. A few jurisdictions said that they relied on external organizations, such as the League of Women Voters and/or political groups, to educate voters. However, most of the jurisdictions we visited educated voters about registration in a variety of ways. Many of the jurisdictions we visited printed registration deadlines, locations, and procedures in at least one newspaper. Some used television and others used radio to publicize registration information. In some states and jurisdictions we visited, Web sites offered voter registration information, including deadlines, qualifications to register, and where to submit an application. Some of these jurisdictions offered interactive Web sites where individuals could determine their registration status and locate their voting precinct. Other registration education efforts included mailing each household a voter guide with registration information; speaking to civic groups, churches, unions, high schools, and other organizations; providing handouts and registration applications at naturalization ceremonies; and distributing flyers and newsletters. The results of our nationwide surveys and meetings with election officials indicated that election officials faced challenges, such as implementing state requirements, handling applicant errors, and coordinating with multiple agencies, in processing applications. Local election officials described how they processed applications, including (1) receiving applications, (2) obtaining information from registrants who submitted incomplete applications, (3) verifying information on the application, and (4) confirming registration status. Citizens were required to submit registration applications to local election officials by certain deadlines, specified by state statutes, to be eligible to vote in an upcoming election. These deadlines varied, allowing citizens in different states different amounts of time to submit applications. Local election officials expressed concerns about processing applications in the allotted time before election day and varied in how they handled late applications. In 30 states, registration applications were to be received by the local election office about 1 month before the election. Six states (Idaho, Maine, Minnesota, New Hampshire, Wisconsin, and Wyoming) allowed same-day registration, under which residents could register to vote on election day. In Maine, citizens registering on election day were to do so at the voter registrar's office or the board of elections instead of at the polls, as in the five other states that allowed same-day registration. Figure 17 shows the registration deadlines across the United States, and appendix IV contains information about these deadlines. Deadlines closer to election day, or on election day itself, gave citizens more time to apply to register. However, some local election officials expressed concerns about not having enough time to process applications if deadlines for their submission were shortened or eliminated. California recently passed legislation that shortened its registration deadline from 29 days before an election to 15 days.
A local election official in a very large jurisdiction in California said that processing the registration applications, sending out the sample ballots, and processing registrants' absentee ballot requests within 15 days, instead of 29 days, would be "impossible for a major election." A few local election officials raised concerns about the possibility of voter fraud, as there might not be time to verify an applicant's eligibility. All of the states that allowed same-day registration required citizens to sign a registration oath or to show some proof of identification or residency when applying to register. For example, Minnesota allowed citizens to register on election day by completing the registration card under oath and by providing proof of residence, such as a Minnesota driver's license. However, one local election official from a state that allowed same-day registration said that she "didn't believe same-day voter registration should be allowed as there is little regulation, nor proper time to verify voters." The official noted that in the last election they averaged one registration a minute. In contrast, officials in another jurisdiction that allowed same-day registration said that they did not have concerns about fraud, nor did they have concerns about verifying applications on election day. In those states that had registration deadlines, local election officials in jurisdictions we visited differed in how they dealt with applications received after the deadline. In some jurisdictions, registrants were informed via mail that their application was received late and that they were not eligible to vote in the upcoming election. Officials in one large jurisdiction said that applications were officially accepted for 5 working days after the close of the registration period if the date on the form was before the 30-day deadline. However, they said that in practice they accepted registration applications at any time before the day of the election. Local election officials we visited reviewed applications for completeness. However, they varied in how they processed applications missing any of the required or requested information. The variations included how strict officials were in accepting applications with missing information and how they attempted to obtain missing information. In addition, even within the same jurisdiction, applicants who submitted different types of forms lacking the same piece of information were treated differently. At one medium-sized jurisdiction we visited, election officials said that if someone applying in person refused to provide his or her birth date, he or she was registered if "it was clear" the individual was 18 or older. Officials at some other jurisdictions said they called (if a phone number was provided) or sent written notification to the applicant to get the missing information. For example, in one large jurisdiction, officials told us that if there was not enough time for the applicant to provide the birth date before the registration deadline, they registered him or her anyway and tried to get the information at the polling precinct. The official at one small jurisdiction said that when a birth date was missing from the application, she registered the applicant and entered the birth date as January 1, 1850. She told us that people were usually more than willing to correct that date at the polls.
Differences in Processing Applications Within the Same Jurisdiction

Even within the same jurisdiction, there were differences in how applications missing the same piece of information were treated. Officials at these jurisdictions told us these differences were the result of accepting different types of application forms for registration. For example, in one large jurisdiction we visited where the last four digits of the social security number were required by the state, applicants who did not provide the information were treated differently, depending on the form they used to apply. Officials at that jurisdiction told us that some motor vehicle authorities were still using an old voter registration form that did not request the social security information. In order not to disadvantage these applicants, they were registered without having to provide the information and were able to vote in the November 2000 election. Other applicants in the same jurisdiction downloaded and used the National Mail Voter Registration Form from the Internet. That form also did not ask for the social security number, although the state-specific directions for the form noted that the information was required and instructed applicants to provide it. Notices were sent to any applicants who used the National Mail Voter Registration Form and did not provide the social security information. Unless they reapplied with the social security information, they were not registered or allowed to vote in the November 2000 election. In another very large jurisdiction, election officials told us that the standard state voter registration form asked for information on place of birth and that applicants who mailed the standard state form but did not provide their place of birth were put in a "pending" status and were notified by mail that they would not be registered until the information was provided. However, when applicants used the National Mail Voter Registration Form or the FPCA, which did not request the applicant's place of birth, the officials told us they registered the applicant and then tried to obtain the information by sending the registrant a letter requesting the place of birth. At one medium-sized jurisdiction we visited, the officials told us that if an applicant registered in person, he or she had to use a state form and present identification, but if the same applicant registered by mail, the National Mail Voter Registration Form could be used and no identification was required. When jurisdictions received completed applications, the degree to which they verified the information on the forms, to ensure that the applicant was truly eligible to vote based on statutory requirements, varied. Some local officials in jurisdictions we visited said they considered the registration application process to be an honor system and simply relied on the applicant to tell the truth. All registration applications in the jurisdictions we visited required applicants to sign an oath declaring that they were citizens and eligible to vote. In other cases, an applicant may have had to present identification at the time of application. Officials at one very large jurisdiction told us they verified application information for a random 1 percent of all applicants. A form letter and a copy of the registration application were mailed to these applicants, who were asked to complete and return the form as verification of the application.
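The form-dependent handling described above can be thought of as a lookup from form type to required fields. The sketch below is illustrative only: the form names, field lists, and "pending" status are assumptions patterned on the examples officials described, not any jurisdiction's actual rules.

```python
# Illustrative only: required fields vary by the form used to apply,
# so the same missing item can yield different outcomes.
REQUIRED_FIELDS = {
    # Hypothetical rule sets patterned on the examples above.
    "state_form": {"name", "address", "signature", "place_of_birth"},
    "national_mail_form": {"name", "address", "signature"},
    "fpca": {"name", "address", "signature"},
}

def process_application(form_type: str, provided: set) -> str:
    required = REQUIRED_FIELDS[form_type]
    missing = required - provided
    if not missing:
        return "registered"
    # Patterned on one jurisdiction's practice: the state form goes to
    # "pending" until the information arrives; other accepted forms are
    # registered, and the office follows up by letter.
    if form_type == "state_form":
        return f"pending: missing {sorted(missing)}"
    return "registered (follow-up letter sent)"

fields = {"name", "address", "signature"}
print(process_application("state_form", fields))          # pending: missing ['place_of_birth']
print(process_application("national_mail_form", fields))  # registered (follow-up letter sent)
```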
We found varying degrees of checks on citizenship, residency, and multiple registrations to ensure that the applicant was qualified to register. On the basis of our telephone survey, we estimate that 34 percent (plus or minus 11 percent) of jurisdictions nationwide checked for U.S. citizenship to determine initial and/or continued eligibility for voter registration. Some election officials said that they checked that the affirmation on the application was signed or that the applicant had marked the box on the application indicating that he or she was a citizen. Other election officials told us they used jury lists to compare with voter registration records, since some people identified themselves as noncitizens as a reason for declining to perform jury duty. However, some local election officials we met with indicated that they had no way to verify that an applicant was indeed a citizen. On the basis of our telephone survey, we also estimate that nearly all (96 percent) jurisdictions nationwide checked whether an individual's address was outside of their jurisdiction to determine eligibility for voter registration. Some local election officials we visited used street maps or city planning files to confirm whether an address was a valid location within their jurisdiction. Others said that they used information such as property tax appraisal and building permit files to verify addresses within their jurisdictions. As one election official told us:

"You can ask any county clerk in the state and they will tell you that the biggest problem is motor voter. Residents can register at the welfare office, the health department, the motor vehicle authorities, and they do, time and again. This results in tons of registrations which are costly and time-consuming to sort through and check against records."

We further estimate that nearly all (99 percent) jurisdictions nationwide checked whether an individual was already registered within their jurisdiction to determine eligibility for voter registration. Jurisdictions we visited varied in the processes they used to check for multiple registrants. For example, in a medium-sized jurisdiction, we were told that the state provided the election officials with a report identifying possible duplicate registrants. The officials investigated these and canceled any they found to be duplicates. In many jurisdictions we visited, however, officials checked new registration applications against records of registered citizens. Officials in several jurisdictions noted that names alone were not a sufficient identification source. For example, after the November 2000 election, the Illinois State Board of Elections completed a brief analysis of multiple registrations by looking at voter registration records submitted by local election officials in all but 2 counties in the state. Using data collected between December 15, 2000, and February 28, 2001, the study found that of 7,197,838 voters registered in Illinois, 143,080, or 2 percent, were multiple instances of the same voter.
The study also found that there were 283 people registered as "Maria Rodriguez" in Chicago and 159 as "Jose Hernandez." There were also 919 "Robert Smiths" registered in Illinois. The study noted that "additional criteria are needed to differentiate these voters, as they are obviously not all multiple registrations of the same person." According to some local election officials, using social security numbers to identify registered voters helped to avoid multiple registrations of the same person. One small jurisdiction we visited used the first three letters of the last name and date of birth to identify any applicants who might already be registered. As one local election official recounted:

"…We were even on 60 Minutes in 1998 with our 16,000 fraudulent voter registrations….However, we did track those. We did not have a single one of those people vote."

After accepting a registration application, election officials informed the applicant that he or she had been registered. In all of the jurisdictions that we visited, officials informed citizens that they had been registered by mailing a voter registration card or letter (an example of which is shown in figure 18). Registration confirmation was also an important step in the verification process. Local election officials told us that registration confirmations were mailed as nonforwardable mail and thus also served as a check on whether the registrant actually lived at the address provided. In addition, the confirmation allowed registrants to review and correct any information about their registration status before election day. Jurisdictions varied in how they confirmed individuals' registration status close to the date of elections. A few local election officials said that closer to election day they might not have been sufficiently staffed to confirm all applicants' registrations. NVRA expanded the opportunities for citizens to apply for registration to include submitting applications at motor vehicle authorities, and in the recent election cycle, such applications increased. Local election officials around the country expressed concerns about processing applications submitted at motor vehicle authorities. At most of the jurisdictions we visited, applications submitted by citizens at motor vehicle authorities were hand delivered, mailed, or electronically transmitted to a state or local election office. On the basis of our telephone survey, we estimate that 46 percent of jurisdictions nationwide had problems, in general, with NVRA registrations during the November 2000 election. Officials most frequently noted challenges with processing incomplete or illegible applications, applications that arrived late at the local election office, and applications that never arrived. According to local election officials, each of these three situations could result in individuals showing up at the polls to vote and discovering that they were never registered. Local election officials offered suggestions to address these problems, such as using technology, expanding voter education, and increasing training at motor vehicle authorities.
Local election officials at the jurisdictions we visited described instances in which they received applications from the state motor vehicle authorities that had incomplete or incorrect addresses; were missing signatures; were missing required information, such as date of birth or social security number; or had signatures that were illegible or did not match the typed name on the application. In particular, one challenge that local election officials noted involved state statutory requirements for an original signature on the registration application. Local election officials in jurisdictions that received applications via electronic transmission also had to receive a separate paper application that contained the applicant's original signature. Officials in a large jurisdiction we visited noted problems because the mailed signature cards did not arrive at the same time as the electronically submitted applications and, in some instances, took up to 3 months to arrive. Processing late applications submitted at motor vehicle authorities was a challenge in some of the jurisdictions that we visited. In one medium-sized jurisdiction, applications dated in July were received at the election office with October transmittal dates from the motor vehicle authority. For the November 2000 election, to speed up the process of mailing applications, one large jurisdiction arranged to send elections staff to the offices of the motor vehicle authority on the last day citizens could apply to register, to pick up the applications and deliver them directly to the county elections office. When election offices failed to receive applications, citizens could show up to vote on election day to find that they were not registered. Local election officials we met with described the following accounts of citizens not included on registration lists showing up at polling precincts on election day claiming that they had registered to vote at a motor vehicle authority. In one very large jurisdiction we visited, between September 15, 2000, and November 28, 2000, a total of 688 calls were received from potential voters who claimed they had either registered or changed their address through the motor vehicle authority. Upon investigation of these cases, 39 percent of the callers needed to either register or reregister at their current address. In one medium-sized jurisdiction, 22 percent of citizens who were not on registration lists, but who claimed that they had registered, said they did so at a motor vehicle authority. However, the local election official believed that most of these citizens were not registered to vote. Election officials suggested ways to address the occurrence of a citizen showing up at the polls on election day after incorrectly assuming that he or she had registered to vote at a motor vehicle authority. These fixes included implementing technology options, such as electronically submitting applications, increasing voter education efforts, and providing training opportunities for motor vehicle authority employees. Local election officials relied on available technology and suggested changes to current systems they believed could address problems with registration applications. Some local election officials suggested that voter registration information be transmitted electronically to election offices.
Officials in two small jurisdictions in the same state described how registration information was sent electronically from the motor vehicle authority to the statewide voter registration system, which then sent the information to the jurisdiction in which the applicant wished to be registered. In addition, local election officials in a medium-sized jurisdiction said they would like to redesign the application used to apply at motor vehicle authorities to allow a user to input registration information into a computer and have an application print out for the applicant to sign and submit. However, electronic transmission of registration applications in states that required an original signature on an application would still require that a paper copy be transferred to local election officials. Increased public education may reduce the number of people who come to vote on election day believing they are registered when they are not. The public should be educated about the importance of receiving the confirmation card in the mail after registering and the importance of saving the receipt given to voters who register at the motor vehicle authority until the confirmation card is received. As one local election official explained:

"The biggest problem is that voters are not educated on motor voter procedures. New voters misunderstand that a driver license card is not a voter registration card… that they are applying to register to vote, not actually registering to vote… Motor voter has helped registration activities in the latest election because it has provided a steadier stream of new voters. But, the enactment of motor voter makes it easier for applicants to place the blame for registration problems on others instead of themselves."

Training Opportunities for Motor Vehicle Authority Employees

As a result of NVRA, election officials were to share some of the responsibility of administering voter registration with motor vehicle authorities, whose primary purpose is unrelated to election administration. Some local election officials felt that, as a result, the registration process was more difficult to manage and that motor vehicle authority staff had too much responsibility for registering voters. Others we surveyed and met with agreed that for motor voter programs to function successfully, motor vehicle authority staff needed to be trained about registering voters. In one very large jurisdiction we visited, local election officials coordinated with motor vehicle staff to provide training sessions and information about registering voters. In one small jurisdiction, a local election official was situated in the lobby of the motor vehicle authority. The election official provided voter registration services to reduce the number of citizens who mistakenly believed that they had registered and to reduce the number of applications denied due to missing or incomplete information.

Lists Had Multiple Uses and Helped Ensure That Only Qualified Voters Voted

Officials Used Different Methods, Providing Varying Capabilities, to Compile Lists

Election officials compiled confirmed registration applications into lists of registered voters for use throughout the election process. Officials used different technologies and systems to compile the lists, and each system had different capabilities and limitations. Election officials used lists of registered voters for several purposes. A citizen's access to voting was based primarily on the appearance of his or her name on such a list.
For example, for both absentee and election day voting, election officials typically verified an individual's eligibility using a list of registered voters or a poll book before allowing him or her to vote. In some jurisdictions, officials also used registration lists to define who in the jurisdiction received election-related information, such as sample ballots or voter information guides. The registration lists also provided election officials with a basis for determining the quantity of supplies, such as ballots and voting machines, and the number of personnel needed on election day. States and local election jurisdictions used different systems to compile registration applications into a list of registered voters. Some officials compiled voter registration lists manually or, as most did, through an automated system. All of the local election jurisdictions we visited used automated systems to compile registration lists. Some jurisdictions used a local computerized system for maintaining registration lists, and others were linked to a statewide automated voter registration system. The various systems provided different capabilities, such as those for processing applicants' signatures, generating reports and notifications for registrants, and sharing information with other jurisdictions. Many of the local election jurisdictions we visited used local automated voter registration systems. Local election officials told us that, in comparison to manual systems, their automated systems saved time and effort by allowing them to more easily perform a number of routine tasks. Some jurisdictions operated their own local voter registration system, and others shared a jurisdiction-wide system with other government offices in the jurisdiction. On the basis of our telephone survey, we estimate that 61 percent of jurisdictions nationwide had their own computerized voter registration system. Local election officials we visited noted that their systems allowed them to retain possession and control of their voter registration lists at all times and to perform several functions, such as checking for duplicate registrations within their jurisdiction, updating registration records, generating forms and letters to send to registrants, and compiling and producing reports. Some automated systems provided additional capabilities and features. Several local election jurisdictions used systems that scanned an applicant's signature from the application into the voter registration system. The automated system used by one very large jurisdiction interfaced with the jurisdiction's system for election tallying and with geographic street reference files, which were used for assigning registrants to a precinct. Some jurisdictions used an automated system that was part of the central computer system that ran applications in support of other county functions. Officials at one medium-sized jurisdiction told us that with their automated system they could perform all of the routine election-related tasks. However, jurisdictions that shared a county system could encounter problems stemming from the capacity limits of the county's servers and the need for extra security to maintain the integrity of the election-related functions of the system. We visited one medium-sized jurisdiction that was in the process of implementing its own voter registration system.
A local election official in that jurisdiction said that they were "being kicked off the county's system" because their computer needs had outgrown the system.

Sharing Information With States and Other Jurisdictions

On the basis of our telephone survey, we estimate that 75 percent of jurisdictions nationwide used or shared information with a statewide computerized voter registration system. Of the jurisdictions we visited that had automated systems, many shared registration information with the state election office. Some shared information electronically, providing registration lists to the state periodically. For example, one medium-sized jurisdiction we visited provided the state a computerized file of its registration list every 6 months. Some local election officials in the jurisdictions that we visited noted that there were limitations in their capacity to share information on a real-time basis. Officials in one medium-sized jurisdiction said that while they provided the state a computerized file of their registration list, the jurisdiction had no automated method for checking the registration list against those of other jurisdictions to identify potential duplicates. In May 2001, their state conducted a study of multiple registrations by matching computerized voter registration files using registrants' names and dates of birth. The study identified as many as 10 percent of the people on that jurisdiction's registration list who might also have been registered to vote in another jurisdiction in the state. In two very large jurisdictions in one state we visited, the state operated a statewide database that contained information provided by all of the state's jurisdictions, its motor vehicle authority, and its Bureau of Vital Statistics. The state system provided the jurisdictions with query capability. Local election officials said that, through queries, they could identify registrants on their list who might also be on the registration list of another jurisdiction in the state, who were officially reported to have died, or who had moved. However, officials there noted that the jurisdictions were not directly on-line with the system. We visited several jurisdictions that were linked to a statewide voter registration system. In most of these jurisdictions, states had provided software allowing on-line access to a central voter database. The local officials told us of a number of advantages the statewide system provided them. Specifically, they noted the reduced potential for duplicate registrations in the state and the ability to electronically receive applications submitted at motor vehicle authorities.

Reducing Multiple Registrations Within the State

In one state with a statewide voter registration system, we met with local officials who said that their system significantly reduced the potential for multiple registrations in the state. When a citizen reregistered in a new jurisdiction in the state, his or her registration was automatically cancelled in the former jurisdiction of residence. Local election officials in another state said their statewide system automatically flagged potential multiple registrations before transmitting applications to the appropriate local election official. These officials also noted that their statewide voter registration system was linked to the motor vehicle authority and flagged potential multiple registration applications submitted from that source.
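To make the matching and cancellation behavior concrete, here is a minimal sketch with invented names and records. It combines two ideas from the passages above: a match key built from a last-name prefix and date of birth (stronger than name alone, as the Illinois study suggested) and automatic cancellation of a prior registration when a citizen reregisters in another jurisdiction in the state. A real system would need much stronger identity matching.

```python
# A minimal sketch of statewide duplicate detection and
# cancel-on-reregistration; all names and records are invented.
class StatewideRegistry:
    def __init__(self):
        self.active = {}  # match_key -> jurisdiction of active registration

    @staticmethod
    def match_key(last_name: str, dob: str) -> tuple:
        # One small jurisdiction's approach: first three letters of the
        # last name plus date of birth. Name alone over-matches (the 283
        # "Maria Rodriguez" registrants in Chicago were not one person).
        return (last_name[:3].lower(), dob)

    def register(self, last_name: str, dob: str, jurisdiction: str) -> None:
        key = self.match_key(last_name, dob)
        previous = self.active.get(key)
        if previous and previous != jurisdiction:
            # Reregistration in a new jurisdiction automatically cancels
            # the registration in the former jurisdiction of residence.
            print(f"Cancelled in {previous}; now active in {jurisdiction}")
        self.active[key] = jurisdiction

registry = StatewideRegistry()
registry.register("Rodriguez", "1960-04-12", "County A")
registry.register("Rodriguez", "1960-04-12", "County B")  # cancels County A
```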
Coordination With Motor Vehicle Authorities

Some of the statewide systems in jurisdictions that we visited were linked to motor vehicle authorities. Such a linkage decreased the potential for losing application information in the process of transferring it from the application site to the local election office. Local election officials in one small jurisdiction told us the motor vehicle authority transmitted the application to the state office, which then transmitted the application to the jurisdiction in which the applicant lived. At another small jurisdiction, the officials told us that, for each application, the motor vehicle authority created a record in the state-operated voter registration database and the local election officials retrieved the application information that applied to their residents. On the basis of our telephone survey, we estimate that 74 percent of jurisdictions nationwide used information from local jurisdictions in other states to help maintain their registration lists. Some local election officials we visited told us that they shared voter registration information with other states and jurisdictions from time to time. For example, in a large jurisdiction we visited, of the 5,299 voters removed from the registration list in 2000, 1,571 were removed as a result of notifications from other states that the individuals had moved. Officials in the jurisdiction showed us notices from a Florida and a Utah jurisdiction informing them about voters who had recently moved and should be removed from their registration list. Some agreements to share information were established by neighboring states or jurisdictions. For example, a local election official in the District of Columbia told us that they were beginning to exchange voter registration lists with surrounding states, after having compared registration lists with several nearby counties in 1997. In contrast, states could also choose not to share information. For example, election officials in one state we visited were statutorily prohibited from providing voter registration lists to other states, since only candidates and certain other designated individuals were allowed to view lists of registered voters.

NVRA and State Election Codes Provided for Registration List Maintenance

Officials Relied on Information From Numerous Sources to Maintain Lists

Officials Had Varying Degrees of Confidence in Their Lists

Statewide Systems Provided Benefits, but Required Resources and Coordination to Develop and Maintain

In addition to processing new applications, election officials maintained lists of registered voters, which involved continually updating and deleting information from the registration list, using information from numerous sources to keep voter registration lists accurate and current. Election officials reported difficulties in obtaining accurate and timely information from these sources and expressed varying degrees of confidence in the accuracy and currency of their registration lists. Statewide voter registration systems offered the potential to assist election officials with establishing and maintaining registration lists. In passing NVRA, the federal government attempted to establish uniformity in certain list maintenance processes. NVRA required states to conduct a uniform and nondiscriminatory "general program" that makes a reasonable effort to remove ineligible voters from the list.
NVRA permitted removing the names of individuals upon written confirmation of a change of address outside the election jurisdiction; a change of address along with failure to respond to confirmation mailings and failure to vote in any election within two subsequent general federal elections; the request of the registrant; death; mental incapacity as provided for in state law; and criminal conviction as provided for in state law. One of the purposes of NVRA was to ensure that once an individual was registered to vote, he or she remained on the voting list as long as he or she remained eligible to vote in the same jurisdiction. NVRA's list maintenance provisions specifically prohibited removing a name from the voter registration list solely for failure to vote or for a change of address to another location within the same election jurisdiction. The state election codes for all 50 states and the District of Columbia specifically provided for registration list maintenance and required cancellation of registrations under certain circumstances. An examination of the state statutes cited in our nationwide survey of state election officials showed that "purge" or registration cancellation requirements varied from state to state but were primarily based upon change of residency, death, criminal conviction, and mental incapacity. Most of the states examined required in certain cases that registered voters be informed of changes made to their registration status. See appendix IV for selected statutory requirements for list maintenance for the 21 states we visited. Local election officials at the jurisdictions we visited used a number of sources of information and a variety of procedures to remove the names of registrants no longer eligible to vote. Local election officials used information obtained from these sources both to systematically verify the registration list and to conduct ongoing identification efforts aimed at removing ineligible registrants. However, officials noted difficulties with obtaining accurate and current information to perform list maintenance. Figure 19 shows an example of a list maintenance process and some of the numerous sources of information that local election officials could use to maintain accurate and current registration lists. Election officials used various means to systematically verify their registration lists and identify voters who were no longer eligible to be registered, either because they had moved or because they failed to respond to certain confirmation mailings. These means included mass mailings, comparing the entire voter registration list against information from the U.S. Postal Service National Change of Address (NCOA) program, and conducting door-to-door canvassing. Some of the jurisdictions we visited relied on mass mailings of nonforwardable election-related material to confirm registrants' eligibility. For example, officials in one large jurisdiction mailed a nonforwardable sample ballot to every registered voter before each election. If the ballot was returned as undeliverable, the officials sent forwardable mailings asking the registrant to confirm his or her address. Registrants who responded either remained on the registration list or, if their current address was outside the election jurisdiction, were removed from the registration list. Those who did not respond were designated inactive within the registration system.
Within NVRA provisions, an inactive registrant could be removed from the registration list if he or she had not voted during the period between the date of the required confirmation notice and the second general election for federal office occurring after the date of the notice. Some other jurisdictions we visited also conducted mass mailings using the same basic process. However, they used different mailing materials, such as voter registration confirmation cards or voter guides; conducted the mailings with different frequencies (e.g., every 2 years or every 5 years); and/or targeted the mailings to those registrants who had failed to vote in two federal elections. Mass mailings, because they typically included every registered voter on the list, were costly compared to other verification checks that targeted particular groups of registrants, such as those who had moved. Also, the results were incomplete, since many people who had moved did not confirm their change of address. According to FEC, from 1999 through 2000, local election officials mailed a nationwide total of 18,892,331 confirmation notices to persons who were reported to have moved outside the local election jurisdiction, and there was a 23-percent response rate to these notices.

U.S. Postal Service's National Change of Address Program

On the basis of our telephone survey, we estimate that 70 percent of jurisdictions nationwide used U.S. Postal Service information to help maintain accurate voter registration lists. Election officials matched the U.S. Postal Service's computerized NCOA files against their registration lists to identify registrants who had moved. Some officials we visited said they relied on private vendors to perform the match; others contracted with the U.S. Postal Service to compare voter files with postal records. The change of address program relied on registrants completing a change-of-address form to allow for the forwarding of mail. The NCOA files did not identify all people who moved, because some did not submit a change-of-address form, nor did the files capture information about other grounds for removal, such as deaths or criminal convictions. Some local election officials we visited expressed concerns that postal information did not always match information from their jurisdictions. Two of the jurisdictions we visited used their required annual census as a means of verifying their registration lists. In one small jurisdiction, registrants who did not respond to the town's annual census and had not voted in 2 years were placed in inactive status and notified of this change in status. If they remained inactive for another 2 years, they were removed from the registration rolls and notified of their removal. In another small jurisdiction, registrants were designated inactive if they did not respond to the town census and were removed from the rolls after no response to two subsequent confirmation letters. Election officials received information from a variety of sources to make individual changes to registration lists, including from state motor vehicle authorities, directly from registrants, and from a variety of other sources, such as county and state courts.
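The NVRA removal timeline described above (a nonrespondent to a confirmation notice may be removed only after failing to vote through the second subsequent general election for federal office) is essentially a date computation. The sketch below is an illustration under simplifying assumptions: it approximates federal general elections as a fixed early-November date in even-numbered years and ignores state-specific rules.

```python
from datetime import date

def federal_general_elections_after(notice: date):
    # Approximation: federal general elections occur in early November of
    # even-numbered years (the exact Tuesday is omitted for brevity).
    year = notice.year
    while True:
        election = date(year, 11, 7)  # placeholder early-November date
        if year % 2 == 0 and election > notice:
            yield election
        year += 1

def removable(notice_sent: date, responded: bool, vote_dates: list, today: date) -> bool:
    """Illustrative NVRA-style check: removable only if the registrant
    neither responded to the confirmation notice nor voted through the
    second general federal election after the notice."""
    if responded:
        return False
    elections = federal_general_elections_after(notice_sent)
    next(elections)            # first federal general election after notice
    second = next(elections)   # second federal general election after notice
    if today <= second:
        return False  # the waiting period has not yet elapsed
    return not any(notice_sent < d <= second for d in vote_dates)

# A notice mailed in mid-2001: the second federal general election after it
# falls in November 2004, so removal is possible only after that date.
print(removable(date(2001, 6, 15), False, [], date(2005, 1, 3)))                   # True
print(removable(date(2001, 6, 15), False, [date(2004, 11, 7)], date(2005, 1, 3)))  # False
```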
Officials at many of the jurisdictions we visited said they received information from motor vehicle authorities on changes registrants made to their voter registration information. On the basis of our telephone survey, we estimate that 64 percent of jurisdictions nationwide used information from motor vehicle authorities to help maintain accurate voter registration lists. Motor vehicle authorities conveyed information about changes to a registrant's information to election officials in a variety of ways, and some officials said timeliness was often a problem. We also estimate that 93 percent of jurisdictions nationwide used information received directly from registrants to help maintain their registration lists. Registrants could have their names removed from the list at their request. They could also request changes to their registration information, such as name or address. Some local election officials said that although registered voters were required to inform them of any change of address, registrants frequently failed to do so. The officials told us they believed registrants were not aware of this requirement and that the problem was escalating because of the increasing transience of the population. The mobility of the population created a challenge for local election officials in one very large jurisdiction we visited, where an estimated 15 to 20 percent of the jurisdiction's population moved each year. Officials used a variety of other sources to identify registrants made ineligible by death, criminal conviction, or mental incompetence. Local election officials obtained information about the deaths of registrants from sources such as state and county departments of health or vital statistics, the state election office, and newspaper obituaries. Most of the officials with whom we met said they received lists of death notices from their state's department of health and removed those listed from their registration lists. Officials in some jurisdictions complained that this process was not always timely. Some said they had not received a death listing for several months; others said it was sometimes more than 1 year. Some officials also reviewed newspaper obituaries and used them as a basis for removing registrants from their registration lists. In three small jurisdictions we visited, the local election official was also responsible for issuing death certificates, as the local election official was the clerk of the jurisdiction. Officials in some jurisdictions expressed concern that they often did not find out about registrants who died in other states. In some jurisdictions we visited, registrants were removed from the registration lists on the basis of a death notification from a family member. In others, the individual reporting the death of a registrant had to provide a copy of the death certificate for the name to be removed from the list. Officials from most of the jurisdictions we visited said that they relied on information from the court system to identify convicted felons. However, some of those officials also said that the court system did not always notify them of criminal convictions or releases.
For example, in one large jurisdiction we visited, officials said that they received no information on convictions from the court system. Officials in some jurisdictions said they occasionally received information on convicted felons within their jurisdiction, but timeliness was often an issue. For example, officials in one large jurisdiction said they had not received any information on felony convictions in over a year. Some of the jurisdictions we visited received no information on felons convicted outside of their counties or states. If the court system provided information about criminal convictions, local election officials in some states had to interpret the information and spend time and effort researching a particular individual's case to determine whether voting rights had been restored. For example, in Delaware, those convicted of certain offenses, such as murder, manslaughter, felony sexual offenses, or certain public corruption offenses, may not have voting rights restored. Any other person who is disqualified as a voter may vote 5 years after expiration of sentence, including probation or parole, or upon being pardoned, whichever occurs first. Thus, election officials in Delaware would need to investigate a particular individual's offense and sentence to determine whether he or she was eligible to vote. Officials at some of the jurisdictions we visited said they did not routinely receive information from the courts on persons who, as a result of mental incompetence, were no longer eligible to vote. Officials in one large jurisdiction in a statewide system said that the election office did not normally receive information about mental incompetence. Officials in a few jurisdictions said that the only information on mental incompetence was the affidavit the voter signed on the registration form affirming he or she was not mentally incompetent. Where mental incompetence was an eligibility restriction, several officials said they had not removed, or could not remember removing, anyone from their rolls for this reason. An official in one large jurisdiction said such a disqualification had not happened in 27 years. Local election officials from two jurisdictions said that, should they receive information from the courts on a state mental capacity restriction, they would send a confirmation letter to the registrant. Officials in other jurisdictions said they had no process for removing registrants under this criterion. The maintenance of registration lists depended not only on the actions of election officials, but also on the timely receipt of accurate information from numerous sources. Some local election officials expressed concern about the accuracy and currency of their voter registration rolls, while others felt that, as a result of NVRA, the voter registration lists were more accurate. Some local election officials were not able to access information on a timely basis. On the basis of our telephone survey, we estimate that 84 percent of jurisdictions nationwide checked death records and 76 percent checked for ineligibility due to a criminal conviction, initially and/or on a continual basis. However, we estimate that only 40 percent of jurisdictions nationwide had the ability to make death record checks on a "real-time" or immediate basis. Similarly, only 33 percent of jurisdictions nationwide had the ability to make criminal conviction checks on a real-time basis.
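The survey results above distinguish jurisdictions that could check eligibility sources immediately from the majority that applied them in periodic batches. The sketch below illustrates a simple batch-style update that flags, rather than silently deletes, registrants who appear in a death-record feed; the data layout and matching key are illustrative assumptions.

```python
# Illustrative batch update applying a periodic death-record feed to a
# registration list. Matching on name plus date of birth is an
# assumption for illustration; flagged records are routed for review
# rather than deleted, since feed data can be wrong or stale.

def apply_death_records(registrants, death_records):
    deceased_keys = {(d["name"], d["dob"]) for d in death_records}
    for voter in registrants:
        if (voter["name"], voter["dob"]) in deceased_keys:
            voter["status"] = "pending_removal"   # reviewed before removal
    return registrants

registrants = [
    {"name": "JANE Q PUBLIC", "dob": "1920-01-15", "status": "active"},
    {"name": "JOHN DOE", "dob": "1955-07-04", "status": "active"},
]
death_records = [{"name": "JANE Q PUBLIC", "dob": "1920-01-15"}]

for v in apply_death_records(registrants, death_records):
    print(v["name"], v["status"])
```

A batch job of this kind is only as current as its inputs, which is why officials who received death listings months late, or felony data not at all, could not keep their rolls current regardless of their own processing.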
One local election official described the challenge of carrying inactive registrants required under NVRA:

"Currently, we are required to keep voters who have moved and a third party, primarily the post office, has notified us that they do not live at that residence. We cannot cancel them off our voter rolls. We have to carry them on an inactive roll. In the jurisdiction, we have about 200,000 of those people on the inactive roll that we have to supply to those poll workers. Yet, in looking at our database, about 100 of those actually show up and vote."

Despite concerns, some election officials felt that NVRA had increased the accuracy of the voter rolls because registration lists were updated more frequently. They also noted that because NVRA increased the opportunities and locations at which to register, the registration workload had stabilized over the years. Officials in one small jurisdiction noted that NVRA had greatly helped them to purge inactive voters from registration lists following confirmation mailings. Officials said their list is now "more pure" in terms of having more "real" registered voters. Information about the accuracy and currency of voter registration lists nationwide was difficult to obtain, and information on the extent of the effect of errors on voter registration lists was even more difficult to find. Errors and inaccuracies, such as multiple registrations or ineligible voters appearing on the list, could occur for different reasons. However, when explicitly asked about problems with list maintenance in the November 2000 election, most local election officials did not indicate that they had any problems. Thirteen states and the District of Columbia operated statewide voter registration systems, which covered all local jurisdictions. Several other states were implementing such systems, while others operated systems with some local jurisdictions on-line. Local election officials we visited described benefits that statewide voter registration systems provided. However, the implementation and maintenance of such systems required significant resources and the coordination of many jurisdictions. Local election officials in jurisdictions we visited that had statewide registration systems described several benefits of their systems. These benefits included real-time access to information about registrants from other jurisdictions in the state, and potentially in other states; the reduction of duplicate registrations across the state; and the potential for instant transmittal of registration applications and information from state motor vehicle authorities and other intake offices to the appropriate election official. FEC described several list maintenance benefits of operating an automated statewide voter registration system. These benefits included capabilities to "handily" remove names of registrants by reason of death, felony conviction, and mental incompetence; run the statewide list against NCOA files to identify persons who have moved and left a forwarding address with the U.S. Postal Service; receive cancellation notices electronically from motor vehicle authorities or from other election jurisdictions throughout the nation; perform internal checks to guard against multiple or improper registrations; handle any or all of the mailings required under NVRA, such as acknowledgement notices, confirmation notices, and verification mailings; and generate much of the data that FEC required under provisions of NVRA. Statewide voter registration systems had the potential to assist election officials with establishing and maintaining registration lists.
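One of the benefits cited above, reducing duplicate registrations across a state, amounts to grouping records in a consolidated list by an identifying key. The sketch below shows the idea; grouping on name and date of birth is an illustrative assumption, and a real system would use additional identifiers and human review before acting on candidate duplicates.

```python
# Illustrative duplicate detection across a consolidated statewide list.
# Keying on (name, date of birth) is an assumption for illustration;
# candidate duplicates would be reviewed by officials, not auto-deleted.
from collections import defaultdict

def find_duplicates(statewide_list):
    groups = defaultdict(list)
    for record in statewide_list:
        groups[(record["name"], record["dob"])].append(record)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

statewide_list = [
    {"name": "JOHN DOE", "dob": "1955-07-04", "county": "Adams"},
    {"name": "JOHN DOE", "dob": "1955-07-04", "county": "Brown"},
    {"name": "JANE ROE", "dob": "1972-03-09", "county": "Adams"},
]

for key, records in find_duplicates(statewide_list).items():
    counties = [r["county"] for r in records]
    print(key, "registered in:", ", ".join(counties))
```

This kind of cross-jurisdiction check is precisely what independent county-level systems could not perform, which is why officials cited it as a principal advantage of statewide consolidation.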
However, implementing a statewide system required resources, time, and the coordination of multiple jurisdictions. Also, a statewide system could not ensure the accuracy of a state's voter registration lists, because data may not have been received or entered correctly, or inaccurate data may have been entered. The development and implementation of a statewide voter registration system would not necessarily be an inexpensive or short process. FEC estimated that the process could take 2 to 4 years or longer and that the costs to implement such systems over the past 2 decades have ranged from under $1 million to over $8 million for the first year. In Maryland, the State Board of Elections and its contractor have worked on the statewide voter registration system since 1998 and expect to finish by the end of 2001 at a cost of $3 to $4 million. In Michigan, the statewide voter registration system was developed within the $7.6 million that was appropriated for the program, with more than half of the funds going to local units of government. Most local election officials we visited whose jurisdictions were linked to statewide systems were very pleased with their systems. However, officials in one very large jurisdiction in a state without a statewide system indicated that they would prefer to maintain a county-based system because of funding concerns. The jurisdiction currently shares computer capacity with a countywide computer system, and the county pays the bill for processing requirements. With a statewide system, the official said, the jurisdiction "would have to foot the bill for operating and maintenance costs." Ultimately, some states have implemented statewide systems and found them beneficial, while others have felt the investment may not be worth the price. An integrated statewide system required the coordination of all jurisdictions within a state. Coordination could be affected by the size of the state, the number of local election jurisdictions within the state, the variations among the automated systems the jurisdictions operated independently, and the cooperation of local election officials within the state. For example, some large states, such as Pennsylvania, New York, Illinois, and New Jersey, did not have statewide systems. Fewer than half of the counties in Texas are linked to the statewide system operated by that state. States with numerous local election jurisdictions, such as townships and cities, also typically did not operate statewide systems. A local election official in a state with several jurisdictions said that when the state was implementing its integrated system, one official was so reluctant that she did not take the system hardware out of the box until the "state forced her to." Finally, a statewide voter registration system could not ensure the accuracy of a state's voter registration lists because data may not have been received or entered correctly, or inaccurate data may have been entered. For example, Alaska, despite having implemented a statewide voter registration system, reported at least 11 percent more active registered voters than voting-age population. Maintaining accurate voter registration lists depended on the timely receipt of accurate information from multiple sources. In none of the local election jurisdictions that we visited did officials say that they received comprehensive, timely information from all of the sources they used to update their registration lists.
Even with an integrated system, these jurisdictions would still require processes to obtain more timely and accurate data. For example, a statewide voter registration system would not be able to remove from the lists the names of registrants who had died if timely death records were not available. Further, adequate quality assurance processes for the data would also need to be developed, as data-entry errors can and do occur. One jurisdiction we visited addressed this issue by printing out all registration record changes in the voter registration system on a daily basis to be checked against the paper forms initiating the changes. Local election officials nationwide processed registration applications and, using various systems and sources of information, compiled and maintained lists of registered voters to be used throughout the election process. In summary, election officials identified the following challenges for voter registration:

Officials faced challenges in processing incomplete applications, identifying ineligible individuals and those who had applied to register more than once, and minimizing the number of individuals who showed up at polling places but had never been registered to vote. In particular, officials faced challenges coordinating the events necessary to process registration applications submitted at motor vehicle authorities. Increasing the use of technology options, such as electronically transmitting applications from motor vehicle authorities to election offices, expanding voter education, and improving the training of motor vehicle authority staff were identified as means of addressing these challenges.

Obtaining accurate and timely information from numerous sources to update voter registration lists was another challenge noted by election officials. These officials relied on local, state, and federal sources to provide accurate and current information about changes to registration lists. Information did not always match their records, was received late, or was never received at all. Jurisdictions varied in their capability and opportunity to share information with other jurisdictions and states. In none of the local election jurisdictions that we visited did officials say that they received comprehensive, timely information from all of the sources they used to update their registration lists.

Finally, officials identified as a challenge integrating technology, processes, and people to accept registration applications and compile registration lists so that all eligible citizens who intended to register were able to do so.

Election officials processed registration applications and, using various technologies and systems, compiled lists of registered voters to be used throughout the election process. They faced challenges with inaccuracies, such as multiple registrations, ineligible voters appearing on the list, or eligible voters who intended to register not being on the list. Local election officials expressed varying levels of confidence in the accuracy of their voter registration lists. The narrow margin of victory in the November 2000 general election raised concerns about absentee voting in the United States. Headlines and reports have questioned the fairness and effectiveness of the absentee voting process by featuring accounts of large numbers of mail-in absentee ballots being disqualified and by highlighting opportunities for mail-in absentee voting fraud. A growing number of citizens appear to be casting their ballots before election day.
However, the circumstances under which these voters vote and the manner in which they cast their ballots differ because there are 51 unique election codes. Due to the wide diversity in absentee and early voting requirements, administration, and procedures, citizens face different opportunities for obtaining and successfully casting ballots before election day. In particular, the likelihood that voters' errors in completing and returning mail-in absentee ballots will result in their ballots being disqualified varies, even, in some instances, among jurisdictions within the same state. However, states do not routinely collect and report absentee and early voting data. Thus, no national data currently are maintained regarding the extent of voting prior to election day in general. More specifically, no data are maintained regarding the number of mail-in absentee ballots that are disqualified and therefore not counted. In addition, election officials face a variety of challenges in administering absentee and early voting, including establishing procedures to address potential fraud; addressing voter error issues, such as incomplete applications and ballots; handling late applications and ballots; and managing general workload, resource, and other administrative constraints. In this chapter, we describe (1) the frequency and availability of voting before election day, (2) the mail-in absentee voting process and the challenges faced by election officials in conducting this type of voting, and (3) the types of in-person absentee and early voting programs available and the challenges encountered by election officials in administering these efforts. Although most voters cast their ballots at their precincts on election day, every state and the District of Columbia has procedures by which voters can cast their ballots prior to election day. Generally, any voting that occurs before election day has been called "absentee" voting because the voters are absent from their precinct on election day. Registered voters may obtain their ballots prior to election day in one of two ways: through the mail or in person. States do not routinely collect and report data on the prevalence of voting before election day. Using Census data, we estimate that, in the November 2000 general election, about 14 percent of voters nationwide cast their ballots before election day. Of these voters, about 73 percent used mail-in ballots and about 27 percent voted in person (as seen in figure 20). This represents an increase from the 1996 presidential election, in which we estimate a total of about 11 percent of voters cast ballots before election day. Many of the election officials in the jurisdictions we visited reported that voting before election day had been increasing in the past few years. For example, in one jurisdiction, voting before election day increased from 50 percent of total ballots cast in the November 1996 general election to a little over 60 percent in the November 2000 general election. In another jurisdiction, where the state had passed legislation making voting before election day easier and more convenient, this type of voting increased from about 26 percent of all ballots cast in the November 1996 general election to about 60 percent for the November 2000 general election. As shown in figure 21, the total percentage of individuals voting before election day in the November 2000 general election varied among the states from about 2 percent in West Virginia to about 52 percent in Washington.
In 31 states, less than 10 percent of voters cast their ballots before election day. However, in 6 states over 25 percent of the voters cast their ballots before election day, including 1 state in which more than half of the voters cast their ballots in this manner. Some states require voters to meet one of several criteria to be eligible to vote before election day, such as being disabled, elderly, or absent from the jurisdiction on election day. However, as seen in figure 22, as of July 2001, 18 states had initiated "no excuse" absentee voting, in which any voter who wishes to do so may vote absentee. These voters may vote a mail-in ballot or vote in person as established by state requirements, without first having to provide a reason or excuse. In addition, some states have initiated "early voting," in which local election jurisdictions may establish one or, particularly in larger jurisdictions, several locations at which any voter may cast his or her ballot in person a number of days before election day, based on state statutory requirements. One of the primary purposes of absentee and early voting is to increase voter participation. For example, being able to vote before election day provides greater accessibility to voting for certain voters, such as those who are disabled, living internationally, traveling extensively, or residing in distant rural communities with long commutes to work. In addition, allowing voters to vote before election day can make voting more convenient, particularly in states that allow no-excuse absentee or early voting. Election officials in some jurisdictions we visited stated that no-excuse absentee and/or early voting had increased overall voting before election day, particularly when these programs first became available. Election officials were less certain about any positive effects these efforts had on overall voter participation. For example, officials in several jurisdictions that offered no-excuse absentee and/or early voting stated that these programs had shifted voters from election day voting to absentee and early voting more than they had increased overall voter participation. However, election officials in Oregon reported that their efforts to conduct entire elections by mail resulted in some significant increases in voter participation. Election officials disagree regarding whether the additional accessibility and convenience gained from the increased availability and use of mail-in absentee voting and all vote-by-mail elections outweigh the increased opportunities for voter fraud. This disagreement represents a clear example of how election officials often must weigh opportunities to increase access to voting against elevated potential risks to the integrity of the voting process. Election officials generally did not have similar concerns regarding increases in early and no-excuse, in-person absentee voting, possibly because these processes and procedures resemble election day voting. However, regardless of the effects on overall voter participation and election officials' concerns regarding increased opportunities for fraud, many election officials agreed that voters liked the convenience of no-excuse and early voting.
The basic steps for mail-in absentee voting are similar across states. Registered voters apply for and receive their ballots; voters complete and return their ballots and related materials; and local election officials review ballot materials prior to counting them. However, the circumstances under which voters are allowed to vote by a mail-in absentee ballot, the manner and deadlines for applying for and casting these ballots, and the processes by which these ballots are reviewed differ widely across states and even, in certain instances, within the same state. In addition, local election officials face several challenges in administering this type of voting. While election officials have established procedures to address certain potentials for fraud, some officials expressed concerns regarding their ability to fully address this issue. In addition, election officials identified several other key challenges in the mail-in absentee voting process. These issues include responding to voter error issues, such as incomplete applications and ballots; handling late applications and ballots; and dealing with general workload issues related to processing large numbers of applications and ballots in a timely manner, including addressing postal concerns such as delivery, priority, and timeliness. All 50 states and the District of Columbia have some statutory provisions allowing registered voters to vote by mail, but not every registered voter is eligible to do so. Some states allow all registrants to vote with a mail-in absentee ballot, but other states require that registrants provide certain reasons or excuses. Examples include being absent from the state or county on election day; a member of the U.S. Armed Forces or a dependent; permanently or totally disabled; ill or temporarily disabled; over a certain age, such as 65; an observer of a religious holiday on election day; at a school, college, or university; employed on election day in a job for which the nature or hours prevent the individual from voting at their precinct, such as an election worker; and involved in emergency circumstances, such as the death of a family member. On the basis of Census data, we estimate that about 10 percent of voters nationwide cast their ballots by mail-in absentee ballot. Although the circumstances under which voters in different states were allowed to vote by a mail-in absentee ballot differed, the basic steps in the process were similar. As seen in figure 23, the basic process of mail-in absentee voting includes the following steps:

The registered voter applies for a mail-in absentee ballot.

Local election officials review the application and, if the voter meets the established requirements, send the voter a mail-in absentee ballot.

The voter votes and returns the ballot in accordance with any administrative requirements (such as providing a signature or other information on the ballot/return envelope, often referred to as the affidavit envelope).

Local election officials or poll workers review the information on the ballot/return (i.e., affidavit) envelope and subsequently "qualify" or "disqualify" the ballot for counting based on compliance with administrative requirements, such as signatures.
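The four steps above form a simple status pipeline for each ballot request. A minimal sketch of how such a pipeline could be tracked is shown below; the status names and transition rules are illustrative assumptions, not any particular jurisdiction's system.

```python
# Illustrative status pipeline for a single mail-in absentee ballot
# request, following the four basic steps described above. Status
# names and transitions are assumptions for illustration.

TRANSITIONS = {
    "applied": {"ballot_sent", "application_rejected"},
    "ballot_sent": {"ballot_returned"},
    "ballot_returned": {"qualified", "disqualified"},
}

class BallotRequest:
    def __init__(self, voter_id):
        self.voter_id = voter_id
        self.status = "applied"

    def advance(self, new_status):
        if new_status not in TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status

req = BallotRequest("V-001")
req.advance("ballot_sent")      # officials approved the application
req.advance("ballot_returned")  # voter mailed the ballot back
req.advance("qualified")        # affidavit envelope met requirements
print(req.voter_id, req.status)
```

The variation described in the rest of this chapter lies not in these steps themselves but in the rules applied at each transition, which differ by state and sometimes by jurisdiction.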
The manner in which registered voters were to apply, how frequently they were to apply, and when they were to apply to vote a mail-in absentee ballot varied based on state requirements. Depending on these requirements, registered voters could fax, call, write, or visit their local election official to obtain an application or learn what information was required to request a mail-in absentee ballot. All jurisdictions we visited had a standard state or jurisdiction application form available from local election officials for registered voters to obtain a mail-in absentee ballot. Figure 24 shows an example of the application forms used. In addition, several states we visited allowed voters to apply for an absentee ballot by a variety of other means, such as a letter or telegram sent to local election officials. Some jurisdictions also had a variety of application forms, which were used based on the circumstances under which voters qualified to vote by a mail-in absentee ballot. In addition to providing absentee ballot applications in response to voters' requests, some jurisdictions made absentee ballot applications available at voter registration locations, such as state motor vehicle licensing and public service agencies, and other public locations, such as libraries. Mail-in absentee ballot applications were also available on-line in many states. For example, Colorado, Georgia, Massachusetts, Oklahoma, and Texas all have state election Web sites that provide mail-in absentee ballot request forms, which can be downloaded, printed, and returned to the appropriate local election office by mail, fax, or in person. See figure 25 for an example of a mail-in application form available on a local jurisdiction's Web site. Some local election officials took an even more proactive approach to providing applications for mail-in absentee voting. For example, election officials in one large jurisdiction sent an absentee voting application and a letter explaining the procedures for absentee voting to all registered voters who were eligible to vote absentee, so that they did not need to request an application. These included registrants who were 60 or older, disabled, or poll workers who would not be working in their own precinct on election day. As another example, all California jurisdictions sent every registered voter an absentee ballot application as part of their sample ballot package. Since California does not require an excuse to vote absentee, registered voters who wished to vote in this manner simply needed to complete the application and return it to their local elections office. State requirements varied regarding how frequently a voter had to apply for a mail-in absentee ballot. Depending upon the state, voters may have been required to apply for each election in which they wished to vote by mail, apply once for all or certain elections held during a year, or apply for "permanent" absentee status, in which a mail-in ballot is automatically sent for at least 5 years or for all future elections until the voter requests to have his or her absentee status revoked. Appendix V provides a summary of the state statutory provisions permitting permanent mail-in absentee voting. As shown in appendix V, voters may have to meet certain state qualifications, such as permanent disability, to qualify for a permanent mail-in absentee ballot application.
For example, in New York and California, a person could apply for permanent absentee voter status due to a permanent illness or disability by checking a box on the absentee ballot application. In Washington, by contrast, no excuse was needed for permanent absentee status. In the jurisdiction we visited in that state, about 50 percent of the registered voters were permanent absentee voters, and absentee ballots represented about 62 percent of all ballots cast in the November 2000 general election. Differences also existed in state statutory requirements regarding the deadline for requesting a mail-in absentee ballot. In the states we visited, the deadline for returning completed mail-in absentee ballot applications ranged from 1 day to 7 days before the election. Some states, such as California and Colorado, had a procedure for registered voters to obtain an emergency ballot after the deadline to apply for a mail-in ballot had passed. To exercise this option, voters were required to have a circumstance that arose after the absentee application period had closed that prevented them from voting at their precincts on election day. For example, Illinois has a strict set of criteria for emergency voting. Under one circumstance, a voter admitted to the hospital not more than 5 days before the election is entitled to personal delivery of a ballot if a doctor signs an affidavit attesting that the voter will not be released on or before election day. Once local election officials receive mail-in absentee ballot applications or requests, they are to review them to determine whether they meet state requirements for mail-in absentee voting. These requirements may include whether the applicant is a registered voter, whether the application includes all the information required (e.g., applicant's signature, witness), and whether the applicant meets the state's approved eligibility requirements for absentee voting. If all the required information was not provided on the application (such as name, address, birth date, and/or voter signature), most jurisdictions we visited had standard letters that were to be sent to voters requesting them to provide the missing information. In one jurisdiction, election officials said that state law requires all jurisdictions in the state to notify applicants of the status of their requests, particularly if the jurisdictions are unable to process them. In contrast, election officials in a very large jurisdiction stated that they did not provide any feedback to applicants with problem applications unless the voters contacted them regarding their application's status. Officials from another very large jurisdiction stated that, when applications were missing information, they would send out the absentee ballot along with a request for the applicant to provide the missing information with the absentee ballot, rather than delay sending the voter's ballot. However, officials from most other jurisdictions we visited stated that they would not send voters their absentee ballots until the voters had provided all the required application information. In addition, officials from most jurisdictions stated that they provided feedback to applicants only if there was a problem with the application. Otherwise, the voters received their absentee ballots, once they were available, as confirmation that their requests had been received.
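Most jurisdictions' review of applications, as described above, reduces to checking a fixed set of required fields and generating a request for whatever is missing. The sketch below illustrates that check; the field list mirrors the examples in the text (name, address, birth date, signature) but is otherwise an illustrative assumption, since actual requirements vary by state.

```python
# Illustrative completeness check for a mail-in absentee ballot
# application. The required fields mirror the examples in the text;
# actual requirements vary by state.

REQUIRED_FIELDS = ["name", "address", "birth_date", "signature"]

def missing_items(application: dict) -> list:
    """Return the required fields absent or blank on an application."""
    return [f for f in REQUIRED_FIELDS if not application.get(f)]

application = {
    "name": "Jane Q. Public",
    "address": "12 Elm St",
    "birth_date": "1950-05-01",
    "signature": "",  # voter forgot to sign
}

gaps = missing_items(application)
if gaps:
    # In practice, a standard letter (or a phone call near the
    # deadline) would ask the voter to supply the missing items.
    print("Request missing information:", ", ".join(gaps))
else:
    print("Application complete; mail ballot when available.")
```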
Election officials in several jurisdictions stated that they attempted to make more direct contact with voters as the application deadline approached. For example, election officials in both small and very large jurisdictions said they attempted to contact voters by telephone regarding problems with their applications if there was insufficient time for a letter to be sent. However, election officials in one medium-sized jurisdiction said that they did not attempt to contact any voters by telephone because they would take only such actions, or provide only such assistance, as they could provide to all voters, not just some portion of them. In contrast, an election official in one large jurisdiction personally resolved problem applications. For example, this official drove to a nursing home before the November 2000 general election to obtain a signature on a mail-in absentee ballot application from a 99-year-old woman whose daughter had mistakenly signed the application. Officials in November 2000 faced a variety of challenges in successfully processing applications for mail-in absentee ballots, including voter errors, voters not understanding the process, late applications, and workload difficulties. Local jurisdiction officials described voters' failure to provide critical information, such as a signature and/or a valid residence or mailing address, as a principal challenge to successfully processing applications. On the basis of our telephone survey, we estimate that nationwide 47 percent of jurisdictions encountered problems with voters failing to properly complete their applications, such as providing a signature; 44 percent of jurisdictions encountered problems with voters failing to provide an adequate voting residence address; and 39 percent of jurisdictions encountered problems with voters failing to provide an adequate mailing address. In addition, jurisdictions faced challenges with voters who did not fully understand the mail-in absentee voting process. For example, on the basis of our telephone survey of jurisdictions, we estimate that 51 percent of jurisdictions nationwide encountered problems processing applications because citizens did not register to vote before applying for a mail-in absentee ballot. Also, local election officials said that political parties in one large jurisdiction sent all their members forms to request absentee ballot applications. After some voters filled out the forms and then received absentee ballot applications, they called the local elections office to say they did not want to vote absentee. In another jurisdiction, some voters sent in more than one ballot application for themselves. In addition, jurisdictions experienced problems with receiving applications after the deadline. We estimate that 54 percent of jurisdictions nationwide experienced problems with receiving applications late.
An official in a medium-sized jurisdiction stated that their "primary difficulty in absentee voting is getting voters to respond in a timely fashion to meet mailing deadlines." We estimate that local election officials nationwide received about 14.5 million applications for absentee mail-in ballots (plus or minus 3 million) for the November 2000 general election. As seen in figure 26, the number of absentee ballot applications can result in large volumes of absentee ballot packages being mailed to voters. Election officials in both small and large jurisdictions said they considered processing applications a workload challenge for their staff. For example, election officials in a very large jurisdiction stated that they received over 640,000 applications for absentee ballots. Officials in a large jurisdiction, as a result of applications received, sent out an average of about 2,000 absentee ballots each day for several weeks before the election. Officials from a small jurisdiction stated that processing absentee voting materials was time-consuming and expensive, and expressed concerns that they would face significant challenges if the number of absentee ballot applications increased. In addition, several local election officials specifically mentioned that the large number of absentee ballot applications received on the day of the application deadline, particularly the increased volume of faxed applications received that day, was an administrative challenge. Officials from two very large jurisdictions specifically mentioned that they hoped their recently instituted early voting programs would reduce the number of voters using mail-in absentee ballots and, thereby, reduce the workload burden and other challenges in processing mail-in absentee applications. In addition, some of the jurisdictions that we visited had deadlines for absentee ballot applications that were very close to election day, as little as 1 to 5 days before. Such jurisdictions faced challenges in ensuring that all ballot applications received by the deadline could be processed and the ballots mailed back to voters with sufficient time for the ballots to be voted and returned. Some officials from such jurisdictions expressed doubt that voters would be able to return their ballots by the election night deadline if they received the ballots 5 days or less before the deadline. For example, one jurisdiction had a mail-in absentee application deadline of the Saturday before election day, clearly a short amount of time to mail the voter the ballot and have it returned by election night. To address these deadline issues, some officials stated that they used overnight mail to speed up ballot distribution as the deadline approached. When allowed by state law, some jurisdictions also encouraged voters, at their own expense, to return voted ballots by overnight mail. In addition, several local election officials indicated that their states were considering legislative changes, such as allowing more time between primaries and general elections, to provide more time for the mail-in absentee process. Once local election officials obtained any additional needed information and approved the application, they mailed an absentee ballot to the registered voter. Once registered voters received their absentee ballots, it was their responsibility to vote and return their ballots.
As on election day, the type of voting method used for mail-in absentee voting varied from one jurisdiction to another, even within the same state. Nationwide, we estimate that over half of the local jurisdictions, about 61 percent, used the same method for mail-in absentee voting as they used on election day for the November 2000 general election. Moreover, we estimate that 89 percent of jurisdictions nationwide that used election day methods that lent themselves to mail-in voting (i.e., punch card, optical scan, and paper ballots) used the same voting equipment for both types of voting. Overall, most jurisdictions nationwide used either optical scan or paper ballots for mail-in absentee voting during the November 2000 general election. Specifically, as seen in figure 27, nationwide for mail-in absentee voting, we estimate the following: about 44 percent of election jurisdictions used optical scan ballots; about 45 percent of election jurisdictions used paper ballots; and about 13 percent of election jurisdictions used punch card ballots. Some jurisdictions using either punch card or paper ballots as of November 2000 indicated that they were considering or had already made plans to change to optical scan ballots for mail-in absentee voting. One jurisdiction indicated that it was keeping its punch card equipment for mail-in absentee ballots but was planning to change to a styrofoam-backed ballot to reduce the occurrence of pregnant or dimpled chads. For more information regarding characteristics of these voting methods, see chapter 1 of this report. In addition to voting the ballot, absentee voters must complete additional information on the ballot or return envelope, often referred to as the affidavit envelope, in accordance with their state's administrative requirements. Typically, the absentee voter's signature and, possibly, name and address were required on the absentee ballot or return envelope. In addition, as shown in appendix V, in an effort to ensure that the appropriate person completes the ballot, five states require that the voter's signature be witnessed; one state requires that the signature be notarized; and seven states require that the statement be witnessed or notarized. Frequently, the voted ballot was to be sealed within a series of envelopes. For example, as seen in figure 28, the ballot was to be sealed within a secrecy envelope. The secrecy envelope containing the ballot was to be subsequently sealed in the return envelope, on which the voter was to provide the required administrative identifying information (e.g., signature). In some jurisdictions, the entire package was then further sealed in an additional return envelope provided by the election office. Once the ballot and accompanying materials were completed, the voters were to return their voted ballots to their local election jurisdiction's office. State requirements varied regarding the manner in which absentee ballots could be returned. Some states, such as Oklahoma and Texas, required that these ballots be returned only by mail, and other states, such as New York and New Mexico, allowed the voter to return the voted ballot by personally delivering it.
In addition, some states we visited, such as Michigan, Illinois, and California, allowed the voted ballot and accompanying materials to be delivered in person by the voter or by a family member of the voter to the local elections office and/or the voter's precinct on election day. In an effort to ensure the integrity of the process, some states require the voter to provide written authorization in order for the family member to deliver the ballot. By contrast, California allows any authorized representative to return a voter's absentee ballot during the last 7 days of an election, up to and including election day. State deadlines for receiving absentee ballots from civilians living within the United States range from the Friday before election day to 10 days after election day. However, as seen in figure 29, most states require absentee ballots to be returned no later than election day, unless the voter meets certain special circumstances, such as being in the active military or residing overseas. In the nine states and the District of Columbia where a mail-in absentee ballot may be returned after election day, all but one required the envelopes to be postmarked on or before election day. See appendix V for each state's specific deadlines for receiving mail-in absentee ballots. Several local election officials recommended that a standard, nationwide deadline for receiving mail-in absentee ballots be set for federal elections. Election officials in some jurisdictions stated that they considered postal problems a significant challenge for mail-in absentee voting within the United States. Generally, these jurisdictions reported that they had experienced some problems with postal deliveries and/or the priority given to the delivery of election and balloting materials, such as applications. However, officials in the jurisdictions we visited expressed fewer concerns about postal delivery and timeliness for domestic mail than for overseas mail. In one jurisdiction, election officials said that election day was designated as a holiday and, as such, they had trouble receiving mail delivery of absentee ballots on election day, the last day the ballots could be received. Officials from a very large jurisdiction reported that, generally, postal delivery problems did not occur repeatedly in the same area of their jurisdiction. However, one jurisdiction reported consistent delivery delays after the U.S. Postal Service centralized its operations. Election officials worked with the Postal Service to mitigate this problem. Several other election officials provided additional examples of having worked closely with local Postal Service offices to develop workable solutions regarding delivery and timeliness issues. In many jurisdictions we visited, absentee voting materials were printed in colored or specially marked envelopes to assist Postal Service employees in identifying and facilitating their delivery. Rather than waiting for postal delivery, several other jurisdictions sent election employees to local post offices several times a day to pick up absentee ballots as the deadline approached and/or arrived. In addition, officials at some locations we contacted had suggestions for changes in their procedures to mitigate postal delivery challenges.
For example, one official suggested requiring additional information on the voter's absentee ballot application, such as an e-mail address and/or a telephone number, to facilitate processing applications with incomplete information, rather than having to rely solely on correspondence through the Postal Service. In addition, some jurisdictions allowed voters to use overnight mail, at their own expense, to return voted absentee ballots, which was particularly useful to voters as the deadline approached. Other jurisdictions stated that they were required by state law to accept ballots only through mail delivery by the U.S. Postal Service. Some of these officials agreed that a change in state laws allowing receipt of absentee ballots from overnight carriers, at the voter's expense, would help address the problem of absentee ballots from some voters arriving too late to be counted. Generally, jurisdictions pay for postage-related costs for mail-in absentee voting, such as the costs to mail ballot applications and ballots to voters. As deadlines approached, some jurisdictions even incurred overnight delivery costs in an attempt to provide absentee balloting materials to voters in a timely fashion. Voters often must pay for the postage to return applications and ballots to local election offices. Some local election officials expressed concerns regarding the growing postal costs of providing election-related materials, such as absentee applications and ballots, to voters. From our mail survey, we estimate that about half of the jurisdictions nationwide (54 percent) would like the federal government to assist them with postage for election-related materials. As another alternative, several election officials suggested having special postage rates for election-related materials, particularly absentee balloting materials. In some instances, states have begun to assume all or some of the postage costs for absentee voting materials for statewide elections. In addition, some jurisdiction officials said that they provided voters with postage-paid return envelopes for absentee ballots. In some instances, these envelopes were provided through fiscal support from the state. Other officials suggested that they would like to provide such services to voters but did not have the funds to do so. One jurisdiction official stated that the state or federal government should, at a minimum, assume the costs incurred by voters to return absentee ballots by mail, costs that, in his opinion, could be interpreted as a poll tax. Further, a few jurisdiction officials commented that U.S. Armed Forces personnel and overseas citizens do not have to pay postage to return their voted absentee ballots in some jurisdictions and questioned whether this service should be extended to all voters. Election officials in two jurisdictions said that, although the jurisdictions indicated the required postage in the corner of the return envelope, they would assume the costs if the voter did not pay. In addition to mail-in absentee voting, some jurisdictions have conducted entire elections by mail. The state of Oregon conducted its first general election using all voting by mail in November 2000. All registered voters in the state were mailed a ballot and allowed to return their ballots by election day through the mail or by personally delivering them to the elections office or to various manned drop-off sites located throughout the jurisdiction.
Oregon reported some increases in voter turnout for the November 2000 general election as well as for other statewide elections. For example, voter turnout in an all vote-by-mail primary in 1995 rose to 52 percent, up from 43 percent previously. In a vote-by-mail special election for U.S. Senator, voter turnout was 65 percent, a record for special elections. In addition, some jurisdictions have conducted all voting by mail for certain elections or in certain precincts in which the number of registered voters is very small. While jurisdictions have procedures to address certain potentials for fraud in mail-in absentee voting, some local election officials expressed concerns regarding their ability to fully address this issue, particularly regarding an absentee voter being unduly influenced or intimidated while voting. On the basis of our telephone survey of jurisdictions, we estimate that from less than 1 percent to 5 percent of jurisdictions nationwide experienced special problems with absentee voting fraud during recent elections. In general, absentee voting fraud concerns tend to fall into three categories: (1) someone other than the appropriate voter casting the mail-in absentee ballot, (2) absentee voters voting more than once, and (3) voters being intimidated or unduly influenced while voting the mail-in absentee ballot. Local election jurisdictions use a number of procedures to ensure that the appropriate voter completes a mail-in absentee ballot. For example, from GAO's telephone survey of jurisdictions, we estimate that nationwide 55 percent of the voting jurisdictions check a voter's signature on the absentee ballot materials against the signature originally provided on the voter's registration documents (as illustrated in figure 30); 55 percent of jurisdictions check a voter's signature on the absentee ballot materials against the signature originally provided on the application for a mail-in absentee ballot; and/or 36 percent of jurisdictions require a voter's signature on the absentee ballot materials to be witnessed or notarized. All of the jurisdictions we visited used one of these or other procedures, and most jurisdiction officials did not identify this type of fraud as a major concern. In particular, Oregon officials expressed confidence in their procedures designed to reduce the potential for someone other than the registered voter voting the mailed ballot. Oregon officials compared signatures on mailed ballot materials to voter registration materials. The officials said that this signature comparison provides even greater security against this type of fraud than many jurisdictions' election day procedures, in which voters may not have to show identification or have their signatures checked before casting a ballot. However, even with the described procedures in place, a few jurisdiction officials said that they ultimately have no way of knowing with absolute certainty that only the appropriate person requests and casts an absentee mail ballot. Likewise, local election jurisdictions in November 2000 employed several procedures to prevent voters from voting more than once. From GAO's telephone survey of jurisdictions, we estimate that, before election day, 64 percent of jurisdictions nationwide checked absentee ballot applications against their voter records to determine whether a voter had previously applied for a mail-in ballot for that election before providing the voter an absentee ballot.
On election day, we estimate that 78 percent of the jurisdictions nationwide checked election day poll books, lists, or logs to determine whether a voter had requested, been sent, or already voted an absentee ballot. For example, as seen in figure 31, one jurisdiction used bar coding on mail-in absentee applications to identify voters who had been sent absentee mail ballot packages. This information is scanned into the system used to generate election day poll books, so that voters who have been sent a mail-in absentee ballot can be identified if they attempt to vote on election day. We also estimate that 46 percent of jurisdictions nationwide checked absentee ballots received against election day poll books, lists, or logs to determine whether an absentee voter had voted on election day before counting the absentee ballot. In addition, we estimate that 10 percent of jurisdictions nationwide employed other methods to ensure that an absentee voter voted only once during an election. For example, poll workers on election day could check an on-line database containing absentee voting information to verify that voters had not voted before election day. All of the jurisdictions we visited used one of these or other procedures, and most jurisdiction officials did not identify this type of fraud as a major concern. Officials from some jurisdictions stated that a potential for abuse continues to exist with mail-in voting because voters may be intimidated or unduly influenced in their homes when casting their mail-in ballots. This more general fraud concern is, to some extent, inherent in the process; it is thus more difficult to address and causes more concern among some officials. For example, an election official from one very large jurisdiction stated that he experienced a situation involving absentee ballot fraud allegations during a recent local election. He was informed that people were going door-to-door in low-income neighborhoods to obtain and complete absentee ballot applications and ballots. Because of these types of allegations, he stated that absentee voting by mail is the area that concerns him the most about the elections process. Generally, he said, these problems are more likely to occur in smaller elections, such as primaries or local elections, where such efforts have the greatest potential to affect the actual outcome of the election. However, smaller elections, such as primaries, can still significantly affect the outcome of general elections in certain circumstances for certain races. This official stated that, at a minimum, he would like to see state law designate people's homes as polling places while they are completing their absentee ballots. Such a law would make electioneering illegal while a person is casting his or her mail absentee ballot. In addition, officials in one jurisdiction stated that political parties attempted to increase turnout for their party by sending ballot applications directly to voters. These efforts result in election officials not knowing for certain who filled out the application and, subsequently, the ballot, or whether it was completed per the voter's wishes.
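The cross-checks described earlier in this section, flagging absentee voters in election day poll books and checking returned absentee ballots against those books, are essentially set-membership tests on voter identifiers. The following sketch illustrates the idea; the bar-coded voter IDs and data structures are assumptions for illustration, not any jurisdiction's actual system.

```python
# Illustrative cross-check to keep an absentee voter from also voting
# at the polls. Voter IDs (e.g., scanned from bar-coded applications)
# and the data layout are assumptions for illustration.

absentee_ballots_sent = {"V-001", "V-002", "V-003"}
absentee_ballots_returned = {"V-001"}

def poll_book_flag(voter_id: str) -> str:
    """What a poll worker's book would show for this voter."""
    if voter_id in absentee_ballots_returned:
        return "absentee ballot already voted; do not issue ballot"
    if voter_id in absentee_ballots_sent:
        return "absentee ballot issued; follow challenge procedure"
    return "eligible for regular ballot"

for vid in ["V-001", "V-002", "V-999"]:
    print(vid, "->", poll_book_flag(vid))
```

The same lookup run in the opposite direction, checking returned absentee ballots against a record of who voted at the polls, corresponds to the check that an estimated 46 percent of jurisdictions performed before counting absentee ballots.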
Besides the general procedures for preventing mail-in absentee fraud, a number of jurisdictions have taken specific measures to prevent such abuses in high-risk places, like nursing homes. For example, several jurisdictions send a team of election workers, at times consisting of members from both major parties, to nursing homes to give out ballots, assist voters, and deliver the voted ballots back to the elections office. Another location placed restrictions on the number of absentee ballots that a single person could sign as a witness. One election official in a small jurisdiction stated that she personally knows and has provided specific training to the nursing home employees who witness and assist nursing home patients in voting. In addition, in almost all of the jurisdictions we visited, the mail-in absentee ballot package provided to voters included statements and/or reminders, such as within the oath or other materials, regarding the possible legal consequences of providing inaccurate or fraudulent information on the balloting materials. Several jurisdiction officials commented that, in the few instances in which they identified or suspected mail-in absentee voter fraud, they referred the case to the local district attorney's office for possible prosecution. Although states establish the requirements for qualifying mail-in absentee ballots to be counted, local election officials must implement and, at times, interpret these requirements. Most frequently, election officials disqualified mail-in absentee ballots because of voter error in completing the balloting materials or because the ballots arrived after the deadline. However, due to differences in procedures and requirements, the likelihood that voters' errors in completing and returning mail-in ballots will result in their ballots being disqualified varies, even, in some instances, among jurisdictions within the same state. In addition, this qualification process left local election officials facing workload challenges in processing mail-in absentee ballots similar to those they faced in reviewing applications. Generally, once the election officials received the absentee ballots, the ballots were to be secured until state requirements allowed the officials to review them. As with many other aspects of voting, the process for qualifying absentee ballots for counting varied across voting jurisdictions, even within the same state. In some jurisdictions, absentee ballots were reviewed centrally by election officials or special absentee voting boards. In other jurisdictions, absentee ballots were sent to the precincts in which the voters would have voted on election day and were reviewed by poll workers. Regardless of who conducted this effort, the accompanying documents (e.g., affidavit envelopes) were reviewed to determine whether all the required information was complete and state requirements were met. Absentee ballots could be disqualified from the count for a number of reasons. For example, as seen in figure 32, the voter may have failed to appropriately sign the affidavit or ballot envelope, or to provide other information as required by the jurisdiction. Absentee ballots could also be disqualified if the jurisdiction received them after the deadline. While the states establish the requirements for mail-in absentee voting, local jurisdictions' interpretation of the requirements and the resulting practices may vary within the same state, with some jurisdictions holding strictly to the letter of the law and others applying more flexibility in qualifying ballots.
The following examples demonstrate this variety: In one state, officials in three counties said that they accepted any ballot that showed a signature anywhere on the return envelope to compare with registration documentation, although officials in two other counties disqualified any ballot whose envelope did not strictly meet all the technical requirements. In another state, officials in two jurisdictions told us that there is no discretion in accepting ballots: either they meet the technical requirements completely or they do not meet them and are not accepted. On the other hand, officials in another jurisdiction told us that if a returned ballot envelope lacked some information, such as an address, that was available elsewhere on the envelope (e.g., in the return address), the ballot would be accepted. In another state, officials in one jurisdiction strictly followed the ballot receipt deadline and did not count any absentee mail ballots received after the Friday before election day. In contrast, officials in another jurisdiction told us that ballots received after that Friday but before 8:00 p.m. on election day were counted. We estimate that less than 2 percent of the total mail-in absentee ballots received for the November 2000 election were disqualified; about two-thirds were disqualified because the ballots arrived late or because the envelopes or forms accompanying the ballots were not properly completed, such as having missing or incorrect voters' signatures. As with processing absentee ballot applications, officials from several jurisdictions cited voter error in completing absentee balloting materials, such as envelopes, as a major problem. States do not routinely collect and report data on the number of mail-in absentee ballots that are disqualified. We estimate that 230,000 (plus or minus 50,000) absentee ballots were disqualified nationwide in the November 2000 election and that the national disqualification rate for absentee ballots was 1.7 percent. We estimate that 64 percent of all disqualified absentee ballots were rejected because the ballots arrived late or the envelopes or forms accompanying the ballots were not completed properly (e.g., missing the voter's signature or containing an incorrect voter's signature). Another 35 percent were rejected for one of the following reasons: no postmark or date; late postmark or date; voter not registered or not qualified; improper witness, attestation, or notarization; a previous vote in the election; and other. In general, and as with absentee ballot applications, the principal challenges to successfully processing absentee ballots, according to local officials, are caused by voters' failure to provide critical information. These errors include the ballot envelope lacking the voter's signature, a witness's signature, or notarization, or the voter not providing a valid address within the local jurisdiction. For example, in one very large jurisdiction, about one-third of the disqualified ballots were rejected because the voter's signature was missing or the envelope was improperly completed. The other major challenge officials mentioned was receiving ballots after the required deadline. For example, election officials in one jurisdiction estimated that about 80 percent of their disqualified ballots were returned after the deadline. Some jurisdictions have attempted to address problems with voters returning ballots unsigned or otherwise incomplete.
In California, a number of counties have begun to put brightly colored stickers with arrows pointing to the signature line, or fluorescent colored inserts, in absentee ballot packages to remind voters to sign the envelope. In addition, in several jurisdictions, election officials pre-print labels on the absentee ballot envelopes to minimize the amount of information the voter has to provide. Officials from the counties taking these steps reported a reduced number of voters submitting unsigned or incomplete absentee ballots. In a further effort to address these challenges, one large jurisdiction implemented a project for the November 2000 general election in which trained volunteers physically took unsigned absentee ballot envelopes, with the ballots still enclosed, to the voters to obtain their signatures. This reduced the number of unsigned ballots from 500 in previous general elections to 50 in November 2000. In addition, to obtain a necessary signature, one jurisdiction indicated that it returned unsigned mail-in absentee ballot envelopes, with the ballots still enclosed, to the voters through the mail when time allowed before the deadline. Other jurisdictions said that they are considering doing so as well. Furthermore, our telephone survey results indicated that notifying voters about whether their ballots were received and counted was not a standard practice. We estimate that 29 percent of jurisdictions nationwide notified absentee voters when their ballots were disqualified and, in so doing, provided the reason for the disqualification. Several of the jurisdictions we visited stated that they are required by state law to notify voters whose mail-in absentee ballots were disqualified. These jurisdictions often use a standard letter to do so, which details the reasons for the disqualification. This feedback represents one way in which election officials can educate voters regarding proper completion of the mail-in absentee balloting materials. In addition, some election officials said that they plan to begin maintaining data on the number of disqualified mail-in absentee ballots, the reason for each disqualification, and the type of absentee voter (e.g., military, overseas civilian, domestic civilian) whose ballot is disqualified. Election officials stated that they had not previously tracked these data because they were not required to report them to their state elections office. Each of the millions of mail-in absentee ballots received by local election officials had to be qualified before being counted. We estimate that nationwide local election officials received about 13 million mail-in absentee ballots (plus or minus 2.7 million) for the November 2000 general election. Officials from several local election jurisdictions considered the mail-in absentee voting process a challenge because of the workload involved in reviewing such a volume of ballots. For example, officials from one very large jurisdiction stated that the sheer volume of mail-in ballots received creates a greater potential for errors.
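Restated as a short calculation, these volume and disqualification estimates are internally consistent. The following sketch is a minimal illustration in Python that uses only the rounded point estimates reported above; the variable names are illustrative, and the survey margins of error are omitted.

    # Illustrative consistency check using the rounded point estimates above.
    ballots_received = 13_000_000     # mail-in absentee ballots received (plus or minus 2.7 million)
    ballots_disqualified = 230_000    # disqualified ballots (plus or minus 50,000)

    rate = ballots_disqualified / ballots_received
    print(f"Estimated disqualification rate: {rate:.2%}")   # prints 1.77%

    # Approximate breakdown of disqualified ballots by reason:
    late_or_incomplete = round(0.64 * ballots_disqualified)   # 147,200 ballots
    other_reasons = round(0.35 * ballots_disqualified)        # 80,500 ballots
    print(f"Late or improperly completed: about {late_or_incomplete:,}")
    print(f"Other documented reasons: about {other_reasons:,}")

With these rounded inputs, the computed rate is about 1.8 percent; the 1.7 percent figure reported above reflects the unrounded survey data.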
Once mail-in absentee ballots are qualified, the ballots are counted. After the November 2000 general election, some voters expressed doubt that local jurisdictions count absentee ballots at all when those ballots would not change the outcome of the election, especially ballots received during extended deadlines after election day. On the basis of our telephone survey, we estimate that between 98 and 100 percent of counties nationwide include absentee ballots in their certified vote totals. Officials in all of the counties we visited confirmed that all ballots are included in certified totals, although ballots arriving during extended deadlines may not be included in the totals announced on election night. The process for counting absentee ballots varies across voting jurisdictions. As with qualifying ballots, some jurisdictions had absentee ballots counted centrally by election officials or special absentee voting boards, while others had absentee ballots counted by poll workers at the voters' respective precincts. For more information on the counting of absentee ballots, see chapter 5 of this report. Crucial to the successful casting of mail-in absentee ballots is the voter's knowledge of the application and balloting requirements, such as necessary signatures and deadlines. Although voters have the ultimate responsibility for understanding and complying with state and local requirements for mail-in absentee voting, the process is complicated. If absentee voters did not fully understand and, subsequently, comply with the absentee voting requirements in their state, their votes may not have been counted. Thus, for each election, local election officials said they needed to educate voters regarding how and when to cast a valid mail-in absentee ballot. The information officials needed to provide to voters included deadlines for submitting applications and ballots, any requirements that registrants must meet to vote a mail-in absentee ballot, how often registrants must apply for an absentee ballot, and any administrative requirements, such as signatures and witnesses. Local election officials used a variety of means to provide this necessary information. Almost all local election offices we visited prepared press releases and/or asked the media to inform the public how and when to vote absentee by mail. Several locations we visited had informational fliers developed by the state or local jurisdictions, which were provided to voters on request or were available at local election offices, voter registration locations (e.g., motor licensing agencies), or public offices (e.g., libraries). Some jurisdictions relied on various organizations, such as political parties and other election watchdog organizations, to inform their respective constituents of the requirements for absentee voting. In addition, officials in one jurisdiction we visited appealed directly to their eligible absentee voters to encourage them to vote an absentee ballot in the November 2000 general election. These officials believed that the November 2000 ballot in their jurisdiction was particularly complex and decided it would be beneficial for their eligible absentee voters, particularly those over age 62, to vote an absentee ballot rather than trying to vote the ballot at their precincts. In addition, most states and many counties had Web sites that provided information on mail-in absentee voting.
Generally, these Web sites had very detailed information regarding mail-in absentee voting, including information on the requirements, how to apply, what information is required to complete the absentee voting application, the deadline for applying, and how often an application must be completed. Some Web sites even included an absentee ballot application that could be printed and mailed to the appropriate local election office. Voter educational materials provided on or with the mail-in absentee applications and/or ballots from the jurisdictions we visited contained the instructions and information necessary for voters to successfully obtain and cast an absentee ballot. Some jurisdictions also included a number of user-friendly reminders and notices to assist absentee voters in properly completing their absentee ballots and envelopes. For example, some jurisdictions, in addition to providing instructions on how to mark the ballot, provided absentee voters with reminders and additional notices highlighting information that was key to successfully completing and returning the absentee ballot. These notices included reminders to use a number two pencil on an optical scan ballot (some jurisdictions even provided the pencil), to seal ballots in the secrecy envelopes, and to sign the appropriate envelope. Several election officials made or planned changes to improve voter education on mail absentee voting, such as clarifying or simplifying voter instructions in absentee mail materials. Although a variety of methods was used to provide the information necessary to vote by mail-in absentee ballot, we estimate, based on our mail survey of jurisdictions, that only 15 percent of jurisdictions nationwide actively sought feedback from voters regarding the absentee process for the November 2000 general election.

Thirty-nine States and the District of Columbia Allow In-Person Absentee and Early Voting
Programs Differ, but Challenges Similar to Election Day
Voter Education Efforts Vary Between Jurisdictions

There is no clear distinction in state statute between in-person absentee and early voting. Basically, these programs offer voters the opportunity to obtain and cast a ballot in person during a certain period before election day. However, the length of the early or in-person voting period, the location(s) at which voters may vote, and the statutory requirements and paperwork required to vote in-person absentee or early differ among states. For example, in-person absentee voters generally must complete an application before voting, similar to voters who vote mail-in absentee ballots, while early voters are not always required to do so. Generally, local election officials were comfortable with their procedures to ensure that an early or in-person voter voted only once during an election. However, election officials still faced several challenges similar to those encountered on election day when conducting in-person absentee and early voting, such as having adequate staffing, supplies (including ballots), and locations for voting. For the November 2000 general election, in addition to mail-in absentee ballots, over three-quarters of the states and the District of Columbia allowed some or all registered voters to obtain and cast ballots in person before election day. We estimate that about 4 percent of voters cast their ballots this way for the November 2000 general election. It is difficult to differentiate between in-person absentee and early voting programs in state statutes.
As with mail-in absentee voting, states may or may not require voters to provide a reason or excuse for casting an absentee ballot in person. Most frequently, in-person absentee voting programs allow voters to obtain their ballot, complete any required paperwork, and vote their absentee ballot at their local election office. For example, in one jurisdiction in Virginia, in-person absentee voting is conducted at the local election jurisdiction's office during normal business hours during the 45 days before the election. To cast an in-person absentee ballot, registered voters go to the office and complete an in-person absentee application on which they provide one of several reasons or excuses defined in state statute. These reasons include being a student at an institution of higher learning, being absent for business or vacation, being unable to go to one's precinct because of illness, having a religious obligation, working 11 of the 13 hours the polling precincts are open, or being a caretaker of a confined family member. During the visit, election officials approve the application and give the applicant a ballot, which the voter casts before leaving the office. Thus, to vote in-person absentee in Virginia, registered voters must go to their local election office, complete an application, and meet certain requirements (i.e., provide an excuse). Some states also have initiated "early voting" as a distinct form of in-person voting in which local election jurisdictions may establish one or, possibly, several polling places a number of days before election day, where any voter may cast a ballot in person without having to provide an excuse. Voters are not required to cast their ballots at a particular polling place; rather, registered voters can vote at whatever location is most convenient for them. For example, in Texas, local jurisdictions are allowed to establish several "early voting" polling places at schools, libraries, shopping malls, or other locations that function in essentially the same manner as any election day polling place. Election workers staffed these early voting locations each day they were open and generally followed whatever voting procedures would be used on election day. Voters at these early voting locations simply appear and vote their ballots without filling out an application, providing a reason for voting early, or completing any paperwork or providing any information beyond what would normally be required on election day. Thus, to vote early in Texas, registered voters may vote at any of several early voting locations, do not have to complete an application, and do not have to meet any requirements (i.e., provide an excuse). In the November 2000 general election, in one jurisdiction in Texas, about 44 percent of the ballots were cast by voters at early voting locations, about a 10-percent increase from the previous presidential election in 1996. As seen in figure 33, 39 states and the District of Columbia have developed various types of early and in-person voting programs, some more similar to the Texas and Colorado programs and others closer to the Virginia program.
For example, California and Arkansas allow in-person early voting without a reason or excuse, which may be conducted at more than one location; however, both states require early voters to complete an application before voting, an additional step that is not required on election day or at early voting locations in Texas and Colorado. Other states, such as North Carolina and New Mexico, allow for no-excuse early voting in person, but only at the local election jurisdictions' offices; these states also require voters to apply to vote early. Although there is no clear distinction in state statute between in-person absentee voting and early voting, in effect, both types of programs stretch an election from a single day into an election period ranging from 1 day to over 40 days. In-person absentee and early voting programs vary considerably from one state to another. Variations include the number and type of locations at which this type of voting is conducted, the duration of the in-person or early voting period, and the voting methods used. However, local election officials faced many of the same challenges in administering their in-person and early voting programs. These challenges, such as obtaining sufficient poll workers, ballots and supplies, and locations, were similar to the challenges faced in administering election day voting. The location(s) and time periods in which voters may cast in-person absentee or early ballots differ based on the requirements established by each state. The number of locations varies from one to an unspecified number established at the discretion of local election officials. For example, in one very large jurisdiction in Texas, 25 early voting locations were established throughout the jurisdiction for the November 2000 general election. The in-person absentee and early voting period also varies, ranging from 1 day to 45 days before election day. Appendix V summarizes the various in-person absentee and early voting programs established in state statutes as of July 2001. In addition to differences among states, in-person absentee and early voting may even vary from one jurisdiction to another within the same state. For example, in Texas, larger jurisdictions may establish numerous early voting locations, such as at schools and libraries, which are open for extended hours, including some weekends. In contrast, smaller jurisdictions may hold early voting only at the local election official's office during regular business hours. As with the voting methods used for election day and mail-in absentee voting, the type of ballots used for in-person absentee or early voting also varies from one jurisdiction to another, even within the same state. Nationwide, we estimate that about two-thirds (67 percent) of local jurisdictions used the same voting method for in-person absentee and early voting as they used on election day for the November 2000 general election. We further estimate that most jurisdictions used either optical scan or paper ballots for in-person absentee or early voting during the November 2000 general election, as with mail-in absentee voting. Specifically, as seen in figure 34, we estimate that nationwide 42 percent of election jurisdictions used optical scan ballots; 35 percent used paper ballots; and 14 percent used punch card ballots.
Unlike mail-in absentee voting, in-person absentee and early voting can also use direct recording electronic (DRE) and lever equipment, which voters casting a mail-in ballot could not use for logistical reasons. As seen in figure 34, we estimate that 14 percent of election jurisdictions used DRE machines, and 1 percent used lever machines, for early or in-person absentee voting. Several election officials indicated that they are considering or planning a change to DRE equipment for early and/or in-person absentee voting. For more information regarding the characteristics of these voting methods, see chapter 1 of this report. Most jurisdictions we visited that allow early or in-person absentee voting at numerous voting locations used a direct, on-line electronic link to their registration records to ensure that an in-person absentee or early voter votes no more than once. Whether the early or in-person absentee voter is required to fill out an application and/or show a voter identification card is established by state law. In one jurisdiction, election officials or poll workers check the voter's signature in the poll book or on the application against the registration record to confirm the voter's identity. In some states, the voter's voting record is checked to determine whether he or she has voted previously in the election, even as recently as a few minutes earlier on the same day. For example, in the jurisdictions we visited that established more than one early voting location, once poll workers give a voter a ballot, the voter's voting record is typically updated automatically on the registration or election management system to which all early voting locations have direct, on-line access. In addition, as with mail-in absentee voting, the poll books used on election day note every voter who has voted early. However, one jurisdiction we visited held early voting that ended the day before election day. The election day poll books in this jurisdiction identified voters who had been sent a mail-in absentee ballot, but not early voters, because the jurisdiction needed to begin printing the books before the close of early voting. In this case, it is possible that an individual could have voted early and again on election day. However, these election officials said they track which registered voters have voted on their election management system by giving each voter credit for having voted during the election. According to election officials in this jurisdiction, after the election, when they attempted to give voters credit for voting on election day, their on-line election management system would alert them to anyone casting two ballots, because such voters had already been given credit for voting early; a simplified sketch of this type of credit check appears at the end of this discussion. According to these officials, any cases of duplicate voting would be provided to the district attorney's office for possible prosecution. The officials said that in the few instances in which this has occurred over the past 10 years, it was generally an older individual who was confused about the election process rather than someone intending to commit voter fraud. In our discussions with election officials about early and in-person absentee voting, the officials raised a number of challenges or concerns specific to this type of voting. The issues generally fell into three categories: obtaining poll workers, ballots and other supplies, and suitable early voting locations.
Officials from several jurisdictions cited difficulty obtaining and/or training the poll workers needed to work over the period required for early voting (in some cases more than 40 days). Officials in one jurisdiction said that they did not have enough staff to support early voting at the election office and conduct other election day preparations at the same time, especially in the days just before election day. In particular, election officials from one very large jurisdiction with numerous early voting locations stated that their biggest challenge for each election is obtaining sufficient staff to handle the number of voters who vote on the last day of the early voting period. In fact, for certain elections and locations, lines and waits have been longer on the last day of early voting than on election day. Officials from a number of jurisdictions cited ensuring that early voting locations had enough ballots and supplies as a challenge. For example, one medium-sized jurisdiction in Texas that used a punch card voting method needed enough copies of every ballot style voted in the jurisdiction, at every satellite location, to support all the voters who could come in to vote, because voters are not assigned to a particular location as they are on election day. For the November 2000 general election, this meant 26 different ballot styles. By contrast, two very large jurisdictions that use a DRE touch screen voting method had all the ballot types electronically stored within each unit but still needed enough other election-related supplies to support their operations through the entire early voting period. Officials from a few jurisdictions had difficulty finding enough adequate polling locations, such as locations that were sufficiently large, had digital lines for electronically connecting to the registration system, and were conveniently located. For example, officials in one large jurisdiction stated that they had problems establishing early voting locations that were convenient to all voters and that some early voting locations were too small for the crowds that came at peak times. Another challenge faced by jurisdictions that conduct early voting is the limited amount of time between finalizing and printing the ballots and accompanying materials and the start of early voting. For example, in one jurisdiction, early voting begins 17 days before election day; election officials thus have essentially 17 fewer days to prepare for the election. For each election, state and local election officials are to provide information to voters about when and where to vote early or absentee in person, including the voting times, dates, and locations, among other information. As with mail-in absentee voting, most jurisdictions we visited that offered in-person absentee or early voting prepared press releases and/or asked the media to inform the public when and where to vote early or absentee in person. In addition, most states and/or counties had Web sites that provided information on such voting. In some jurisdictions, political parties and other election organizations provided information to voters on in-person absentee and early voting. In one very large jurisdiction, election officials, in conjunction with the vendor of the jurisdiction's voting equipment, advertised their early voting program on a billboard at the junction of the county's two major freeways.
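As described above, one safeguard against duplicate voting is to credit each registered voter once per election and to flag any second ballot when votes are later credited in the election management system. The following minimal Python sketch illustrates the idea; it is not a depiction of any jurisdiction's actual system, and the voter identifiers and function names are assumptions made for the example.

    # Minimal sketch of a voter-credit check for detecting duplicate ballots.
    # Illustrative only; actual election management systems are far more elaborate.

    credited = set()   # identifiers of voters already credited in this election

    def credit_voter(voter_id, method):
        """Credit a voter once; flag any later attempt for review."""
        if voter_id in credited:
            return f"FLAG: {voter_id} already credited; {method} ballot needs review"
        credited.add(voter_id)
        return f"OK: {voter_id} credited for voting ({method})"

    print(credit_voter("V-1001", "early voting"))   # credited during early voting
    print(credit_voter("V-1001", "election day"))   # flagged as a possible duplicate

In the jurisdiction described above, such flags surfaced only after the election, when election day voters were credited in the system; any duplicates were then referred for possible prosecution.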
In summary, election officials identified the following challenges in the absentee and early voting process:

Preventing mail-in absentee voting fraud. Our telephone survey of jurisdictions and discussions with local election officials revealed that officials had established procedures to address certain potentials for fraud, such as someone other than the registered voter completing the ballot or voters casting more than one ballot in the same election. However, some mail-in absentee voting fraud concerns remained, particularly regarding absentee voters being unduly influenced or intimidated while voting.

Addressing voter errors, such as unsigned or otherwise incomplete application and ballot materials, and receiving late applications and ballots. Our telephone survey of jurisdictions and discussions with local election officials showed that voters' failure to provide critical information, such as signatures and addresses, and jurisdictions' receipt of applications and ballots after state statutory deadlines represent the principal challenges to successfully processing mail-in absentee applications and qualifying ballots for counting.

Processing large numbers of mail-in absentee applications and ballots in a timely manner. Local election officials indicated that large volumes of mail-in absentee applications and ballots present workload and administrative challenges. In particular, officials expressed concerns regarding the timely processing of applications received close to the deadlines and the increased potential for errors in processing large volumes of applications and ballots. In addition, officials identified some concerns with postal costs, delivery, and/or timeliness, although they expressed fewer concerns about postal delivery and timeliness for domestic delivery than for overseas delivery.

Obtaining adequate staffing, supplies (including ballots), and locations for conducting early voting. As on election day, local election officials indicated that the principal challenges in conducting in-person absentee and early voting were having enough workers and locations for the entire early voting period, as well as having all ballot styles available at a single location.

Despite the numerous responsibilities that involve coordinating people, preparing and using voting technologies, and following election rules and processes, the behind-the-scenes efforts of election officials generally attract little public notice. Election officials ordinarily find themselves in the spotlight only when citizens experience difficulties on election day. Long lines at the polls, voters' names missing from the registration lists, a complicated ballot, voting machine malfunctions that prevent vote casting, or, as in the 2000 presidential election in Florida, hotly contested election results may focus public attention on the otherwise unnoticed details of election administration. This chapter describes the activities that election administration officials identified to us as important to planning and conducting an election. It also outlines the challenges those officials encountered in the November 2000 election. Conducting an election involves activities that must be completed before the election as well as on election day itself. As illustrated in figure 35, election officials are responsible for a wide range of activities, all necessary to ensure that all eligible citizens may freely cast their votes in private and have them counted in federal, state, and local elections.
The ways that local jurisdictions perform what can be an enormously complicated civic duty vary widely across the country for several reasons. First, states have different laws and regulations that govern elections; some states exercise a relatively high degree of control over local elections, while others allow local jurisdictions to operate with more autonomy. For example, some states have statewide election systems, so that every voting jurisdiction uses the same procedures for administering elections, including registering voters, processing absentee ballots, using common voting equipment, and tallying votes. Oklahoma, for example, standardizes most aspects of local and statewide elections. In other states, local jurisdictions run elections with less direction from the state, which means local officials may exercise a larger degree of autonomy in conducting elections. For instance, in Pennsylvania, local election officials told us there are 67 counties and consequently 67 different ways of handling elections. Figure 36 illustrates these differences. Other states fall somewhere between Oklahoma and Pennsylvania on the continuum of greater to lesser state direction of local elections. Virginia, for example, requires local jurisdictions to follow many standardized election procedures but leaves their implementation largely to local jurisdictions. Second, the type of voting technology used by a jurisdiction influences how election officials plan and conduct an election. Usually it is local election officials who choose the voting technology to be used in their precincts, often from a list of state-certified options, but in some states, state law prescribes the use of common voting technology throughout the state. The types and uses of voting technology are described extensively in chapter 1. Depending on their jurisdiction's type of voting equipment, election officials face different challenges in ballot preparation, voter education, poll worker training, and setting up the polls. Third, the scale of election administration varies enormously with the size of the jurisdiction. As an official from one very large jurisdiction described it, "the logistics of preparing and delivering voting supplies and equipment to the county's 4,963 voting precincts, recruiting and training 25,000 election day poll workers, preparing and mailing tens of thousands of absentee ballot packets daily and later signature verifying, opening and sorting 521,180 absentee ballots, and finally, counting 2.7 million ballots is extremely challenging." In contrast, one small jurisdiction we visited had only 2,843 registered voters, 5 voting precincts, and 28 poll workers. As illustrated in figure 37, the magnitude of key tasks for election officials in the large jurisdiction is a thousand times greater than for the small jurisdiction. Fourth, jurisdictions face different burdens in preparing for election day because, while some have relatively homogeneous populations, others serve highly heterogeneous publics with diverse histories, cultures, and languages. In some jurisdictions, large segments of the population speak languages other than English, and ballots must be prepared in those languages. In November 2000, Los Angeles County, for instance, provided ballots in Spanish, Chinese, Korean, Vietnamese, Japanese, and Tagalog, as well as English. On the basis of a consent decree with the Justice Department, Bernalillo County, New Mexico, will provide certain types of voting assistance in the Navajo language, including translation of the ballot.
Election officials said that, in the future, they anticipate having to provide ballots in other Native American languages, some of which have no written form. And finally, the voting jurisdictions themselves may develop their own election day traditions and cultures. For example, jurisdictions generally seek to ensure that only eligible voters can cast ballots on election day, but the procedures adopted to determine whether a citizen who appears at the polls is eligible to vote differ. Jurisdictions may place different emphasis on preventing ineligible people from voting than on facilitating voting for eligible voters. States have different legal requirements for verifying voters' identities, and localities develop different procedures for handling questions about eligibility that arise on election day. In some jurisdictions, voters identified themselves by stating their names and addresses to the poll workers, who also matched the signature on the voter application with the voter registration records. Other jurisdictions require voters to present a valid photo identification card and require the signature on their application to vote to match the signature on their voter registration card. In other jurisdictions, presenting some form of identification, such as a hunting or fishing license, is sufficient to verify one's identity. Still other jurisdictions require no identification other than the voter stating his or her name.

Recruiting and Training Poll Workers Was Major Problem for Many
Selecting Polling Places That Met Standards Was Not Always Possible
Designing Ballots That Were Clear to Voters Was More Challenging for Long, Complex Ballots
Educating Voters Can Help Reduce Election Problems
Preparing and Delivering Equipment and Supplies Was a Logistical Challenge

In some jurisdictions, preparing for the presidential election began as early as 10 months before the November 2000 general election. Despite differences among local voting jurisdictions, five key tasks emerged from our interviews with election officials as integral to preparing for elections. Prior to election day, officials must recruit and train a sufficient number of poll workers with appropriate skills to open, operate, and close polling places. Suitable polling places located in the voting precincts must be reserved. Election officials are responsible for designing and producing multiple versions of ballots, which may vary not only by voting precinct but by address within a voting precinct. Many jurisdictions educate voters about the ballot, the voting technology they will use, and where to vote. In the days leading up to election day, voting equipment and supplies, prepared weeks in advance, must be delivered to thousands of polling places. According to the results of our mail survey of local election officials, 57 percent (plus or minus 4 percent) of voting jurisdictions nationwide said they encountered major problems in conducting the November 2000 election. During our on-site visits, election officials described in greater detail the problems and challenges they faced and the ways they addressed them. These challenges included labor shortages among the ranks of qualified poll workers; limited access to a shrinking number of appropriate polling places; complicated ballots or new voting technology unfamiliar to voters; and limited resources for voter education.
Elections in all states could not take place without an army of poll workers who run the polls on election day. Poll workers are the front line of democracy. They are the public face of elections for most citizens, whose voting experience is largely shaped by their interaction at the polls with poll workers. Although these workers are usually employed for only one day, the success of election administration partly hinges on their ability to perform their jobs well. Therefore, recruiting and training qualified poll workers is one of the most crucial tasks that election officials face in most locations. On the basis of our mail survey, we estimate that 51 percent of jurisdictions nationwide had a somewhat or very difficult time getting enough poll workers. Of these jurisdictions, 27 percent had difficulty obtaining enough poll workers generally, and 23 percent had difficulty obtaining enough of the required Democratic or Republican poll workers. These were the problems jurisdictions most frequently identified in preparing for elections. Factors that can work in concert to complicate an already difficult task for election officials include an aging workforce, low pay, little or no training, and limited authority to hold poll workers accountable for their job performance. To meet these challenges, some election officials said that they have developed specific recruiting and training strategies. Some poll workers are elected, some are appointed, and some are volunteers. For example, Pennsylvania law specifies that poll workers be elected to the position. One official in a small jurisdiction told us, "We beg people to do it." Political parties often play a key role in identifying poll workers. For example, Illinois statutes require the leading political parties to nominate all the election judges needed at the polls on election day. Many jurisdictions require that poll workers from each of the two major parties staff each precinct. For example, New York law requires that each polling place be staffed with four election inspectors equally divided between the major political parties. Poll workers have different titles, levels of pay, training requirements, and responsibilities, depending on state law and the organization and traditions of the local jurisdiction. Jurisdictions assign their poll workers different responsibilities in the polling place and call them by different titles, including clerks, wardens, election judges, inspectors, captains, and precinct officers. Often jurisdictions have a chief poll worker. Virtually all the jurisdictions we visited provide some compensation to poll workers for their service on election day, ranging from $55 a day for clerks to $150 a day for a coordinator. These amounts differ by jurisdiction and by level of responsibility within the polling place. Jurisdictions also differ in the training that they provide and require for poll workers before the election. Most of the election officials we talked to said that they offer some training for poll workers, and some said that the training is mandatory. One jurisdiction requires that each poll worker be certified as an inspector by the county board after attending an official training class and passing a written test. Some jurisdictions require training only for individuals who have not previously served as poll workers.
Other jurisdictions require only that the lead poll workers be trained before each election. In addition to the number, pay, and training of poll workers, jurisdictions differ in the levels of authority and responsibility they grant to poll workers. In some jurisdictions, poll workers have significant autonomy over the operation of the polling place and its decisions, serving as the final authority on interpreting guidance in areas such as deciding who can vote and determining voter intent. In other jurisdictions, poll workers have limited discretion and function primarily as clerks and facilitators, referring decisions back to elections headquarters. As an official in one jurisdiction described the demands placed on poll workers: "Our inspectors serve 17 or 18 hours, a very long day. Because many of our inspectors are senior citizens, between the age of 70 and 80-plus years, such conditions are difficult on them physically, as well as creating the potential for errors at the end of election day. Since compensation for this job is only $80 to $135 per day, depending upon the election district, it is not sufficient to attract a younger workforce." Election officials often face a plethora of problems recruiting and training their poll workers. Some election officials simply cannot recruit enough poll workers; others have a stable but aging workforce; and still others cannot recruit reliable workers with the requisite skills. Particular recruitment problems vary. Election officials from several jurisdictions mentioned that they have problems getting enough poll workers in the manner specified by law. For example, in a jurisdiction that requires election of poll workers, election officials told us that they rarely have enough poll workers running for the positions. Several election officials noted that the political parties often do not provide enough poll worker nominations to cover the needs of the jurisdiction, despite a legal requirement that they provide all the poll workers. One official in a small jurisdiction that typically votes for candidates of one party said that they often could not find enough poll workers from the other party. Several officials said that their election workforce was aging and that they were having difficulty recruiting younger workers. The pool of potential poll workers may be shrinking because a greater proportion of the population has full-time employment and poll worker pay is inadequate to attract employed or more skilled workers. One official remarked that volunteering is characteristic of an older generation. Another official said that "What they used to consider a fun and interesting day and an American duty has become 'heavy duty.'" The length of the day is a complaint of many poll workers. In one large jurisdiction, election officials asked poll workers to provide feedback on their experience in the November 2000 election. One poll worker responded that it was "absolutely, positively too long a day. I am 26 years old and very athletic and still went home at night and fell asleep with my clothes on. With the majority of helpers either older or disabled, I have no idea how they survived the day." Another problem, according to several local election officials, is addressing the specialized labor needs unique to particular polling sites. Some polling places required poll workers with specific language skills; other locations needed poll workers who were able to learn the technical skills necessary to operate voting equipment. Finding qualified bilingual workers, specifically workers fluent in Asian languages, is one very large jurisdiction's biggest recruiting problem.
Some places had trouble finding poll workers with the skills to use computers and newer technologies. One election official wrote that "it is increasingly difficult to find folks to work for $6 an hour. We are relying on older retired persons – many who can't/won't keep up with changes in the technology or laws. Many of our workers are 70+." Officials in one very large jurisdiction said they have no scarcity of people willing to serve; finding people to meet specialized needs is the issue. Because election officials have little ability to hold poll workers accountable for how well they do their jobs on election day, they try to find reliable workers but must sometimes take whomever they can find. Officials we talked to cited a number of examples from the November 2000 election. An election official in a medium-sized jurisdiction said that not only did she have difficulty finding a sufficient number of poll workers, but she was also not satisfied with the performance of some of the workers she did recruit. Some officials said that problems with performance and an aging poll worker labor pool can overlap. As an example, one official said she had to let an elderly worker go because the person could no longer reconcile the ballot roster at the end of the day. An election official in a large jurisdiction said that the worst part of his job was signing letters to older poll workers thanking them for their years of service and telling them that their services would no longer be needed. Because workers are in short supply, some election officials stated that they found themselves facing a dilemma: choosing between finding enough workers and hiring skilled, reliable workers. One major problem for election officials is absenteeism on election day. As one official from a very large county told us, "our biggest fear concerning election workers is whether they will show up on election day." In the November 2000 election, one very large jurisdiction had 20 percent of its poll workers cancel or fail to show up on election day. Some jurisdictions tried to plan around poll worker absenteeism by recruiting and training more workers than they needed but still had too few poll workers on election day. As one official from a medium-sized jurisdiction said, "We are usually able to recruit more poll workers than needed. However, because of no-shows, we came up short on election day. No one has an abundance of good poll workers." We estimate that 87 percent of jurisdictions nationwide provided some training for poll workers. Poll worker training courses generally span a few hours and focus on the key processes that poll workers should follow, including how to operate voting equipment. Although most of the jurisdictions we visited required some poll worker training, election officials cited instances in which poll workers who had attended training either still did not understand what they were to do or chose not to follow specific instructions on how to run the polls. For example, to handle unregistered voters in one very large jurisdiction, poll workers were instructed to provide a provisional ballot to voters with questionable credentials. However, some poll workers failed to follow these rules and turned away some voters from the polling place. Poll worker training in the sites we visited rarely included discussion of the interpersonal skills that poll workers should employ when dealing with frustrated citizens or with each other.
Some jurisdictions have developed strategies for addressing the particular challenges associated with poll worker recruitment and training. Officials in the jurisdictions we visited described both measures that their jurisdictions have adopted and measures that they would like to institute if they had the funding and legal authority to do so. Many election officials told us that increasing poll worker pay would be an important step in efforts to solve poll worker recruitment problems.

Recruiting Strategies Targeted Youth, Civil Servants, Businesses, and Civic Groups

To recruit more poll workers, jurisdictions have put special recruitment programs in place.

Student Poll Worker Programs: Some jurisdictions have been participating in student poll worker programs. For example, in its 1999-2000 legislative session, Colorado passed legislation that allowed junior and senior high school students, ages 16 and older, to serve as election judges as long as they also met other criteria, such as being recommended by a school official and having a parent's or guardian's permission. Students must pass the same training courses as nonstudent election judges. Other states also allow the use of student judges. In the 2000 general election, one very large jurisdiction used 969 students from 91 schools as election judges, including 453 bilingual students.

State and County Employees as Poll Workers: Civil servants were recruited to serve as poll workers in a number of jurisdictions. One very large jurisdiction had a County Poll Worker Program that permitted county employees to volunteer as poll workers. Participating employees received their county pay for election day, plus either a $55 or $75 stipend, and $25 for attending the training. For the November election, 1,400 county employees worked as poll workers. Our mail survey results showed that 21 percent of jurisdictions nationwide used workers from local governments or schools to help staff the polls in the November 2000 general election. Election officials in one medium-sized jurisdiction we visited said they used 25 to 30 state employees as election judges in November 2000. These state employees received their regular pay in addition to the poll worker compensation.

Adopt-a-Poll Programs: Some jurisdictions have developed programs that let businesses or community groups adopt a poll and use their employees or volunteers to staff that polling place. Election officials in a very large jurisdiction encouraged companies and service organizations to adopt a poll. Participating organizations provided the poll workers, who were allowed to wear shirts with the logo of the company or organization. In another large jurisdiction, volunteers from a charity organization adopted a poll and donated their poll worker pay to the charity; in this case, staffing a poll was both an exercise of civic duty and a fundraising event.

Split Shifts for Poll Workers: To make the poll worker's day more manageable, some jurisdictions are allowing poll workers to serve only half of election day, rather than asking them to commit to a 12- to 18-hour day. Election officials from one jurisdiction that uses split shifts said that poll workers are very pleased with the option of working only part of a day. Additionally, these officials said that they have had less trouble recruiting poll workers because workers do not have to commit to an entire election day.

In addition to these recruiting strategies, jurisdictions have proposed measures that are pending necessary legislative changes and funding.
Several jurisdictions told us that their state has legislation pending that would allow serving as a poll worker to satisfy jury duty requirements. Officials in several jurisdictions expressed the view that an election holiday at the state or national level would, among other things, free more full-time employees to serve at the polls. Our mail survey results indicate that 29 percent of the jurisdictions nationwide favor establishing election day as a national holiday and 19 percent support providing federal employees time off to assist at the polls, but only 5 percent favor extending voting hours or holding Saturday voting.

Officials Turned to Training Efforts to Improve Poll Worker Performance

To prepare poll workers for election day, many jurisdictions have focused on improving poll worker training. Although training may be required, some poll workers do not attend and are still allowed to work. To encourage attendance at training sessions, some jurisdictions offer attendees a stipend in addition to their nominal poll worker pay. Localities have pursued a variety of approaches to improving training classes. For example, one very large jurisdiction hired experts in adult education to improve the quality of its training courses. Some states provide localities with training resources. For example, Washington and West Virginia produce standard training materials, relieving local voting jurisdictions of the cost of producing such materials and offering a consistent curriculum for poll workers. Some jurisdictions tailored the content of the training sessions to focus on changes that had occurred in the election system or on problematic tasks that poll workers were likely to encounter on election day. For example, when introducing a new voting technology, one very large jurisdiction produced a video to train poll workers in the use of their new optical scan counters. When introducing its touchscreen DRE voting equipment, another very large jurisdiction had the equipment vendor provide the training video and materials. To prepare poll workers for situations they may encounter on election day, several jurisdictions had poll workers participate in simulated precinct operations in their training classes. Recruiting and training poll workers are major concerns for election officials. When asked what their three top priorities would be if federal funds were available for election administration, over half of the election officials from the jurisdictions that we visited told us that they would use the money to increase poll worker pay and/or to improve poll worker training. Election officials are also responsible for obtaining a sufficient number of polling places that meet basic standards. To meet the needs of the voting population, polling places should be available on election day and easily accessible to all voters, including voters with disabilities. They should also have sufficient infrastructure to support voting machines and provide basic comforts for voters and poll workers alike, including electricity, communication lines, and heating and cooling units. Many public and private facilities are used as polling places, including schools, churches, community buildings, malls, and garages.
Specific legal requirements relating to the number, location, and characteristics of polling places can vary from state to state. For nearly two-thirds of the jurisdictions nationwide, we estimate that obtaining polling places did not pose a major problem. Our mail survey results also indicate that only 5 percent of the jurisdictions nationwide said they had a major problem obtaining enough polling places, and 9 percent said that they had a major problem obtaining enough polling places accessible to voters with disabilities. However, in our site visits, many election officials did identify difficulties in securing polling places. According to election officials, low rental fees, the disruption of business that ordinarily takes place at a facility, and the possibility of damage to facilities are the primary reasons that fewer and fewer locations are willing to serve as polling places. Officials in many jurisdictions said that they still had polling places that were not fully accessible to voters with disabilities. To address this challenge, some officials have consolidated precincts or created a "super precinct," a single, centralized location where all voters cast their ballots no matter what the geographic boundaries of their assigned precinct. Some jurisdictions have adopted election day holidays, which help resolve some of the problems of using schools as polling places while students are present. Additionally, officials said they have taken steps to provide alternatives to voters with disabilities when polling places are not fully accessible. In jurisdictions where reserving polling places is an ongoing problem, officials may have to accept polling places that do not meet all of the basic standards in order to have enough places to conduct the election. For example, election officials in different jurisdictions said that, in the November 2000 election, they used polling places that did not fully meet requirements that polling places limit the number of voters who may vote in one location; be located within, or centrally to, the precincts they serve; be accessible to voters with disabilities; or provide the infrastructure necessary to support election activities. Finding locations that are handicapped-accessible is a particular concern for local election officials; in many places, officials have not located enough polling places that meet the needs of voters with disabilities and the elderly. Our onsite work on the November 2000 election found that polling places are generally located in schools, libraries, churches, and town halls, as well as other facilities. Although the extent to which any given feature may prevent or facilitate access is unknown, we estimate that, from the parking area to the voting room, 16 percent of all polling places have no potential impediments; 56 percent have one or more potential impediments but offer curbside voting; and 28 percent have one or more potential impediments and do not offer curbside voting. Although efforts have been made to improve voting accessibility for people with disabilities, state and local election officials we surveyed cited a variety of challenges to improving access. Facilities used as polling places are generally owned or controlled by public or private entities not responsible for running elections, complicating attempts to make them more accessible.
Jurisdictions in older, denser cities have particular difficulty locating buildings that not only are accessible but also have accessible parking facilities. For example, in one very large jurisdiction we found that of the 1,681 polling places used in the November 2000 election, only 440 were handicapped accessible. Even fewer, 46, had handicapped parking. A scarcity of available polling places also led some officials to accept facilities that did not meet other specifications. Officials in a large jurisdiction told us they had to settle for substandard buildings, some of which were being renovated, that did not have electricity or heating. Additionally, the officials told us that every year the department of elections buys heaters for some buildings that serve as polling locations. A small jurisdiction faced a temporary problem with the school gymnasium that the town uses as its super precinct, a single polling location for all precincts. During the 2000 primary election, the gym was undergoing significant renovation, and half of the space usually available for elections was closed off. Additionally, temporary electricity, communication lines, and toilet facilities had to be added for the election. Because the construction was completed before the general election, the jurisdiction did not have these problems in November 2000. Election officials expressed concern that it is not only difficult to retain current polling places but also challenging to find replacements. Some jurisdictions lack funds to pay a stipend large enough to give facility owners an incentive to offer their buildings for use as polling places. In one case, according to the election official, the stipend was so small that it may not have even covered the owner's electricity costs. Election officials may also be hampered by laws that restrict them from spending public funds to modify private facilities to make the spaces ready for elections or to repair damage that results from a facility's use as a polling place. Schools are often used as polling places, but space constraints and the security considerations raised by having nonstudents enter school grounds during school hours have led some schools to withdraw their facilities as polling places. Election officials do not generally have control over polling places. Some must rely on building managers or custodians to unlock the buildings and ready the space for election day. Because the polls typically open so early in the morning, custodians sometimes had not opened the space in time for poll workers to enter. For example, officials in both a large and a medium-sized jurisdiction reported that poll workers were delayed because buildings were not unlocked and accessible at the appointed time on election day. Before every election, some jurisdictions provide information to voters about their polling place location. For example, one medium-sized jurisdiction mailed polling place location information to every household. Many jurisdictions may also describe the location of the voter's polling place in print, radio, and television announcements. Canceling locations after they have been publicized presents difficulties for election officials, who must find substitute locations and then try to notify the voters of the last-minute change. For example, in one very large jurisdiction, five locations canceled after the sample ballot, which lists the precinct the voter is assigned to, was mailed.
The jurisdiction had to mail 110,500 postcards to the affected voters notifying them of their new polling place. To compensate for the lack of an adequate number of facilities, election administration officials have pursued or proposed the following actions:

Consolidated Precincts: To ease the difficulty of finding polling places for each voting precinct, some jurisdictions are consolidating several precincts into a single location. One small jurisdiction crafted a super precinct with all six precincts in one polling place. This solution offers the advantages of providing a known, central location easy for voters to find and alleviating the pressure to provide poll workers for each polling place. By using this super precinct, the jurisdiction is able to provide handicapped access and parking to all its voters. Additionally, the county clerk, who is the chief election official, is on site to resolve any issues over voters' eligibility to vote. Rather than creating a super precinct, some jurisdictions are consolidating voting precincts. One large and one medium-sized jurisdiction consolidated several precincts, resulting in fewer polling places. One of these jurisdictions has 45 polling places with as many as 4 precincts per polling place; the other has 270 polling locations for 576 precincts.

Revised State Limits on Number of Voters Per Precinct: In some cases the election officials' proposed strategies for dealing with these problems involve changing state laws that prescribe the number of registered voters per precinct. By increasing the number of registered voters per precinct, officials hope to decrease the number of required polling locations. California introduced legislation to increase the number of voters in each precinct from 1,000 to 1,250, which would reduce the number of polling places needed (a simple sketch of this arithmetic appears after this list). This solution would also reduce the number of poll workers needed on election day. However, as one election official observed, an unintended consequence of condensing precincts may be longer lines at polling places, which makes voting a more time-consuming and difficult activity.

School Holidays on Election Day: Traditionally, schools have served as polling places. However, several election officials mentioned that schools are increasingly difficult to obtain because of security concerns and competition for space when students are present. In one large jurisdiction, election officials, in cooperation with school boards, have made election day a student holiday. The schools, which account for two-thirds of the polling places, are then available as polling locations with teachers present, alleviating some of the security concerns. Similarly, a medium-sized jurisdiction persuaded three of its four school districts to schedule a student holiday on election day.

All-Mail Voting: Oregon is the only state that has adopted mail voting for all its elections statewide. Election officials told us that one of the positive effects of the move to all-mail voting is that election jurisdictions no longer have to contend with the logistical problems of securing polling places or hiring poll workers. Other jurisdictions use all-mail voting on a more limited scale. For example, one medium-sized jurisdiction has mail-only precincts for sparsely populated areas. In another medium-sized jurisdiction, officials said they also permit smaller election jurisdictions, such as a water district, to opt to hold a special election entirely by mail.
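The precinct-size arithmetic behind the California proposal can be illustrated with a minimal sketch; the registered-voter count below is hypothetical, and real precinct counts also depend on geography and district boundaries.

```python
import math

def precincts_needed(registered_voters: int, max_voters_per_precinct: int) -> int:
    """Smallest number of precincts that keeps every precinct at or
    under the statutory cap on registered voters."""
    return math.ceil(registered_voters / max_voters_per_precinct)

# Hypothetical county with 500,000 registered voters.
voters = 500_000
before = precincts_needed(voters, 1_000)   # current cap
after = precincts_needed(voters, 1_250)    # proposed cap

print(f"Precincts at a 1,000-voter cap: {before}")   # 500
print(f"Precincts at a 1,250-voter cap: {after}")    # 400
print(f"Polling places (and poll worker teams) saved: {before - after}")
```

As the official quoted above cautioned, the locations saved come at the cost of more voters per precinct, which can translate into longer lines.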
We estimate that 42 percent of the jurisdictions nationwide indicated that the federal government should subsidize the operational costs of elections (e.g., printing ballots or paying poll workers).

Despite the controversy over the “butterfly ballot” and other ballot problems in the aftermath of Florida's 2000 election, few election officials we spoke with reported experiencing major difficulty with ballot design for the November 2000 general election. We estimate that only 2 percent of jurisdictions nationwide thought that confusing ballot design was a major problem. However, we emphasize that this is the view of election officials and not voters. Election officials are responsible for designing ballots that meet both statutory requirements and the requirements of the particular voting equipment and that are easy for voters to understand. Officials we met with did identify a number of challenges they faced in ballot design. They noted that designing usable, easily understood ballots that meet the constraints of particular voting equipment can become much more difficult in jurisdictions where the ballot is printed in multiple languages or a large number of offices or initiatives are on the ballot. Many states have statutory requirements that affect the design and layout of ballots. The specific statutory requirements and the level of detail specified differ by state. Many states prescribe specific features of ballot design. For example, some states require that ballots provide for rotation of candidates so that no candidate of a particular party consistently has the advantage of appearing first on the ballot. State law in other states dictates that voters be offered a ballot that allows them to vote a straight-party ticket. Some states identify the order of races and ballot issues. For example, Washington law specifies that state ballot issues appear before all offices on the ballot. In New York, state law even includes specifications relating to the size of the print and the size of the checkboxes on the ballot. States also differ in the degree of state oversight of ballot design. In some statewide systems, such as those in Oklahoma, ballot design is done primarily at the state level for state and federal offices. In Massachusetts, the state designs and prints all ballots for state elections. In other states, such as Virginia, local officials develop ballots, but the State Board of Elections must approve them. Other states have no statutes that provide instruction on ballot design, leaving ballot design in the hands of local officials without state oversight. The voting technology that a jurisdiction uses is the major factor that influences ballot design and defines the tasks that election officials face as they prepare the ballot. As we discussed in chapter 1, different voting machines require different types of ballots, and each type has its own constraints. For example, the size of the ballot, the type of paper, and other features must conform to the physical characteristics of the voting machine. Figure 38 illustrates two punch card ballots and identifies some of the characteristics that caused problems with the ballots for the November 2000 election. Figure 39 shows an optical scan ballot and a ballot for a pushbutton DRE voting machine. Election officials must determine all the ballot styles needed for every precinct in the jurisdiction.
They must “define the election,” which entails identifying all races, candidates, and issues, such as statewide referenda or local tax levies, in a particular election. Additionally, officials must determine how many variations of the ballot they need to produce. A voting jurisdiction, which is generally a county, is composed of precincts. Because the boundaries of certain election districts, such as congressional districts and special districts, may vary within a precinct, voters in the same precinct may vote different ballot styles, depending on where they live. Jurisdictions design their ballots to meet the special needs of their constituents in various ways. Certain jurisdictions may require that ballots be prepared in multiple languages. Others prepare audio versions of their ballot for sight-impaired voters. For example, one very large jurisdiction, which uses touchscreen DRE machines, provides an audio option to allow blind voters to cast their ballots in privacy without outside assistance. No matter the ballot style or unique aspects of ballot design, all ballots must include instructions to voters on how to complete their ballots. Once election officials determine everything that must appear on the ballot, they must construct detailed layouts for the particular type of ballot used with their election equipment. In many jurisdictions, the ballot layout is completed in-house. Some jurisdictions have computer programs that they use for ballot layout. In other places, election officials rely on voting equipment vendors, printers, or other outside contractors to fit the candidates and issues onto the ballot. Although most officials did not identify ballot design as a major problem area, some officials reported that the design of the ballot created problems and confusion for some voters in the November 2000 election. These problems generally varied by the type of voting equipment used by the jurisdiction. On the ballot for a medium-sized jurisdiction that used lever machines, the list of names for president was so long that it extended into a second row. Election officials said that listing candidates in a second row confused some voters. In a small optical scan jurisdiction, officials said that their voters seemed to have problems with the write-in section of their ballot. Voters selected a choice from the candidates listed on their ballots and then also wrote in the candidate's name in the write-in section. The officials believe that this confusion on the part of the voters accounted for much of their county's 5 percent overvote for president. In one small jurisdiction, officials said that they had to use both sides of their optical scan ballot because of the number of issues on the ballot. They said that two-sided ballots generally created some voter confusion. Some voters did not flip their two-sided ballot over and voted on only one side. In one very large punch card jurisdiction, election officials said that after the difficulties with the butterfly ballot in Florida were publicized, they also received complaints that the butterfly ballot for their punch card machines was confusing. Additionally, they said that approximately 1,500 voters put their punch cards into the machine upside down, thereby negating their votes.
In a jurisdiction that uses a full-face electronic DRE machine, officials had to use a small print size, difficult for some voters to read, to ensure that their ballot could (1) include all of the races and candidates, (2) meet the legal requirement that the full text of all ballot issues appear, and (3) have all text in English and Spanish. Additionally, because many voters had not received advance information on the issues on the ballot, they took more time in the voting booth; thus, waiting times at the polls became lengthy. The preparation of paper and punch card ballots requires an extra step in the production process. These types of ballots must be printed or produced separately from the voting machine, which introduces the potential for other problems. In a medium-sized jurisdiction that uses punch card ballots, officials said the printer trimmed ballots too closely, and the ballots had to be redone. Locations that use punch card machines provide a ballot book that fits onto the machine and identifies for the voter the correct location to punch. The paper ballot book and the punch card must be correctly aligned in the machine; small deviations can result in erroneous punches. Officials in optical scan jurisdictions also reported ballot production problems. For example, officials in one medium-sized jurisdiction said that a printing error on the ballots caused the counting machines to reject them: a small ink dot in the ballot coding section made the ballots unreadable by the machines. Election officials told us that they anticipated that long lists of candidates or changes in their traditional ballot format would lead to ballots that would confuse some voters. However, they often had limited alternatives, given everything they had to fit on the ballot for the November 2000 election. Some officials attempted to mitigate the impact of confusing ballot features by focusing voter education on those features. For example, officials in a large jurisdiction anticipated that they would have a problem with their three-column ballot design and the straight-party ballot option. If voters wanted to vote a straight-party ticket in the November 2000 election, they had to mark the ballot in four different places, which was a departure from the usual way ballots were voted. These officials said that they tried to avert a problem for the voters by emphasizing this change in voter education efforts before the election. Some other jurisdictions have adopted longer-range efforts to limit the length and complexity of ballots. To minimize the length of the ballot, officials in South Carolina recommended the creation of two different ballots, one for candidates and one for ballot issues. Washington pursued a similar course of action, scheduling state elections in the off-years of the presidential election cycle. Jurisdictions identified other ideas to improve ballot design that are still in the proposal stage. Officials in one jurisdiction said they would like to use professional design consultants to create ballots that are easy to use and understand. Another jurisdiction is proposing to pretest ballots with selected groups of voters to identify and resolve design flaws before the election. Given the many problems of voter confusion with ballot design identified in the detailed reviews of ballots cast in Florida, many are interested in applying the principles of the field of information design to developing usability standards for ballot design.
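The “define the election” bookkeeping described earlier, in which overlapping district boundaries within a precinct produce multiple ballot styles, can be pictured with a minimal sketch; the precincts, addresses, and districts below are hypothetical.

```python
from collections import defaultdict

# Each voter's address maps to the set of districts whose contests appear
# on that voter's ballot; a ballot style is one distinct combination of
# districts within a precinct.
voters = [
    ("Precinct 12", "101 Elm St",  ("US House 3", "State Senate 8", "Fire District A")),
    ("Precinct 12", "405 Oak St",  ("US House 3", "State Senate 8", "Fire District B")),
    ("Precinct 12", "912 Pine St", ("US House 3", "State Senate 8", "Fire District A")),
    ("Precinct 7",  "22 Main St",  ("US House 4", "State Senate 8", "Fire District B")),
]

styles_by_precinct = defaultdict(set)
for precinct, _address, districts in voters:
    styles_by_precinct[precinct].add(districts)

for precinct, styles in styles_by_precinct.items():
    print(f"{precinct}: {len(styles)} ballot style(s)")
    for style in sorted(styles):
        print("   ", ", ".join(style))
```

Even this toy example yields two ballot styles in one precinct, which is why two neighbors checking in at the same polling place may need different ballots.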
Some jurisdictions are planning to acquire new voting equipment, and the characteristics of the ballots associated with different equipment will play a big role in their decisions. One official in a very large jurisdiction told us that they would not even consider optical scan equipment because the amount of paper that would be required for their complex ballots would be prohibitive.

To educate voters on how to translate their choices of candidates and issues into votes on election day, jurisdictions employ a range of activities. Jurisdictions place varying degrees of emphasis on educating voters on election processes and procedures. Some officials publish a sample ballot in local newspapers; others publish voter guides, mail sample ballots and election information to every registered voter, and fund public service announcements. Officials told us that the introduction of new voting technologies or other significant changes in the way elections are conducted increases the need for educating voters on how the changes will affect the way they vote. A lack of funds is the primary challenge that election officials said they face in expanding their efforts to educate voters about elections. On the basis of our mail survey, we estimate that over a third of the jurisdictions nationwide believed that the federal government should provide monetary assistance for voter education programs. Virtually all jurisdictions we visited provide some information to assist voters in knowing how, when, and where to vote. However, there is wide variation in the amount and type of information provided and in the importance election officials attach to voter education. In one small jurisdiction, for example, an election official told us, “People have been voting here the same way all their lives. They don't need voter education.” However, in many jurisdictions, election officials consider more extensive voter education campaigns to be an important way to minimize voter errors on election day. Some jurisdictions use multiple media for providing information to the public before election day, and other jurisdictions would like to provide more extensive voter education but lack the resources to do so. Jurisdictions provide voter education through print and electronic media, public demonstrations of the voting process, and public forums. In our mail survey of jurisdictions, we asked local election officials to identify ways they provided information to voters for the November 2000 election. Making information available at the election office and printing election information in the local newspaper were by far the most common ways of providing information to voters. Our mail survey results indicate that about 91 percent of the jurisdictions nationwide made sample ballots available at the election office; 74 percent printed sample ballots in the local newspaper; and 82 percent printed a list of polling places in the local paper. In contrast, between 18 and 20 percent of jurisdictions nationwide indicated they placed public service ads on local media, performed community outreach programs, and/or put some voter information on the Internet. Mailing voter information to all registered voters was the least used approach.
Thirteen percent of the jurisdictions mailed voting instructions; 7 percent mailed sample ballots; and only 6 percent mailed voters information on polling locations. All election officials we visited provide information to the public at the elections office and answer inquiries from citizens. Most jurisdictions also provide information on elections to the public by publishing sample ballots, candidate lists and positions, registration deadlines, polling place locations, and the times the polls open and close. Fewer jurisdictions mail information on the election directly to voters. Some states mail registered voters voter guides, which provide detailed explanations of ballot issues and describe all the candidates for state and federal office. Some local jurisdictions have developed voter guides and other information on the election to help educate voters. Jurisdictions we visited provided an array of different types of voter information and aids. In one large jurisdiction, election officials distributed business cards with instructions on how to complete optical scan ballots on one side and dates of elections on the other. A very large jurisdiction provided voters a demonstration that included instructions on punch card voting and sample ballots. Some of the materials alert voters to common mistakes that they should avoid. Voter education materials are often both distributed before the election and available at the polls on election day. Figure 40 provides examples of materials jurisdictions used to inform voters in the November 2000 election. Other forums for educating voters include discussions sponsored by organizations such as churches and civic and advocacy groups. Election officials in several jurisdictions said they frequently spoke to civic and educational organizations about the voting system. One large jurisdiction has an NVRA coordinator with responsibility for outreach to community groups, and another jurisdiction has an Election Ambassador Program aimed at citizens 18 to 35 years old. The Internet provides another medium for communicating voting process information to voters. All but three of the jurisdictions we visited have established a Web site as an additional means of educating voters. Many of the Web sites simply provide general information about elections and the requirements for participation. Others permit the voter to search a database to find information, such as the location of the voter's polling place. A number of sites have forms the voter can obtain and print, but none permits the voter to actually submit the form electronically. Some jurisdictions also operate telephone information hotlines so that voters may call in to obtain information about their polling place location. For example, Delaware has a computerized telephone system answering calls at election headquarters. The system handled over 11,000 calls on election day in November 2000. Many of the calls were from voters using the polling place locator feature. Use of such a system frees election officials' time to field questions from poll workers. Some jurisdictions rely on civic organizations, such as the League of Women Voters, to supplement their voter education efforts. In some locations, such groups provide almost all voter education. In one very large jurisdiction, a nonprofit, nonpartisan watchdog organization provides voter education before election day.
On election day, the group operates a voting control center from its offices to respond to questions and field complaints from citizens, election board officials, and party representatives. In another large jurisdiction, officials said that they relied on the League of Women Voters and the media to provide the community with voter education information. To familiarize citizens with the mechanics of voting, some jurisdictions conduct nongovernmental elections for groups such as unions and schools. For example, local election officials in one large jurisdiction will, on request, run local high school elections such as those for student council officers. The officials follow the same procedures as they would in a general election—developing the ballots and using the same voting machines used in the general election. Officials in other jurisdictions also conduct nongovernmental elections at the request of community groups as an educational tool. When election jurisdictions change the equipment they use for voting, there is a particular need for voter education to help citizens understand how the new equipment will change the way they cast their ballots. Two of the jurisdictions we visited had developed extensive voter education programs in connection with introducing new voting technology. One large jurisdiction introduced new optical scan voting equipment that was used in November 2000. As part of planning for the new equipment, election officials significantly increased voter education to ease the transition. Consequently, voting errors decreased in this jurisdiction in the November 2000 election. A very large jurisdiction was the first in the country to move completely to touchscreen DRE machines. The vendor supplying the new voting technology also provided $80,000 for voter education. Among other things, the education program included the development of videotapes and billboards. The vendor also published a voter guide with the county. Many jurisdictions would like to provide more extensive voter education tailored to the needs of particular elections. However, voter education programs compete with other needs for scarce local resources in conducting an election. Officials in two large jurisdictions said that they could not mail sample ballots to registered voters because of the postal costs they would incur. Spending for voter education is considered discretionary; local officials must first take care of mandatory items such as equipment, supplies, poll workers, and polling places. Many officials said that they see voter education as an area where federal funds could be particularly helpful. When asked what their priorities would be were federal funds to become available for election administration, two-thirds of these election officials identified increasing voter education among their top three spending priorities. Supplies and equipment are generally prepared before the election and either delivered to each polling location or picked up by poll workers. Although no election official mentioned this task as a major problem, it is crucial to administering a successful election. The logistics of preparing supplies and machines for election day can be daunting, particularly for larger jurisdictions. As discussed in chapter 1, the type of voting equipment a jurisdiction uses influences the equipment testing routines required before election day as well as the kind of ballots and supplies that are needed.
Officials typically put all supplies needed by voters and poll workers in a supply box, which, in many jurisdictions, doubles as a ballot box. Generally, officials assemble a supply box for every precinct, which typically includes (1) voter registration books or lists; (2) signs to identify the polling places; (3) voter education materials; and (4) instructions for poll workers that explain how to open, operate, and close the polls. The supply boxes may also contain incidentals such as Bibles, American flags, and other items; for example, one jurisdiction's box included a 50-foot length of string to mark an electioneering-free zone around the polls. Additionally, supply boxes can have forms, such as voter challenge forms and voter assistance requests; tally sheets to count blank, spoiled, absentee, and properly voted ballots; and a ballot box. The boxes may include color-coded envelopes or other dividers to separate different kinds of ballots. All boxes are checked by an election official to ensure that they contain the correct supplies, and a lock or security tab must secure each box. In addition to preparing the supply boxes, election officials must prepare and deliver the voting equipment, except in jurisdictions that use paper ballots. Depending on the size of the jurisdiction and the types of equipment, the logistics of delivering the voting machines vary. For example, in one very large jurisdiction, the election board hires a fleet of trucks to distribute the supplies and equipment to nearly 5,000 precincts for election day. The election board in a medium-sized jurisdiction hires a contractor who stores and delivers the equipment. The machines are prepared and tested while they are still in the warehouse, and then the contractor delivers them to the appropriate polling places. Jurisdictions using lever machines have different logistical problems. Lever machines weigh 700 to 900 pounds apiece, depending on the construction material. Prior to election day, election officials in one jurisdiction delivered 464 of these lever machines to 327 election districts. A small jurisdiction that uses lever machines avoids moving them by storing the machines at the polls.

Setting Up the Polling Place Required Different Steps

Our site visits with election officials indicated that these officials were generally satisfied with the way the November 2000 general election was conducted within their jurisdictions. However, few of them reported keeping data or evaluating the way in which the election was conducted. Therefore, it is likely that the election officials' views about how well the election was run at the polling place level were shaped by anecdotal information that was voluntarily supplied or by public complaints. In our mail survey, jurisdictions nationwide identified determining voter eligibility at the polls and communication inadequacies as the key problems they faced on election day. Election officials we visited noted that the problems they face with registration, absentee voting, and other preparations for election day often manifest themselves on election day. Election day marks the point at which election officials delegate much of the actual operation of the election to poll workers, who become the public face of the election to most citizens. Entrusting an election to temporary workers requires a leap of faith for some election officials.
One election official told us that he could spend a year planning for an election, preparing for every possible contingency, meeting all required deadlines, and ensuring all materials were in their proper places. However, on the day of the election, the fate of his professional reputation rested in the hands of strangers, and at the end of the day he would learn how well he had done his job during the preceding year. Poll workers carry out many important tasks on election day. In a number of jurisdictions, election administrators have developed detailed checklists that direct poll workers in opening, running, and closing the polls. From our mail survey, we estimate that 74 percent of the jurisdictions nationwide provided poll workers with checklists of procedures to follow on election day. The checklists we saw in different jurisdictions varied significantly in detail. Before the polls open on election day, election officials must ensure that the people, processes, and technology to conduct the election are in place. Election officials did not identify setting up the polling place as a major problem, although they did encounter routine glitches on election day in November 2000. To set up the polling place and begin preparing the site for the voters, poll workers in some jurisdictions arrive at the polling place as early as 5:45 a.m. In other places the polls are set up the night before election day. Opening the polls entails swearing in the officials, setting up the machines, unpacking the supply box, setting up voting booths, testing equipment, completing paperwork (such as confirming that the correct ballot styles and the right number of blank and demonstrator ballots have been delivered), and posting signs. There are many different ways polls are set up. The type of voting technology influences the types and sequence of tasks poll workers perform. For example, in a small jurisdiction that uses paper ballots, the lead poll worker is responsible for picking up the supply box the day before the election. He or she must be the first person to enter the polling place the next day, and the supply box must be opened in the presence of the other poll workers in the morning before the polls open on election day. In contrast, in a very large jurisdiction that uses precinct-count optical scan machines (in which the ballots are counted at the polls), the supply box contains the ballots and is locked inside the machine. Election warehouse employees deliver the machines to the polling places the night before election day. The election judge and at least one other poll worker go to the polling place to unpack supplies and prepare and test the optical scan vote-counting machine. When they complete these tasks, they secure the polling place until the next morning. One very large jurisdiction uses touchscreen DRE machines that are portable voting devices. On election eve, the poll workers set up the machines in each polling place. The lead poll worker must test, at home, the separate devices that will be used to activate the DREs. On election morning, the lead poll worker powers up the machines and runs the self-test to ensure the system is operating properly. The first voter of the day activates the machines for all subsequent voters. Although election officials did not say that setting up the polls created major problems for them, they did remark that they always have last-minute problems to deal with, such as absent poll workers and polling places canceling on the day of the election.
But election officials said that they have contingency plans for most of these problems. For example, in one small jurisdiction, the polls cannot open until all the poll workers are present. In this jurisdiction, each polling location has alternate poll workers in case a designated poll worker cannot be present on election day. However, in the November 2000 election, one polling location opened 45 minutes late because an alternate who lived a great distance from the polling place had to be summoned at the last minute. The schematic diagram in figure 41 illustrates the way that poll workers in one jurisdiction were instructed to position the voting booths, election judges' tables, signage, and the ballot box in each polling place. This diagram also shows the path the voter takes upon entering the polling place. State law determines the hours that polling places open and close for all jurisdictions within the state, as shown in table 21 in appendix VI. When the polls open and voters enter the polling place, they will generally follow the path laid out in figure 41. The particular steps and stops on the way to casting a ballot differ, but in most cases, voters must check in at an official table, and a poll worker must verify that they are registered and otherwise eligible to vote. When eligibility has been verified, the voter receives a ballot or an authorization to use a voting machine and proceeds to the voting booth. Once the voter's choices have been recorded on the ballot, the voter must make sure the ballot is cast. For punch card and paper ballots, the voter must take the ballot to the ballot box or ballot counter; for lever and DRE voting machines, the voter casts the ballot on the machine. At each step, there is the potential for problems or voter confusion.

Determining Voter Eligibility Often Created the Biggest Election Day Problems

We estimate that 30 percent of jurisdictions considered dealing with unregistered voters at the polls to be a major problem, and 20 percent considered other voter eligibility issues to be major problems at the polls. From the perspective of the election officials we contacted, the biggest problems on election day stem from resolving questions about voter eligibility. Provisional ballots, court orders, and affidavits were used in some jurisdictions to resolve voter eligibility problems. High numbers of voters with eligibility issues create challenges on election day: frustration for voters, long lines, and communication problems between the polls and election headquarters as poll workers work to resolve the issues. Election jurisdictions have different requirements for establishing that the voter is eligible to vote at a particular polling place on election day. As noted in figure 42, different states have different requirements for checking the voter's identity. Although many jurisdictions have stringent requirements for identifying voters and confirming their eligibility to vote, many others have very limited procedures. Twenty-three states require or authorize poll workers to inspect proof of the voter's identity, such as a driver's license or a birth certificate, before allowing him or her to vote. Thirty-eight states and the District of Columbia require a voter signature at the polls. Sixteen of these states provide for verification of the voter's signature based, for example, on a comparison with the voter's signature on a registration application. Before a voter receives a ballot, his or her eligibility must be confirmed.
Typically, the poll worker examines the registration list for the person's name. As discussed in chapter 2 of this report, jurisdictions produce poll books or lists of registered voters in a number of different ways. If the name appears on the list and other identification requirements are met, the voter is given a ballot and proceeds to vote. If the voter's name does not appear on the registration list, jurisdictions have different procedures for dealing with the question of the voter's eligibility. Twenty states plus the District of Columbia use some form of provisional ballot. Provisional balloting is typically identified by (1) the provision of a ballot to voters whose names are not on the precinct-level voter registration list, (2) the identification of such a ballot as some type of special ballot, and (3) the post-election verification of the voter's registration status before the vote is counted. Provisional balloting measures go by differing names among the states, including provisional ballot, challenged ballot, ballot to be verified, special ballot, emergency paper ballot, and escrow ballot. Five states use a form of affidavit ballot, whereby, upon completion of an affidavit, the vote is cast and counted without prior confirmation of the voter's registration. Table 22 in appendix VI details the provisions in the laws of different states for provisional voting and other procedures to address voters whose names do not appear on the registration list. Our mail survey showed that over three-quarters of the jurisdictions nationwide had at least one procedure in place to help resolve eligibility questions for voters who did not appear on the registration list at the polling place. Poll workers will often first try to resolve this type of problem by contacting election headquarters and verifying their registration list against the more current master registration list. If election headquarters cannot provide a definitive answer about a voter's eligibility, many jurisdictions allow the individual to vote some type of provisional ballot. Several election officials told us that provisional ballots are a great help in conducting elections. One director of elections said that, in order to keep the polling places operating smoothly, no person who asks to vote is denied a ballot. In this jurisdiction, poll workers are instructed to give a provisional ballot to persons whose names do not appear in the poll book. The provisional ballot will not be counted if the person is not a registered voter. In the 2000 general election, this jurisdiction distributed 18,000 provisional ballots to voters, and about half of these ballots were rejected, primarily because the person casting the ballot was not registered. This jurisdiction, unlike most, posted the names of those persons whose ballots were rejected and, therefore, not counted in the election. Voters whose ballots were rejected could appeal the decision. The procedures and specific instructions that jurisdictions develop to permit provisional voting differ across jurisdictions. For example, in some jurisdictions, the voter must sign a sworn statement to cast a provisional ballot, but not in others. Figure 43 shows a flow chart that officials in one very large jurisdiction developed to spell out for poll workers and voters the specific steps that have to be taken to vote a provisional ballot. Figure 44 illustrates the special envelope or sleeve that one very large jurisdiction uses for provisional ballots.
In this jurisdiction, the voter must place his or her punch card provisional ballot in the sleeve, fill in the required information, and sign the ballot. Our mail survey results indicate that nationwide only 12 percent of jurisdictions reported turning away persons who desired to vote but whose names did not appear on the list of registered voters. Several election officials we visited in jurisdictions that did not have provisional voting said that introducing it would be an important step in helping assure that all eligible voters are permitted to vote at the polls on election day. Additionally, they said that the option of provisional voting could help minimize other problems that interfere with the smooth operation of the polling place. According to the election officials we spoke with, resolving a high number of voter eligibility questions contributed to two other election day problems: communication between polling places and election headquarters and long lines at polling places. To help resolve these problems, election officials have proposed or taken the following steps:

Adding Telephone Lines: Some jurisdictions have added telephone lines both in the election headquarters office and at polling places to alleviate some of the communication problems. Other jurisdictions are providing poll workers cell phones to ensure that they have access to telephones to call headquarters.

Electronic Poll Books: One of the most promising solutions to this problem is to provide poll workers direct access to central registration files. If funds were available, officials in one very large jurisdiction said they would buy electronic poll books that can be directly linked to the central registration files.

We estimate that communication between the polls and the central election office was a major problem for 17 percent of the jurisdictions nationwide and that long lines at polling places were a major problem for 13 percent.

There is tremendous variability among jurisdictions in the tasks performed throughout election day. This variability is dictated not only by the voting system but also by the culture and traditions that have emerged in each jurisdiction. Typically, many of the tasks required to successfully conduct voting are handled routinely. However, election officials identified long lines and inadequate communication links as major challenges. Once officials have ascertained that the voter is eligible to vote, they give the voter the appropriate ballot or authorize the voter to use the voting machine containing the appropriate ballot. Some precincts have multiple versions of the ballot because some voters in the same precinct for the presidential election live in different jurisdictions for other races. In one medium-sized jurisdiction, the different ballot styles were color-coded so that the poll workers could quickly identify the appropriate ballot for the voter. Once a voter completes the ballot, how he or she casts it depends on the type of voting system. In precincts that count paper, punch card, and optical scan ballots centrally, the voter typically carries the ballot to an election official, who deposits it in the ballot box. Where there are precinct-level counters for punch cards or optical scan ballots, voters place their ballots in the automatic feed slot of the counting machine.
The precinct-level counting machine tells the voter if there is an error on the ballot, such as an undervote, an overvote, or a damaged ballot, giving the voter an opportunity to correct the ballot. To cast a ballot using electronic voting systems or lever machines, the voter pushes a “cast vote” button or pulls a lever to register the vote. Figure 45 illustrates how a voter would cast an electronic vote on a touchscreen DRE machine that resembles an ATM. Voters can change their votes on the DRE machine until they push the “vote” button. Many jurisdictions using other voting equipment, such as optical scan or punch card machines, permit voters who request them a second or third replacement ballot if they have spoiled the previous one. Our mail survey results indicate that nationwide, 71 percent of jurisdictions allowed voters to correct their ballots or get new ones if the original was spoiled. However, the voter must realize that he or she has made a mistake and ask for a new ballot. Once the ballot is cast, some jurisdictions require a checkout procedure, and some simply give the voter an “I voted” sticker. Election officials perform many other tasks throughout the day to ensure that elections run smoothly and that voters move expeditiously through the polling place. Culture and tradition influence how polling places carry out these tasks on election day. Some polling places are more indulgent, while others more rigorously follow required procedures. For example, jurisdictions using DRE machines require the voter to push a button to record his or her vote, but if the voter exits before properly recording the vote, jurisdictions follow different procedures. Election officials in a large jurisdiction using DRE machines told us that if the voter leaves the voting machine without pushing the green “vote” button, the poll worker at the machine is to void the vote. In contrast, in a different jurisdiction, the election official said the poll worker may reach discreetly under the curtain and press the “vote” button, thus recording the vote. In another jurisdiction, if a voter leaves without hitting the “cast vote” button, the poll worker can cast the vote only if two poll workers, a Democrat and a Republican, are present. For many places, an election is not only a civic event but also an occasion for socializing. In small voting jurisdictions, the poll workers often share potluck meals with one another. Neighbors and friends not only vote but also visit at the polls. In contrast, many large jurisdictions manage their polling places in a businesslike fashion, and voters want to get in and out of the polls as quickly as possible. While the polls are open, poll workers are responsible for making sure that no one violates electioneering laws, for example, by passing out campaign literature at the polling place. In one jurisdiction, a string is included in the supply box to mark off the “electioneering-free zone” outside the polling place. Periodically, the poll workers check to ensure that no one has left campaign or other materials in the voting booths, that the instruction cards are still posted and intact, and that the voting equipment is still functioning properly. Poll workers also monitor voters in the polling place and provide assistance and information as needed.
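The check-in logic described over the preceding pages, verify the voter against the poll book, fall back to election headquarters and the master list, and offer a provisional ballot as a last resort, can be summarized in a minimal sketch; the function names and data are hypothetical, and the actual rules vary by state.

```python
from enum import Enum

class BallotDecision(Enum):
    REGULAR = "issue regular ballot"
    PROVISIONAL = "issue provisional ballot"
    TURNED_AWAY = "no ballot issued"

def check_in(name: str, poll_book: set, master_list: set,
             provisional_allowed: bool) -> BallotDecision:
    """Hypothetical polling place check-in flow.

    1. Look for the voter in the precinct poll book.
    2. If absent, ask headquarters to check the master registration list.
    3. If still unresolved, offer a provisional ballot where state law allows.
    """
    if name in poll_book:
        return BallotDecision.REGULAR
    if name in master_list:          # headquarters confirms registration
        return BallotDecision.REGULAR
    if provisional_allowed:          # eligibility resolved after election day
        return BallotDecision.PROVISIONAL
    return BallotDecision.TURNED_AWAY

poll_book = {"Ada Smith", "Ben Jones"}
master = poll_book | {"Cam Lee"}     # Cam Lee registered late, missing from poll book
for voter in ("Ada Smith", "Cam Lee", "Dee Ray"):
    print(voter, "->", check_in(voter, poll_book, master, provisional_allowed=True).value)
```

In jurisdictions without provisional voting, the last branch is where the 12 percent of jurisdictions that reported turning voters away would land.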
Our mail survey results indicate that nationwide, 51 percent of jurisdictions instructed poll workers to ask voters, before they voted, whether they had any questions about operating the voting equipment or casting their votes. This assistance may include helping handicapped voters. In one jurisdiction, voters who call in advance may arrange for curbside voting, in which case the town clerk and another poll worker deliver ballots to the voter's vehicle. Although many jurisdictions are required to have voting instructions on every machine, poll workers also provide other types of voter education. As illustrated in figure 46, poll workers can explain how to complete ballots before the voter enters the voting booth. Most of the jurisdictions we visited identified several types of assistance that are offered to voters at the polls, although the amount and type of voter education at the polls varied. Of the voting jurisdictions nationwide, our mail survey results indicate that 84 percent made written instructions available for voters to review before voting, and 37 percent provided demonstrations on how to vote through a videotape or in person. At some polling places, poll workers hand voters an instruction card to take into the voting booth with them. When introducing a new technology, one jurisdiction dedicated a voting machine for teaching purposes, allowing voters to familiarize themselves with the equipment before actually voting. Other places have continuously running videos for voter education. Long voter wait times are a problem that election officials try to avoid. Our mail survey results indicate that 13 percent of jurisdictions in the United States considered long lines at the polling places to be a major problem in the November 2000 election. These results also indicate that 88 percent of jurisdictions did not collect information on the average time that it took voters to vote in November 2000; thus, the causes of long wait times remain unclear. However, some jurisdictions reported to us anecdotally that the length of time voters must wait is affected by ballots that include many races and issues. Underestimating voter turnout also may contribute to long wait times. Some jurisdictions reported that their ballot was so long that it took voters a long time in the voting booth to read it and vote. As a result, lines backed up, and some voters had to wait for over an hour to cast their votes. Officials in a very large jurisdiction said that their voters experienced long wait times in part because redistricting caused confusion among voters, who often turned up at the wrong polling places. Election officials cited inadequate communication links from the polling places to headquarters as a problem. For instance, officials from a medium-sized jurisdiction told us that their phones were inadequate to handle the large volume of calls coming into the office, so poll workers found it difficult to get through with their questions. For the November 2000 election, some jurisdictions dealt with the problem of inadequate communication links by installing more phone lines or using cell phones. One small jurisdiction distributed cell phones to poll workers whose polling places did not have phone lines. A large jurisdiction provided all polling places a cell phone.
In another large jurisdiction, even though more phone lines were installed in election headquarters offices and additional staff were added to answer questions from precincts and voters, the phone system was overloaded and down at various points during election day. Overall, election officials reported a high degree of satisfaction with how the November 2000 general election was conducted in their jurisdiction. However, jurisdictions did not comprehensively collect and report on their performance. According to our mail survey, four-fifths of the jurisdictions nationwide did not seek feedback from voters on how well voter registration, absentee voting, polling place locations and times, voting equipment, polling place procedures, or other areas were administered. Some jurisdictions conducted selective evaluations of their elections. For example, some jurisdictions maintained information on overvotes and undervotes, but many did not. In one large jurisdiction, election officials conducted a survey of poll workers after the election to obtain their views of problems encountered on election day. In one medium-sized jurisdiction, officials performed an evaluation of their voting procedures. Many jurisdictions maintained logs of voter complaints. An election official from a large jurisdiction said that they do not need to solicit feedback from the voters because they receive enough unsolicited feedback. In summary, election officials face many challenges as they pursue their goal of planning and conducting an election that permits eligible citizens to cast their ballots without difficulty on election day. The following are the key challenges that election officials faced as they planned and conducted the November 2000 general election and their views on how these challenges might be addressed. Local election officials were generally satisfied that the election of November 2000 was conducted well in their jurisdictions. However, many also identified major problems that they faced, particularly in recruiting qualified poll workers who, for nominal pay, would commit to a long election day, and in handling a range of problems associated with determining voter eligibility at polling places on election day. There is wide diversity in how elections are conducted within and across states. Often these differences reflect local needs and customs. Local election officials frequently told us that “one size does not fit all.” However, local election officials acknowledge that standardization of certain aspects of election administration may be appropriate at the state and even the federal level. Based on our mail survey, we estimate that over 14 percent of local election officials nationwide are supportive of federal development of voluntary standards for election administration similar to the voluntary standards now available for election equipment. An additional 26 percent support federal development of mandatory standards for election administration. Few local election officials systematically collected information on the performance of the people, processes, and equipment on election day or conducted post-election assessments to help them understand the impact of some problems on the election. For example, few of the jurisdictions surveyed voters to obtain their views on how easy it was to understand the ballots or other voting procedures. Additionally, few states routinely ask for information on or compare the problems and performance of local election jurisdictions. 
However, some local election officials believe that greater sharing of information on best practices and systematic collection of standardized information on elections could help improve election administration across the United States and within states. Some also suggested this would be an appropriate role for a national election administration office and clearinghouse. If federal funds are made available for election reform, local officials believe that such funds should not be limited to equipment replacement; they want the option to use funds for other improvements to election administration, such as increasing poll worker pay or voter education. They also believe that they should be able to use such funds to help with what they consider their most pressing needs. In the jurisdictions we visited, officials identified purchasing new equipment or software (for registration, absentee voting, or election day voting), increasing voter education, and increasing poll worker pay as their top priorities for the use of federal funds. The polls close on election day. The votes are counted, and final election results are reported. It sounds simple, but the presidential election in Florida in November 2000 revealed just how difficult the vote counting process can be as the state scrambled to provide an accurate count of the votes cast. Problems with vote counting can occur because of the way people, whether election officials or voters, interact with technology. For example, in New Mexico, an election official in one county incorrectly programmed the software used to count votes. The result was that more than 20,000 votes cast for president were not included in the initial counts, and the final vote totals could not be determined until the problem was resolved. In another example, the Clerk for Cook County, Illinois, reported that a defect in some of the templates used for punch card votes may have accounted for one-third of the 123,000 ballots with errors in the November 2000 election.

The Methods Used to Count Votes Varied Among the Jurisdictions but Had Certain Steps in Common

The Greatest Vote Counting Challenges Occur Not When the Margin of Victory Is Wide or Ballots Are Properly Marked, but When Elections Are Close or Voters Mark Their Ballots in Ways That Prevent the Vote Counting Equipment from Reading and Counting the Vote

The methods used to count votes vary among jurisdictions, depending on the type of voting method or methods used, the type of ballots being counted, and whether some or all ballots are counted at the precinct or at a central location. However, all vote-counting methods have certain steps in common.
Following the close of the polls, election officials and poll workers generally take a number of basic steps to count or tabulate votes, including:

- securing voting machines and ballots so that no additional votes can be cast;
- accounting for all ballots, reconciling any differences between the total number of ballots on hand at the beginning of the day and the number of voters who signed in at the polling place, the number of ballots distributed, and/or the number of ballots cast;
- qualifying and counting mail absentee ballots and provisional ballots (i.e., ballots issued to voters whose voter registration could not be confirmed at the polling place);
- securely transferring—electronically, physically, or both—ballots and election results (if ballots are counted at the polling place) to a central location;
- canvassing the votes, which includes reviewing all votes by precinct, resolving problem votes, counting all valid votes (absentee and other preelection day votes; regular election day votes; provisional election day votes) for each candidate and issue on the ballot, and producing a total vote for each candidate or issue;
- certifying the vote, in which a designated official certifies the final vote totals for each candidate and each issue on the ballot within a specific timeframe;
- conducting any state-required recounts and responding to any requests for recounts; and
- responding to allegations regarding a contested election.

Vote counting is not necessarily completed on election day or even on the day after. For example, nine states and the District of Columbia allow absentee ballots to be counted if they arrive after election day. All but one of these states, however, require that the absentee ballot be postmarked on or before election day. Canvassing the vote—when election officials combine totals for each type of vote and the votes from each voting precinct into a total vote for each candidate and issue on the ballot—usually occurs one or more days after election day. With regard to certification of the vote, some states have a specific deadline following an election, and others do not. The election board or official may order a recount or partial recount. Most state codes contain specific provisions for conducting a recount, which may be mandatory if there is a tie vote or if the vote for a specific office falls within a certain margin of victory, such as one-half of 1 percent. If there is no recount, or when the recount has been resolved, the local results are totaled, certified, and reported to the state's chief election official.

The greatest vote counting challenges occur not when the margin of victory is wide or ballots are properly marked, but when elections are close or voters mark their ballots in ways that prevent the vote counting equipment from reading and counting the vote. This can occur, for example, when voters circle a candidate's name on an optical scan ballot instead of filling in the oval, box, or arrow beside the candidate's name. In close elections where there are a large number of ballots that vote counting equipment cannot read, questions may arise about the accuracy of the vote count, and recounts may be required or election results contested.
Local Election Jurisdictions May Need to Count Several Different Types of Votes That Were Cast at Different Times Using Different Voting Methods

Votes May Be Counted at the Precinct, at a Central Location, or at a Combination of the Two

The Counting of Each Type of Vote May Be Done by Some Type of Vote Tabulating Machine, by Hand Count, or a Combination

To determine the final vote count, local election jurisdictions may need to count several different types of votes that were cast at different places using different voting methods. These types of votes include votes cast at individual polling places by registered voters who appear in the registration lists for that precinct; votes cast at individual polling places by voters who do not appear in the registration lists for that precinct and whose eligibility to vote cannot be determined at the polling place; absentee votes cast by mail before election day; and absentee and early votes cast in person before election day. Each of these types of votes may be counted at the precinct, at a central location, or at a combination of the two. In one medium-sized jurisdiction, absentee votes exceeded the number of votes cast at the voting precincts on election day in November 2000. Absentee ballots may be counted centrally, while the votes cast at the polling place by eligible voters may be counted centrally or at the precinct. The results of our national mail survey indicate that many jurisdictions count votes both centrally and at the precinct. We estimate that about 52 percent of the local election jurisdictions nationwide counted votes centrally and about 58 percent counted votes at the precinct. Of the optical scan jurisdictions, about 56 percent counted votes centrally, and about 51 percent counted votes at the precinct. We estimate that nationwide, of those jurisdictions that counted votes at a central location, about 70 percent of all jurisdictions and 90 percent of optical scan jurisdictions programmed their equipment to reject or separate ballots that the equipment could not read.

The counting of each type of vote may be done by some type of vote tabulating machine, by hand count, or a combination. According to our analysis of available data on voting jurisdictions, about 2 percent of the approximately 186,000 precincts nationwide are in jurisdictions that hand-count paper ballots. The remaining 98 percent of the precincts use some type of vote-counting equipment. The 27 local election jurisdictions we visited illustrate the wide variation among election jurisdictions. Twelve of these jurisdictions used one voting method for casting election day ballots and a different method for casting absentee or early voting ballots. Ten jurisdictions used either DRE or lever equipment on election day. With DRE and lever equipment, voters cast their ballots directly on the equipment; they do not use individual paper ballots. Thus, DRE and lever jurisdictions must use a different voting method, one based on some type of individual paper ballot, for mail absentee voting. Fourteen jurisdictions used the same voting method for election day and absentee and early voting ballots—all were jurisdictions in which voters cast their votes on individual punch cards or paper ballots. Eighteen of the 27 jurisdictions counted ballots cast on election day at the precinct, and 10 of the 27 counted absentee ballots at the precinct. In one jurisdiction, absentee ballots were qualified for counting at the precincts but counted centrally.
One jurisdiction counted mail absentee ballots centrally but counted other preelection day ballots at the precinct. Details for each jurisdiction are shown in table 23 in appendix VII.

The way in which votes are counted on each type of voting equipment is described in detail in chapter 1. Here we focus on the ways in which election jurisdictions used those technologies. After voting, the voter deposits his or her ballot in a ballot container placed in the polls. The ballot may remain in a secrecy envelope or slip from the secrecy envelope as it is deposited into the ballot container. After the polls close, the ballots are transported to a central-count location where they are fed into a tabulator and counted by precinct. After the completion of the tabulation process, the election workers responsible for managing the counting center use the tabulator to generate a report, which lists the voting results by precinct and by candidate. Figure 47 shows a central-count tabulation machine. Nationwide, of those jurisdictions that used central vote counting equipment in November 2000, about 70 percent programmed the vote counting equipment to reject or separate ballots that the equipment could not read. Almost 90 percent of jurisdictions that used central-count optical scan equipment did this. Where central counting was used, voters did not have an opportunity to correct ballots that could not be read by the counting equipment.

Votes may also be counted at the precinct. Hand-counted paper ballots are usually counted at the voting precinct. Lever and DRE equipment is designed to automatically tabulate the votes cast on each machine at the precinct. Generally, punch card jurisdictions use central counting equipment. However, punch cards may be counted at the precinct in some cases. One advantage of precinct counting is that the counting equipment at each precinct can be configured to notify voters of errors they have made on their ballots that would prevent any of their votes from being counted. This includes overvotes—voting for more than the allowed number of candidates for an office—and undervotes—voting for no candidates or fewer than the permitted number of candidates for an office. DRE and lever equipment can be programmed to prevent voters from casting overvotes. DRE equipment can also be programmed to alert voters to undervotes.
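To illustrate the kind of check a precinct-count tabulator performs, here is a minimal Python sketch of overvote and undervote detection. It is a hypothetical illustration of the logic described above, not any vendor's actual software; the function name and ballot structure are our own.

```python
def check_contest(selections: list[str], seats: int) -> str:
    """Classify one contest on a ballot as an overvote, an undervote, or valid."""
    if len(selections) > seats:
        return "overvote"   # more choices than allowed; the vote cannot be counted
    if len(selections) < seats:
        return "undervote"  # fewer choices than allowed, including none at all
    return "valid"

# A ballot with two marks for President and no mark for Senator.
ballot = {"President": ["Candidate A", "Candidate B"], "Senator": []}
for contest, chosen in ballot.items():
    print(contest, check_contest(chosen, seats=1))
# President overvote
# Senator undervote
```

A precinct-count tabulator configured this way can return the ballot and prompt the voter to correct it before leaving the polls; a central-count tabulator, processing ballots after the fact, can only reject or separate such ballots.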
A jurisdiction may have had precinct count technology available but could not use it in the November 2000 election. For example, Cook County, Illinois, which includes Chicago, had the technology for its punch card ballots but was prohibited by state law from using it. All five of the punch card jurisdictions we visited used central counts, where the punch cards were collected from the precincts and sent to a central-count location. About half of optical scan jurisdictions used precinct counts in November 2000.

Generally, in jurisdictions that count ballots by hand at the precinct, election workers remove ballots from the ballot container and tally the valid votes. We visited two small jurisdictions that counted votes by hand. As described by local election officials in one of these jurisdictions, each precinct filled out a certificate of results once the counting was complete. The certificate showed how many votes each candidate received. Poll workers also recorded the number of unused, spoiled, challenged, and absentee ballots on a separate form. When the poll workers had completed the certificate, they posted a copy of the precinct results outside the precinct and sent another to the county clerk's office.

With lever machines and DREs, voters do not receive individual paper ballots to mark. Poll workers take counts at the precinct from lever machines. For lever machines, the votes cast by each voter trigger mechanically controlled tumblers, which are concealed in a sealed compartment at the back of the machine. After the polls close, poll workers open the sealed compartment and record the vote totals shown on the tumblers. After the vote results are recorded, the machine is resealed to prevent tampering. Some lever machines can print a paper copy of the vote totals shown on the tumblers. To get the printed copy, a poll worker must pull a sheet of roll paper over the tumblers and rub it to capture the number indicated for each candidate in each contest and for each issue. Figure 48 shows the back of such a machine and the sheet of paper with the vote totals.

With DREs, the votes cast by the voter are stored in the unit's memory component after the voter indicates that he or she has completed the voting process, usually by pressing a “Vote” button or screen. After the close of the polls, the poll workers responsible for managing the precinct use the unit to generate a report, which lists the voting results. Different methods may be used to transmit the results. For example, in one medium-sized jurisdiction, the DRE cartridges were delivered to the various municipal clerks' offices, where the voting results were transmitted electronically to the county clerk's office. In a large jurisdiction, the DRE cartridges were transported to one of seven counting centers. The results were transmitted over the county's secure data network to the registrar's office.

With precinct-based optical scan equipment, the voter removes the ballot from the secrecy envelope and feeds it into a tabulator placed in the polls. “Read heads” engineered in the tabulator identify the votes cast on the ballot and electronically record them in a memory component housed in the tabulator. After passing over the read heads, the ballot is channeled into a storage bin, where it remains until the close of the polls. After the close of the polls, the election workers responsible for managing the precinct use the tabulator to generate a report that lists the voting results. Figure 49 shows a precinct-count optical scan machine.

Voting Equipment Can Be Locked and Ballots Sealed so That the Voting Results May Not Be Altered Once the Precinct Has Closed

Poll Workers May Use Some Method to Ensure That All Ballots Are Accounted for at Precinct Closing

Once a precinct has closed, voting equipment can be locked and ballots sealed so that the voting results may not be altered. When this is done depends on whether votes are counted at the precinct or centrally. In jurisdictions in which all votes are counted centrally and in precinct-count jurisdictions in which absentee and provisional votes are counted centrally, poll workers can lock voting equipment and secure ballots shortly after the polls close. In jurisdictions in which absentee and provisional ballots are counted at the precincts, one or more precinct counters may remain unlocked so that poll workers may use them to count these ballots after the polls close. The procedures for securing and locking voting equipment vary by the type of voting equipment used.
For example, for optical scan equipment, poll workers may read an “end” ballot into the optical scan counter at the precinct, which instructs the equipment to accept no more ballots and locks it, at which point the counter begins tallying the vote. For DREs and some optical scan equipment, poll workers may use a key to initiate the program that tabulates the total votes counted for each candidate and issue from the ballots read by the equipment. This procedure can lock the vote reading mechanism in the equipment. Poll workers can lock lever machines so that no additional votes can be recorded. However, in precincts at which absentee and provisional votes are counted, an optical scan counter or a DRE may remain unlocked so that it may be used to count these votes.

In conjunction with securing voting machines and ballots at the precinct, poll workers may use some method of ensuring that all ballots are accounted for at closing. Jurisdictions can also employ one or more methods to reconcile the number of blank ballots on hand at the voting precinct at the end of election day (including any supplemental ballots provided during the day) with the number of ballots issued or the number of voters who signed in. This reconciliation may take place before or after the votes are counted at a precinct. In jurisdictions that use central count, this reconciliation can occur at the precinct before poll workers transport the ballots to the central tabulation center. Figure 50 shows a form that poll workers used at one of the jurisdictions we visited for reconciling the ballot count.

Our mail survey of local election jurisdictions indicates that most jurisdictions nationwide made such comparisons. We estimate that in November 2000 about 88 percent of jurisdictions nationwide compared the number of ballots cast to the number of voters who signed in to vote on election day. We estimate that about 64 percent of jurisdictions nationwide compared the total number of ballots cast, spoiled, and unused to the original supply of ballots. Nationwide, we estimate that about 78 percent of optical scan jurisdictions did such a comparison, but only about 1 in 10 DRE jurisdictions took this step. This difference may be due to the differences between voting technologies that use individually marked paper ballots and those that do not. Except for voters who cast a provisional ballot, jurisdictions that used DRE or lever equipment had no paper ballots for voters to complete. About 6 percent of jurisdictions used some other type of procedure.

A medium-sized punch card jurisdiction we visited provided an example of other types of procedures used to reconcile ballots and voters. There, election officials said that election judges counted the number of ballots in the ballot box after the polls closed and compared the total with the number of ballots cast. If there was a discrepancy, the ballots were recounted and the applications checked to make sure they were numbered correctly. If the count was a ballot short, it was noted. If the count was a ballot over, a ballot was randomly withdrawn from the box and placed in an envelope for excess ballots. Two election judges took the ballots in a locked transfer case to the counting center. The ballots were machine tabulated and a count provided. If the count did not match the judges' count, the ballots were retabulated by a different machine. If the count still did not match, the ballots were sent to a discrepancy team where they were hand counted again. After this, the ballots were once again machine tabulated. These processes followed guidelines provided by the state election board.
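Because these reconciliations are simple arithmetic over a handful of end-of-day totals, the checks can be made concrete with a short sketch. The following Python fragment is illustrative only: the field names and the two comparisons are our distillation of the checks described above, not any jurisdiction's actual form or tally software.

```python
from dataclasses import dataclass

@dataclass
class PrecinctTotals:
    """End-of-day figures a precinct might report (illustrative fields only)."""
    ballot_supply: int      # ballots on hand at opening, plus any supplemental ballots
    voters_signed_in: int
    ballots_cast: int
    ballots_spoiled: int
    ballots_unused: int

def reconcile(p: PrecinctTotals) -> list[str]:
    """Return a list of discrepancies; an empty list means the counts balance."""
    problems = []
    # Check 1: ballots cast should equal the number of voters who signed in.
    if p.ballots_cast != p.voters_signed_in:
        problems.append(f"cast ({p.ballots_cast}) != signed in ({p.voters_signed_in})")
    # Check 2: cast + spoiled + unused should account for the original supply.
    accounted = p.ballots_cast + p.ballots_spoiled + p.ballots_unused
    if accounted != p.ballot_supply:
        problems.append(f"accounted for ({accounted}) != supply ({p.ballot_supply})")
    return problems

# A precinct that is one ballot short of its supply is flagged for review.
print(reconcile(PrecinctTotals(1000, 612, 612, 3, 384)))
# -> ['accounted for (999) != supply (1000)']
```

As the survey estimates above indicate, some version of the first check was nearly universal in November 2000, while the second, which requires individually issued paper ballots, was far less common in DRE and lever jurisdictions.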
Jurisdictions May Use Different Equipment to Count Absentee or Provisional Ballots Than Regular Ballots Cast at the Voting Precinct

Absentee or Provisional Ballots May Also Be Counted at a Different Place Than Regular Ballots Cast at the Voting Precinct

Both mail absentee and provisional ballots must first be qualified as eligible for counting. For mail absentee ballots, this may include checking postmarks, voter signatures, or other required items on the outer envelope containing the ballot envelope. For provisional ballots, this means determining that the voter was registered and eligible to vote in the precinct in which the provisional ballot was cast. Absentee and provisional ballots may be counted at a different place, using different types of vote counting equipment, than those cast at the voting precinct on election day. Different equipment may also be used to record the votes. There were considerable variations in how absentee ballots were counted; for example, by hand at the precinct or by machine at the precinct or centrally. One large jurisdiction we visited used DRE equipment at the polling place in November 2000 but paper ballots for absentee ballots. These paper ballots were counted by hand at the precinct, and the votes were entered into a DRE unit at the precinct by poll workers. Two other DRE jurisdictions we visited also used DRE equipment at the polling place but counted both absentee and provisional ballots at a central place, using optical scan equipment. However, in one of these jurisdictions, voters casting early voting ballots used an optical scan machine that notified voters if their ballot could not be read, allowing them an opportunity to correct errors. Absentee ballots were initially counted at a central location after a review by an absentee board. Voting results stored on cartridges from the optical scan equipment for both absentee and early voting ballots were tabulated at a central location, using software customized for each election.

Jurisdictions used different methods to allow a person to vote when his or her name did not appear on the official voter registration list and his or her voter registration could not be confirmed at the voting precinct. In such cases, jurisdictions in some states provided voters with a provisional ballot. Provisional ballots were generally kept separate from other ballots and researched by election officials to determine the voter's eligibility to vote. Generally, only those ballots cast by voters whose eligibility had been confirmed were counted. However, provisional ballots were not always counted. In a small jurisdiction we visited, for example, if a voter was not listed in the voting precinct's list of registered voters, local election officials searched for the person's name by computer using a statewide database of voter registration records. If the voter's name still could not be found, the voter was permitted to fill out an “escrow” ballot, this jurisdiction's term for provisional ballots.
However, these provisional votes were not counted unless the election was close enough that the provisional votes, if all cast for the same candidate, would be sufficient to change the outcome of the election for one or more offices on the ballot. If the number of provisional ballots was sufficient to change the outcome, the ballots would be counted only after additional research was completed to verify each voter's registration status. In one large jurisdiction, election officials said that, partly to avoid confrontation with people on election day, they provided provisional ballots to individuals who appeared at the front desk of the central election office and stated that they were registered to vote and wished to vote. If a person's registration was confirmed, his or her vote was counted with all the rest. Election officials tracked the number of provisional ballots that could not be counted because they found that the person was not registered. In the November 2000 election, 1,302 provisional ballots in this jurisdiction were rejected from the count—less than one-half of 1 percent of the total 299,776 votes cast in the election.

A Canvass of the Election Results Is Usually Conducted a Day or Two After Election Day by the Jurisdiction's Canvass Board or an Official, at Which Time All the Precinct Results Are Tabulated Together

Eight of 27 Election Jurisdictions Selected for Our Site Visits Reported Problems With the Vote Counting Equipment, Involving Either Technical Difficulties or Human Error That Caused Problems in Obtaining an Accurate Count

Once the polls close and the ballots are transported to a central location where they are counted, or voting results are transmitted from the polling place to a central location, the canvassing process may begin. Canvass is the term used in many states to describe the process of vote counting, including aggregating the votes from all precincts to obtain the jurisdictional totals, and from all jurisdictions to obtain statewide totals. A recanvass is a repetition of the canvass. A canvass of the election results is usually conducted a day or two after election day by the jurisdiction's canvass board or an official. Once the canvass is completed, the canvass board or other official certifies the final vote counts and issues the official results by a specific date after the election; dates vary by state.

The canvassing process varies widely, as illustrated by several examples from our site visits. The process may be conducted by a canvassing board, board of elections staff, or bankers and lawyers hired for the canvass. It may include provisional ballots in the canvassed totals. The process can involve some hand counts, a comparison of results from individual voting machines to precinct totals or totals reported to the state, or a comparison of hand counts of absentee votes to the machine counts for absentee votes. Regardless of how canvassing is done, its principal purpose is to produce an accurate vote count.

In one medium-sized jurisdiction, the election canvass process consisted of an internal audit conducted by the canvass board. Canvass board duties included processing absentee ballots, checking postmarks, verifying signatures, opening envelopes, and sorting ballots. The canvass was required by state law to ensure the accuracy of election results. The canvass board certified special elections or primary elections on the tenth day after the election and general election results on the fifteenth day after the election.
During the canvass process, absentee and provisional ballots not counted on election night were researched to validate their eligibility to be counted. In addition, the canvassers conducted an audit and reconciliation of the number of signatures indicated by the poll inspector on the poll roster with the number of ballots tabulated by the counter. The canvass was completed with the certification and issuance of official election results.

In another medium-sized jurisdiction, officials noted that the voting machines were canvassed after the polls closed. All of the paper ballots, including affidavit ballots (this jurisdiction's term for provisional ballots) and emergency ballots, were returned to the Board of Elections. If required, affidavit ballots and absentee ballots were researched. The paper ballots were counted and the results tallied. The ballots were counted during the 7 days after the election at the county courthouse office. Officials said the lever machine totals were recanvassed by Board of Elections staff, including one Democrat and one Republican.

In a large jurisdiction, bankers and lawyers were hired for the canvass and worked in separate banker and lawyer teams; each team prepared its own vote tally sheet. Bankers did not review the tally until the lawyers were done. Write-in votes for candidates were added as adjustments to DRE machine tabulations. The teams verified the information on the tally sheets by comparing information from each DRE machine's paper tape to printed results collected by the State Election Director's office. Absentee votes were tallied by hand and then compared to the machine's reported count for absentee votes. This was done to confirm the accuracy of the hand-counted absentee vote totals entered into one of the DRE machines at each precinct. The Chancery Court certified the canvass in the county. The canvass process began the Thursday following election day. Two judges from different political parties are to resolve any challenges to the vote count.

As discussed in the section on voting technology, pre- and post-election tests were widely performed on voting equipment, at precincts and central counting locations, to make sure the equipment was operating properly, to check for accuracy, and to guard against tampering. In addition to testing the voting equipment, a manual recount may be routinely performed on a small percentage of ballots as a check on the validity and accuracy of the machine count. Operational accuracy tests are most difficult with DRE and lever equipment, where there is no ballot document and the count is recorded at the voting booth on each individual machine. A thorough preelection test would require hundreds of simulated votes to be placed on each machine.

Election officials in the 27 sites we visited were generally satisfied with the performance of the vote casting and tabulating equipment used in the November 2000 election. Officials in 18 jurisdictions reported no problems with vote counting; 8 sites reported problems; and 1 site provided no response. The problems reported by the 8 sites mostly concerned the vote counting equipment, involving either technical difficulties or human error. Other problems mentioned included reconciling hand and machine counts with poll books and the counting of absentee and provisional ballots.
Some of the technical difficulties included:

- punch cards that stuck together and could not be read by the counting machines that were fed stacks of cards at a time;
- punch card counting machines that froze up during the count;
- 5,000 regular and absentee punch card ballots that had to be remade because they could not be machine read;
- slight variances in the punch card ballots produced by two different card vendors that made it difficult to use the machines that counted the punch cards;
- optical scan equipment that stopped working because it became clogged with paper dust due to the size of the ballot and the number of ballots received; and
- integrating the operations of two different DREs that were being tested in the same jurisdiction.

Some of the human errors that contributed to problems in counting the vote included incorrect marks by voters on optical scan ballots that could not be read by the counting equipment and programming errors in the software used to tally optical scan ballots. Among those jurisdictions that reported no problems, officials from one site mentioned some growing pains with remote tallying. One reported that checks and balances used throughout the day prevented counting problems, and another reported no problems since switching to DRE equipment. The remaining sites reported a “smooth election” or simply no problems in counting the vote.

State Guidance on What Is a Proper Mark on a Ballot and How to Interpret Variations From Proper Ballot Marks Varied

Some States Are Voter Intent States, and Election Officials Are Tasked With Determining How a Voter Intended to Cast a Vote When a Question About the Ballot Arises

Other States Do Not Try to Interpret Voter Intent, but Instead Rely Solely on Specific Voter Actions

In the canvassing process, election officials generally must consider issues regarding ballots that have not been marked properly—for example, an optical scan ballot in which the voter has circled a candidate's name instead of completing the oval, box, or arrow next to the candidate's name. State guidance on what is a proper mark on a ballot and how to interpret variations from proper ballot marks varies. Each type of voting equipment presents different issues.

What constitutes a proper mark on a ballot can differ based on the type of voting method used. With DRE and lever equipment, voters record their vote directly on the equipment. Because there is no separate ballot, there is generally no need for a specification of what constitutes a properly marked ballot. With paper, optical scan, and punch card ballots, there is the possibility that such a determination would need to be made. With these methods, a voter must make the proper mark or punch to indicate which candidate or issue he or she is voting for. If the mark is not made correctly, it can result in an improperly marked ballot that may be subject to review. In some jurisdictions, depending on their requirements, these problem ballots may be reviewed to determine a voter's intent; in other jurisdictions, they will not be. On the basis of our survey of state election directors, 30 states and the District of Columbia reported that they had a state law or other provision that specified what is a proper ballot marking for each voting method. Definitions regarding what constitutes a proper ballot marking for paper, punch card, and optical scan ballots varied by state, where they existed, and by the type of machine.
Some statutes did not contain specific definitions of proper ballot markings, but instead referred to instructions on the ballot or to requirements of the voting method. For example, in Maine “the voter must mark the ballot as instructed in the directions on the ballot to indicate a vote for the name of each nominee for whom the voter wishes to vote.” In Iowa “the instructions appearing on the ballot shall describe the appropriate mark to be used by the voter. The mark shall be consistent with the requirements of the voting system in use in the precinct.” Other states had statutory provisions that were more specific regarding the type of marks that would count as a valid vote. For paper ballots, for example, Michigan was specific about the type of proper marks that should be counted as a valid vote, requiring that a cross, the intersection of which is within or on the line of the proper circle or square, or a check mark, the angle of which is within a circle or square, is valid. Some states also provided specific instructions on how optical scan ballots should be marked. For example, Alaska requires that the mark be counted if it is substantially inside the oval provided, or touching the oval so as to indicate clearly that the voter intended the particular oval to be designated. In Nebraska, to vote for a candidate, “the registered voter shall make a cross or other clear, intelligible mark in the square or oval to the left of the name of every candidate, including write-in candidates, for whom he or she desires to vote.”

For states that use punch card ballots, the definitions varied from general instructions on what constitutes a proper ballot mark under all types of voting methods, as previously described, to more specific instructions. For example, in Massachusetts, the instructions state “a voter may vote by punching holes in a data processing card.” In Texas, in any manual count, the instructions state a punch card ballot may not be counted unless “(1) at least two corners of the chad are detached; (2) light is visible through the hole; (3) an indentation on the chad from the stylus or other object is present and indicates a clearly ascertainable intent of the voter to vote; or (4) the chad reflects by other means a clearly ascertainable intent of the voter.”

The problem of trying to interpret variations from proper ballot marking was clearly evident in the November 2000 presidential election in Florida. Issues arise with paper, optical scan, and punch card ballots, not when the ballots are marked properly for the type of ballot used, but when there are variations from proper marking. In our survey of state election directors, 25 states and the District of Columbia reported that they had a state law or other provision that specified how to treat variations from proper ballot markings. In addition, some states are voter intent states, and election officials are tasked with determining how a voter intended to cast a vote when a question about the ballot arises. Other states do not try to interpret voter intent but instead rely solely on specific voter actions. Some states had general statutory provisions that covered all types of voting methods. For example, California law requires that procedures be adopted for use with each voting method, and each set of procedures addresses this issue in detail. In California, these procedures are set out in a separate voting procedures manual.
Some states had specific guidance for different types of voting methods. Some states had specific instructions on how to interpret variations from proper markings on paper ballots. Minnesota law contains detailed specifications as to where the mark “X” on the ballot can be placed and still be a valid vote, and regarding the use of marks other than the mark “X.” New Jersey law is also specific as to where the mark is placed and the type of mark to make on the ballot. Marks must be substantially in the square to the left of the candidate's name and must be substantially a cross, plus, or check.

State law differed among some states for interpreting variations from proper marking on optical scan ballots. In Illinois, a voter casts a proper vote on a ballot sheet by making a mark in a designated area. A mark is an intentional darkening of the designated area on the ballot sheet, and shall not be an “X,” a check mark, or any other recognizable letter of the alphabet, number, or other symbol which can be recognized as an identifying mark. On the other hand, Wisconsin requires that a mark be counted if a voter marks a ballot with a cross or other marks within the square to the right of the candidate's name, or any place within the space in which the name appears, indicating an intent to vote for that candidate.

Some state laws are specific on how to count punch card ballots, but these laws can vary by state. For example, under a recent amendment to Ohio law, effective August 2001, a chad with three corners attached to a ballot and detached at one corner must not be counted as a vote. Under a recently passed Nevada law, effective October 2001, a chad with three corners attached to the ballot and one detached must be counted as a vote. Other punch card states provided general guidance, or none at all, for interpreting variations from proper marking in their directives or procedures. In Arizona, according to the Secretary of State's procedures manual for inspection boards, board members are to remove hanging chads prior to tabulating the ballots; “hanging chad” means hanging by one or two corners. In Oregon, a Secretary of State directive provides the instruction to “remove loose chad to insure that voters' choices are accurately reflected in the count,” but there were no specific instructions about how many corners must be detached for a chad to be counted.

We estimate that nationwide about 32 percent of local election jurisdictions had no written instructions, either from the state or the local jurisdiction, on how to interpret voter intent, such as stray marks on ballots or partially punched chads on punch card ballots. As discussed earlier, states have varying requirements for the counting of improperly marked ballots. Even if a state has specified how a ballot should be marked, there are often variations from those ballot markings that are allowed to be counted. Beyond counting ballots with specified variations from proper ballot markings, many states specifically require election officials to count ballots if the “intent of the voter” can be determined. In our survey of state election directors, 31 states and the District of Columbia reported that they make some determination of voter intent. State statutes specifically address voter intent in a number of different contexts, including the count of all votes, absentee votes, write-in votes, manual recounts, and others. Certain states apply either an “intent of the voter” standard or an “impossible to determine the elector's choice” standard in the review of ballots.
For example, Vermont law states that “in counting ballots, election officials shall attempt to ascertain the intent of the voter, as expressed by his markings on the ballot.” Illinois law states that “if the voter marks more candidates than there are persons to be elected to an office, or if for any reason it is impossible to determine the voter's choice for any office to be filled, his ballot shall not be counted for such office….” Although many states allow for a determination of voter intent, it is difficult to describe how this determination is made in each of the states, because the responsibility is often delegated to local election officials.

Below the state level, we asked the local election jurisdictions in our national mail survey if they had specific instructions on how to interpret voter intent, such as stray marks on paper ballots, dimples, or partially punched chads on punch card ballots. Our mail survey results indicate about 30 percent of local jurisdictions nationwide had written state instructions, about 15 percent had instructions developed by the jurisdictions, and about 23 percent had both. Optical scan jurisdictions were the most likely to have any one of the three types of instructions and DRE jurisdictions the least likely. Overall, we estimate that about 32 percent of jurisdictions nationwide had no written instructions, and about 92 percent of DRE jurisdictions had no written instructions. In addition, during our visits to 27 election jurisdictions, we asked election officials if they had a definition of what constitutes a vote. We also asked the officials if they had written instructions on how to handle those ballots that could not be machine counted, such as those with hanging chads. Instructions, when they existed, were often detailed and specific to a location. The most notable differences were in the punch card jurisdictions.

With regard to punch card ballots, jurisdictions we visited reported various ways to handle problem ballots. For example, in one medium-sized jurisdiction, election officials told us that if a punch card ballot contained a dimple with a pinhole, employees were instructed to put the original ballot over a pink (or duplicate) ballot, hold it up to the light, and punch where they saw light. The employee also turned over the ballot and looked for bumps, which indicated that the voter had inserted the ballot backwards. If a ballot contained bumps on the backside, the ballot could be duplicated properly by election officials so that it could be read by the vote counting equipment. In another medium-sized jurisdiction, a vote on a punch card was defined as any removed chad plus any chad that freely swung by one side. The person scanning the ballot was to inspect it for improperly punched chads by running the ballot through his or her fingers. In one very large jurisdiction, the ballot inspection teams were given a pair of tweezers and told to remove any chads remaining on the punch card. In another very large jurisdiction, election workers were to remove a chad if it was broken on three sides and connected to the punch card by no more than two sides.

One medium-sized jurisdiction used persons called “scanners” to go over the ballots before they were counted. Each ballot was inspected for improperly punched chads by running the ballot cards between the scanner's fingers. Very loose chads would be removed through this process. If a chad did not come off but freely swung by one side, it could be removed. Problem ballots, such as those that were unreadable because of incompletely removed punches or incorrect punches, which can alter the counting results or create problems with the computer processing, were given to “make-over scanners.” Ballots that needed to be reviewed and possibly remade by the make-over scanners were placed in the ballot transfer case, either on top of the rest of the materials or sideways in the stack of ballots, so that they were easily recognizable. For example, ballots with improper punches, such as those made with a pen or pencil, were sent to the make-over scanners to be remade.

In one medium-sized jurisdiction, all ballot cards were inspected, marked with a precinct designation, and had loose chads removed, regardless of whether the ballot was regular or irregular. Careful attention was directed to finding a loose “chad” (partially punched) and bent or torn cards. If a “chad” was loose (attached by two corners or less), it was considered an attempt to vote for that choice, and the “chad” was completely removed to enable the ballot tabulator to properly count that vote. Ballot cards were inspected for bends or tears that would prevent the ballot tabulator from counting the votes. Those that were imperfect were placed with irregular ballots. Each ballot card was also checked for punch positions that were circled or crossed out, which would have indicated that the voter had changed his or her vote on the ballot card. Any ballot card with pen or pencil marks, tape, glue, or grease was placed with the irregular ballot cards.
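Because the punch card provisions described above reduce to rules over a few observable features of a chad, they can be compared compactly in code. The sketch below is a simplified, hypothetical rendering of the Texas manual-count provision and the Ohio and Nevada one-corner rules quoted earlier; in particular, generalizing Ohio's rule to count a chad with two or more detached corners is our assumption, not statutory text.

```python
def texas_manual_count(corners_detached: int, light_visible: bool,
                       intent_indentation: bool) -> bool:
    """Simplified Texas manual-count test: count the vote if at least two
    corners are detached, light is visible through the hole, or an
    indentation shows a clearly ascertainable intent to vote."""
    return corners_detached >= 2 or light_visible or intent_indentation

def ohio_rule(corners_detached: int) -> bool:
    """Ohio (effective August 2001): one detached corner is not a vote.
    Treating two or more detached corners as a vote is our assumption."""
    return corners_detached >= 2

def nevada_rule(corners_detached: int) -> bool:
    """Nevada (effective October 2001): one detached corner is a vote."""
    return corners_detached >= 1

# The same physical chad, hanging by three corners with one corner detached:
print(texas_manual_count(1, light_visible=False, intent_indentation=False))  # False
print(ohio_rule(1))   # False
print(nevada_rule(1)) # True
```

An identical ballot thus counts as a vote in Nevada but not in Ohio or under the Texas manual-count test, which is precisely the kind of state-to-state variation that drew attention during the Florida recount.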
Although DRE equipment is designed to minimize voter error, problems can occur with this voting method as well. However, the problems generally do not involve the interpretation of improperly marked ballots, but rather voter error in using the DRE equipment. As with the other voting methods, the jurisdictions may deal with the problems raised in different ways. For example, many DREs require the voter to push a cast-vote button before leaving the booth or the vote is not recorded. However, some voters forget to push this button and leave the polling place. One medium-sized jurisdiction required that an election official reach under the voting booth curtain and push the cast-vote button, without looking at the ballot, to cast the vote. However, a large jurisdiction required that the election official invalidate such ballots and reset the machine for a new voter. After pressing the final cast-vote button on DRE equipment, voters cannot alter their votes. Election officials told us of small children, held by voting parents, who kicked the final vote button, located at the lower right of the machine, before the parent had completed the ballot. In such cases, the voter might not be permitted to complete the ballot using some alternative method.

When the Results Are to Be Certified and by Whom Varied Among the States

Rather Than a Single Event, the Certification Process Can Occur in Steps

The media may report election results on election night and declare winners, but those returns are not official. In most states, the election returns posted on election night are unofficial results. The results of an election are not final until the results have been certified. Different states have different methods of certifying the final results. In an Election Administration Survey performed by the National Association of State Election Directors in December 2000, respondents from different states replied that different individuals or boards are to certify the election returns.
The responses on who is to certify the vote included, depending on the state, the Secretary of State, the Director of Elections, the Governor, the State Board of Canvassers, the State Board of Elections, or the State Board of Certifiers. The response from Pennsylvania cited the Secretary of the Commonwealth as the person who is to certify the election returns. In Tennessee, the response was that the Secretary of State, the Governor, and the Attorney General all are to certify the election returns.

When the election must be certified also varied among the states, with some states having no state deadline for vote certification. Some respondents replied that the time that the state has to certify the returns was expressed as a number of days after the election. For example, Texas and Washington have 30 days to certify; Iowa has 27; New Mexico has 21; Hawaii, Michigan, and Illinois have 20; North Dakota has 17; Alabama and Idaho have 15; and Colorado has 14. Some states have extensions and caveats. For example, Louisiana requires certification in 12 days unless the last day falls on a holiday or weekend. Other respondents replied that the time to certify was expressed as a time period, including the third Monday following the election for Arizona, the first day of the next month for Kansas, the fourth Monday after the election for Nebraska, 5 p.m. on the Friday following the election for Oklahoma, the fourth Monday in November for Utah, no later than December 1 for Wisconsin, and the second Wednesday following the election for Wyoming. The response from Alaska was that there was no actual statutory deadline to certify the election results. Maryland also reported having no specific time in which to certify the election returns, but the statewide canvassers convene within 35 days after the election. Rhode Island reported that the only requirement on the time to certify the election results was that there be sufficient time for the candidates to be sworn in.

During our site visits, we also found differences in how local election jurisdictions certified their results. Rather than a single event, the certification process can occur in steps, as shown in the following examples. At one very large jurisdiction, the Board of Elections completed the certification process. After all the votes had been counted and recorded, the Board of Elections held a public hearing during which the votes for each office were announced. A five-day appeal period followed. The Board of Elections signed the official count of the votes, certified the results, and sent the results to the state election director. According to local election officials, state law required the certification to occur within 20 days of the date of the election. The officials said that it is difficult to meet that deadline, given all the hand counting and recounting required.

In one large jurisdiction we visited, each of 10 counting centers had a modem to electronically transmit the voting results to Election Headquarters in the Department of Elections building. Optical scan equipment counted the absentee ballots at the Central Counting Board in a convention center. The Central Counting Board transmitted the absentee voting results to elections headquarters using a dedicated phone line. The Board of Canvass certified the final count and submitted it to the county, which in turn submitted it to the Board of State Canvassers, which had 20 days to certify the results.
In another large jurisdiction, the County Election Board met on election night to certify the election to the state for state and federal candidates. One person was assigned to read the memory packs from the optical scan equipment for each precinct into the equipment as they were received. When all memory packs had been read into the equipment, a precinct report was printed. The report was proofread against the total printout tapes from every precinct. When this task was completed, the certification report was printed and proofread. Two copies of the certification report were printed and signed by the County Election Board secretary and members, and the Election Board seal was affixed. The county kept one copy, and the other was mailed to the Secretary of State on the day after the election. The Secretary of State certified the results after 5 p.m. on the Friday after the election.

In one small jurisdiction, the County Board of Elections prepared a county-wide tally sheet for the results from all nine precincts. The county-wide tally sheet numbers were transcribed to a state form, which was secured using tabs and taken by courier to the State Board of Elections in the state capital. The county-wide tally sheets were provided to the Chairmen of the Republican and Democratic Parties and to the General Registrar, and a copy was provided for the Minute Book and the County Office. The sheets were certified by the local county Board of Elections, whose members signed the county-wide tally sheet.

Forty-seven States and the District of Columbia Have Provisions for a Recount

Election Officials from 42 of the 513 Responding Jurisdictions in Our Mail Survey Said That They Had One or More Recounts for Federal or Statewide Office Between 1996 and 2000

According to Officials in the 42 Jurisdictions, None of the Recounts Changed the Original Outcome of the Election

When the margin of victory is close, within a certain percentage or number of votes, issues may arise about the accuracy of the vote count, and recounts may be required and/or requested. When this occurs, each jurisdiction must recount the votes for the office or issue in question. Each jurisdiction must adhere to different guidelines to ensure an accurate and timely recount of election results. Depending on state law and the type of voting method in each jurisdiction, the recount process differs. Forty-seven states and the District of Columbia have provisions for a recount. The exceptions are Hawaii, Mississippi, and Tennessee. Illinois only allows a discovery recount that does not change the election results. Seventeen states have provisions that call for a mandatory recount, often when there is a tie vote or when the margin between the candidates is less than a certain percentage or number of votes. For example, the criterion for a mandatory recount in South Dakota and Alaska is a tie vote. The margin for a mandatory recount in Arizona is one-tenth of 1 percent, or 200 votes. In Michigan, the margin is 2,000 or fewer votes. The recount may be conducted before or after the certification, and it may be an administrative process, a judicial process, or both. The Secretary of State, a state election board, local election officials, or court-appointed counters may conduct the recount, also depending on the state.
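Mandatory recount triggers like those above are threshold tests that are straightforward to express in code. The following Python sketch is hypothetical and simplified; in particular, reading Arizona's "one-tenth of 1 percent, or 200 votes" as either condition being sufficient is our assumption about how the two thresholds combine.

```python
def margin(leader_votes: int, runner_up_votes: int, total_votes: int) -> tuple[int, float]:
    """Return the absolute vote margin and the margin as a fraction of total votes."""
    diff = leader_votes - runner_up_votes
    return diff, (diff / total_votes if total_votes else 0.0)

def tie_trigger(diff: int) -> bool:
    return diff == 0                      # e.g., South Dakota, Alaska

def arizona_trigger(diff: int, frac: float) -> bool:
    return diff <= 200 or frac <= 0.001   # one-tenth of 1 percent, or 200 votes

def michigan_trigger(diff: int) -> bool:
    return diff <= 2000                   # 2,000 or fewer votes

# A hypothetical statewide race: a 300-vote margin out of about 2 million cast.
diff, frac = margin(1_000_300, 1_000_000, 2_000_300)
print(diff, f"{frac:.4%}")                # 300 0.0150%
print(tie_trigger(diff), arizona_trigger(diff, frac), michigan_trigger(diff))
# -> False True True
```

The same 300-vote margin would thus force a recount under both the Arizona and Michigan thresholds, although, as the survey responses below indicate, none of the recounts reported to us changed the original outcome of an election.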
To determine the recount provisions in each state, we analyzed state statutes and surveyed state election directors and the election director for the District of Columbia. Table 24 in appendix VII provides the conditions for a mandatory recount, whether requested recounts are permitted, and who is responsible for conducting the recount in each of the 50 states and the District of Columbia.

When the margin of victory is very close, recounts can occur, and flaws in the vote counting system may become apparent. In the November 2000 presidential election, the winner's margin was less than one-half of 1 percent in four states—Florida, New Mexico, Wisconsin, and Iowa. From 1948 through 2000, the winning margin in 31 presidential elections in 22 states was less than 1 percent.

In response to a question in our mail survey, election officials from 42 of the 513 responding jurisdictions said that they had a recount for federal or state office between 1996 and 2000. The recounts occurred in 16 states. Because some of the recounts were for the same office and some jurisdictions had more than one recount, the 42 jurisdictions reported recounts for 55 offices. For example, one county in Florida conducted a recount both for a state office in 1998 and for President in 2000. Additional details on these jurisdictions are provided in appendix VII in table 25. In addition to the presidential election in Florida in November 2000, jurisdictions reported that they had recounts for U.S. Senate contests, governor, state representatives, judges, state board of education, superintendent of schools, register of deeds, state controller, state secretary or commissioner of labor, and state secretary or commissioner of agriculture.

Election officials most often identified a requirement in state law as the reason that a recount occurred, such as the margin between the candidates being within a given percentage or number of votes. Other reasons noted were candidate request, secretary of state order, and court order. Officials in a few jurisdictions could not recall why they performed the recount. Figure 51 shows the reasons for which officials in these 42 jurisdictions said the recounts were conducted. The officials who reportedly authorized the recounts are shown in figure 52, and the board or official who actually conducted the recount is shown in figure 53.

The jurisdictions were split in their responses as to whether the recount occurred before or after certification. Of the jurisdictions, 26 responded that the recount occurred before certification, and 19 responded that the recount occurred after certification. Eight jurisdictions did not know whether they recounted the votes before or after certification, and three did not respond. All but one recount involved recounting all precincts; the exception involved a recount of just absentee ballots in one jurisdiction. However, absentee ballots were included in all of the recounts. According to election officials, 27 of the reported recounts involved optical scan ballots that were recounted using vote-counting equipment. Hand recounts were done in 8 cases, some of which involved paper ballots or optical scan ballots. Paper tapes were reconciled to totals from direct recording equipment in 11 cases. Punch cards were recounted by machine in 6 cases. One recount involved a lever machine.
However, in the end, it did not matter who requested or ordered the recount, the office that was at stake, who conducted the recount, the method used for the recount, or whether it occurred before or after certification. According to officials in the 42 jurisdictions, none of the recounts changed the original outcome of the election. Additional details on some of these recounts are provided in appendix VII.

Contested Elections Can Occur When a Party Alleges Misconduct or Fraud on the Part of the Candidate, the Election Officials, or the Voters

CRS Identified Five House of Representatives Elections That Were Contested in the Period 1996 to 2000, and None Changed the Original Outcome of the Election

Two Jurisdictions From Our Sample of 513 Election Jurisdictions Identified Two Contested Elections for National or Statewide Office Between 1996 and 2000, and Neither Contested Election Changed the Original Outcome of the Election

Although recounts are to be conducted when the margin of victory is close and the accuracy of the vote count is questioned, they can also occur as a result of an election that is contested. Contested elections can occur when a party alleges misconduct or fraud on the part of the candidate, the election officials, or the voters. The Constitution provides that “[e]ach House shall be the Judge of the Elections, Returns, and Qualifications of its own Members…” (Art. I, sec. 5). Within this constitutional framework, the Federal Contested Elections Act of 1969 governs contests for seats in the House of Representatives. By contrast, the Senate does not have codified provisions for its contested election procedures. The act essentially sets forth the procedures by which a defeated candidate may contest a seat in the House of Representatives. The contest is first heard by the Committee on House Administration, which can conduct its own investigation of the contested election and report the results. Then the whole House, after discussion and debate, can dispose of the case by privileged resolution by a simple majority vote. Based on House precedent, certification of the election results is important, since the official returns are evidence of the regularity and correctness of the state election returns. The certification process places the burden of coming forward with evidence to challenge such presumptions on the contestants. The contestant has the burden of proving significant irregularity that would entitle him or her to a seat in the House. Fraud is never presumed but must be proven by the contestant.

The Congressional Research Service (CRS) identified 102 contested elections for the House of Representatives from 1933 to 2000. According to CRS, the vast majority of these cases were resolved in favor of the candidate who was originally declared the victor. Since the Federal Contested Elections Act of 1969 was enacted, most cases have been dismissed because the contestant failed to sustain the burden of proof necessary to overcome a motion to dismiss. CRS identified five House of Representatives elections that were contested in the period 1996 to 2000. In three cases, the House Committee did not find for the contestant and adopted resolutions dismissing the election contests, which were passed by House vote; in the other two, the contestants withdrew their challenges. In one case, Anderson v. Rose, H.Rep.
104-852 (1996) in the 7th District of North Carolina, the contestant presented credible allegations that spotlighted serious and potentially criminal violations of election laws. However, the House Committee found that, even if proven true, the allegations were not sufficient to change the outcome of the election. In another case, Haas v. Bass, H.Rep. 104-853 (1996) in the 2nd District of New Hampshire, the contestant claimed that the other candidate failed to file an affidavit attesting to the fact that he was not a subversive person as defined by New Hampshire law. However, the House Committee found that the law the contestant relied upon had been declared unconstitutional by the U.S. Supreme Court and repealed by the New Hampshire legislature prior to the election. In the third case, Dornan v. Sanchez, H.Rep. 105-416 (1998) in the 46th District of California, the contestant alleged noncitizen voting and voting irregularities, such as improper delivery of absentee ballots, double voting, and phantom voting. The Task Force on Elections found clear and convincing evidence that 748 invalid votes were cast in the election, but that number was less than the 979-vote margin of victory. In the two withdrawn cases, the contestants' claims differed. In Munster v. Gejdenson, 104th Congress (no report filed) in the 2nd District of Connecticut, the contestant claimed that vote counters made errors of judgment. In Brooks v. Harman, 104th Congress (no report filed) in the 36th District of California, the contestant claimed that the 812-vote margin of victory was based on illegal ballots, including votes from nonresidents, minors, and voters illegally registered at abandoned buildings and commercial addresses. In our survey of 513 jurisdictions, we asked whether they had had a contested election for federal or statewide office during the period 1996 to 2000. Two jurisdictions reported contested elections for a federal office, and neither contest changed the outcome of the election. None of the jurisdictions reported a contested election for statewide office during that time period. The first contested election was the 1996 U.S. Senate contest in Louisiana, Landrieu v. Jenkins. The jurisdiction reported that candidate Jenkins contested the election, raising questions of voter integrity. Allegations included people voting twice, people voting using the names of the deceased, people voting using the identity of others, vote buying, political machine influences, election official conspiracy, and machine tampering and malfunctions. According to the jurisdiction, the contest went first to the Louisiana state legislature and then to the U.S. Congress, which investigated the issue. Retired FBI agents investigated the allegations by interviewing election officials and testing voting machines. The investigation was completed within 6 months. The contest did not change the outcome of the election. The second contested election was the Florida presidential contest in November 2000, Bush v. Gore. The jurisdiction reported that the narrow margin in the contest triggered a recount and that voter integrity was also questioned. Both the Republican and Democratic parties and candidates contested the election. Allegations included voters who cast duplicate ballots, voters who were ineligible to vote because of felonies, voters who were not U.S. citizens, people who voted in the name of voters deceased before the election, people who voted using the identity of others, and people who voted but were not registered to vote.
There were also allegations that the polls closed too early and that law enforcement officers detained voters on their way to the polls. The contested presidential election in Florida was ultimately resolved by the United States Supreme Court in Bush v. Gore, 531 U.S. 98 (2000). The Court, in determining whether manual recount procedures adopted by the Florida Supreme Court were consistent with the obligation to avoid arbitrary and disparate treatment of the electorate, found a violation of the Equal Protection Clause of the Fourteenth Amendment. Most jurisdictions did not report any problems in counting the vote, but when they did, the problem usually involved either technical or human error that affected the voting equipment. The challenge for voting officials is developing an awareness of such errors and planning to address them. Having multiple checks on the people involved and the processes followed can help prevent human errors. Although technical errors cannot always be anticipated, an awareness of the types of errors that have occurred in other jurisdictions and contingency planning for them can help when they do occur. A challenge for many jurisdictions is determining voter intent for improperly marked optical scan, paper, and punch card ballots that counting equipment could not read and count or that those who hand counted the paper ballots could not clearly interpret. An issue in the recount of presidential votes in Florida in 2000 was the variation in the interpretation of improperly marked ballots across jurisdictions. Our data suggest that similar issues could arise in other states. The process for initiating and conducting recounts and contested elections varied by jurisdiction. Regardless of the processes used, the challenge is the same—to complete the recount or determine the contested election in a fair, accurate, and timely manner. Voting methods can be thought of as tools for accommodating the millions of voters in our nation's more than 10,000 local election jurisdictions. These tools are as simple as a pencil, paper, and a box, or as sophisticated as computer-based touchscreens. However, to be fully understood, all these methods need to be examined in relation to the people who participate in elections (both voters and election workers) and the processes that govern their interaction with each other and with the voting method. This chapter focuses on the technology variable in the people, process, and technology equation. It describes the various voting methods used in the November 2000 election in terms of their accuracy, ease of use, efficiency, security, testing, maintenance, and cost; provides cost estimates for purchasing new voting equipment for local election jurisdictions; and describes new voting equipment and methods that are currently available or under development. Each of the five voting methods was used extensively in the United States in the November 2000 election. Punch card and optical scan equipment were the most widely used, together accounting for about 60 to 70 percent of the total. Figure 54 shows the distribution of voting methods in the United States by counties, precincts, and registered voters. As figure 54 shows, the results vary according to whether they were reported by county, precinct, or registered voter, but no matter how the data were reported, optical scan and punch card equipment were the most common voting methods used.
Figures 55 to 59 show the distribution of various voting methods by counties, and figures 60 to 64 show the distribution of the various voting methods by minor civil divisions (MCDs), such as cities, towns, and townships. These breakouts also show that the two most used methods were optical scan and punch cards.

People and Process Affect Equipment Accuracy

Ease of Use Depends on Friendliness of Voting Equipment

Voting Equipment's Efficiency Is Not Consistently Measured

Security of Voting Equipment Is Generally an Area of Mixed …

State and Local Jurisdictions Generally Tested Voting Equipment

Type and Frequency of Equipment Maintenance Performed Varied

Equipment Costs Vary by Unit Cost, Jurisdictions' Size, and …

Voting equipment can be examined according to a range of characteristics, including accuracy, ease of use, efficiency, security, testing, maintenance, and cost. Because all these characteristics affect election administration, all should be considered in any assessment of voting equipment. Further, all these characteristics depend on the integration of three variables: (1) the equipment itself, (2) the people who use and operate the voting equipment, and (3) the processes and procedures that govern people's use of the equipment. Accuracy, ease of use, and efficiency can all be considered performance characteristics, and measuring these performance characteristics can help determine whether voting equipment is operating as intended or whether corrective action is needed. Accuracy refers to how frequently the equipment completely and correctly records and counts votes; ease of use refers to how understandable and accessible the equipment is to a diverse group of voters and election workers; and efficiency refers to how quickly a given vote can be cast and counted. By measuring and evaluating how accurate, easy to use, and efficient voting equipment is, local election jurisdictions can position themselves to better ensure that elections are conducted effectively and efficiently. However, jurisdictions cannot consider voting equipment's performance in isolation. To protect the election and retain public confidence in its integrity, other characteristics should also be considered. Ensuring the security of elections is essential to public confidence, and properly testing and maintaining voting equipment is required if its optimum performance is to be achieved. Finally, the overriding practical consideration of the equipment's lifecycle cost versus benefits, which affects and is affected by all the characteristics, must be considered. Generally, our survey of vendors showed little difference among the basic performance characteristics of DRE, optical scan, and punch card equipment. However, when local election jurisdictions' experiences with the equipment are considered, performance differences among voting equipment become more evident. These differences arise because a real-world setting—such as an election in which equipment is operated by actual voters, poll workers, and technicians—tends to result in performance that differs from that in a controlled setting (such as in the manufacturer's laboratory). This difference demonstrates the importance of the effect of people and process on equipment performance.
On the basis of the results of our mail survey and visits to 27 local election jurisdictions, we found that while most jurisdictions did not collect actual performance data for the voting equipment that they used in the November 2000 election, jurisdiction election officials were nevertheless able to provide their perceptions about how the equipment performed. For example, our mail survey results indicate that 96 percent of jurisdictions nationwide were satisfied with the performance of their voting equipment during the November 2000 election. Table 2 shows the percentage of jurisdictions satisfied with equipment performance during the November 2000 election, by type of voting equipment. Figure 65 shows a relative comparison of certain characteristics—accuracy, ease of use, efficiency, and security—of the various types of voting equipment used in the November 2000 election. The comparison reflects the results of our survey of voting system vendors and of 513 local election jurisdictions. In our survey of jurisdictions, we grouped those that used punch card, lever, and hand-counted paper ballots and placed them in an "other" category. In our vendor survey, we excluded lever equipment, which is no longer manufactured, and hand-counted paper ballots, for which no equipment is needed. Confidence intervals were calculated at the 95 percent confidence level. Unless otherwise noted, all estimates from our mail survey have a confidence interval of plus or minus 4 percentage points or less. Overall, from both the vendor and jurisdiction perspectives, DREs are generally easier to use and more efficient than the other types of equipment. In the area of security, DRE and optical scan equipment are relatively equal, and in the area of accuracy, all equipment is relatively the same. The differences among voting equipment reported by local election jurisdictions can be attributed, in part, to differences in the equipment itself. However, they can also be attributed to the people who use the equipment and the rules or processes that govern its use. For example, how voters interact with DREs differs from how they interact with optical scan, punch card, or lever machines. In each case, different opportunities exist for voter misunderstanding, confusion, and error, which in turn can affect the equipment's performance in terms of accuracy, ease of use, and efficiency. Further, all voting equipment is influenced by security, testing, maintenance, and cost issues, each of which also involves people and processes. Thus, it is extremely important to define, measure, evaluate, and make decisions about equipment choices within the context of the total voting system—people, processes, and technology. We estimate that 96 percent of jurisdictions nationwide were satisfied with the performance of their voting equipment during the November 2000 election. We estimate that only about 48 percent of jurisdictions nationwide collected data on the accuracy of their voting equipment for the election. Ensuring that votes are accurately recorded and tallied is an essential attribute of any voting equipment. Without such assurance, both voter confidence in the election and the integrity and legitimacy of the outcome of the election are at risk.
Our vendor survey showed virtually no differences in the expected accuracy of DRE, optical scan, and punch card voting equipment, measured in terms of how accurately the equipment counted recorded votes (as opposed to how accurately the equipment captured the intent of the voter). Vendors of all three types of voting equipment reported accuracy rates of between 99 and 100 percent, with vendors of DREs reporting 100-percent accuracy. In contrast to vendors, local election jurisdictions generally did not collect data on the accuracy of their voting equipment, measured in terms of how accurately the equipment captures the intent of the voter. Overall, our mail survey results revealed that about 48 percent of jurisdictions nationwide collected such data for the November 2000 election. Table 3 shows the percentage of jurisdictions that collected data on accuracy by type of voting equipment. Further, it is unclear whether those jurisdictions that reported collecting accuracy data actually have meaningful performance data. Of those local election jurisdictions that we visited that stated that their voting equipment was 100-percent accurate, none was able to provide actual data to substantiate these statements. Similarly, the results of our mail survey indicate that only about 51 percent of jurisdictions nationwide collected data on undervotes, and about 47 percent of jurisdictions nationwide collected data on overvotes for the November 2000 election. Table 4 shows the percentage of jurisdictions that collected data on undervotes and overvotes by type of equipment. Likewise, fewer than half of the 27 jurisdictions that we visited indicated that they collected data on undervotes, overvotes, or both. For those that did, the percentage of undervotes was slightly higher for punch cards than for DRE and optical scan. For overvotes, the percentages for both optical scan and punch cards were relatively similar, generally less than 0.5 percent. However, election officials in one jurisdiction that used optical scan equipment reported an overvote rate of 4.9 percent, and officials in one jurisdiction that used punch card equipment reported an overvote rate of 2.7 percent. Although voting equipment may be designed to count votes as recorded with 100-percent accuracy, how frequently the equipment counts votes as intended by voters is a function not only of equipment design, but also of the interaction of people and processes. These people and process factors include whether, for example,

technicians have followed proper procedures in testing and maintaining the equipment,

voters have followed proper procedures when using the equipment,

election officials have provided voters with understandable procedures, and

poll workers have properly instructed and guided voters.

To illustrate this point, officials from a very large jurisdiction stated that 1,500 voters had inserted their punch cards in the recording device upside down, thus causing the votes to be inaccurately recorded. Fortunately, officials stated that they detected the error and remade and counted the ballots. Election officials further stated that they remake, on average, about 1,100 ballots for every election because voters improperly insert their ballots into the recording device. Similarly, at a small jurisdiction that we visited where optical scan equipment was used, officials reported that some voters incorrectly marked the ovals or used a nonreadable pen to mark the ballot, resulting in partially read ballots.
In another medium-sized jurisdiction that we visited, the ballot section permitting write-in votes confused voters. Voters selected a candidate on the ballot and then wrote the candidate's name in the write-in section of the ballot, thus overvoting and spoiling the ballot. The election officials stated that they believed that this misunderstanding contributed to the jurisdiction's almost 5 percent overvote rate. In each of these cases, the way that the voter completed the ballot caused the vote to be recorded inaccurately, even though the voting equipment correctly counted the votes as recorded. In addition, the accuracy of voting equipment can be affected by the procedures that govern how voters interact with the technologies. Differences in these procedures can have noticeable effects on the prevalence of undervotes and overvotes, for example. In particular, we found that some precinct-count optical scan voting equipment can be programmed to return a voter's ballot if the ballot is overvoted or undervoted. Such programming allows the voter to make any changes necessary to ensure that the vote is recorded correctly. However, not all states allow this. For example, election officials in one Virginia jurisdiction stated that Virginia jurisdictions must accept ballots as cast. The extent to which voters can easily use voting equipment largely depends on how voters interact, physically and intellectually, with the equipment. This interaction, commonly referred to as the human/machine interface (or in the case of voting technology, the voter/machine interface), is a function both of the equipment design and of the processes established for its use. For example, how well jurisdictions design ballots and educate voters on the use of voting equipment can affect how easy voters find the equipment to use. Ease of use (i.e., the equipment's user friendliness) is important not only because it influences the accessibility of the equipment to voters but also because it affects the other two performance measures discussed here—accuracy (i.e., whether the voter's intent is captured) and the efficiency of the voting process. Our vendor survey showed that, in general, most voting equipment is limited in its ability to accommodate persons with special physical needs or disabilities. Most vendors, for example, reported that their equipment accommodates voters in wheelchairs; however, vendors of DRE equipment reported providing accommodations for more types of disability than other vendors. For instance, many of the DREs offer accommodations for voters who are blind, such as Braille keyboards or an audio interface. In addition, at least one vendor reported that its DRE accommodates voters with neurological disabilities by offering head movement switches and "sip and puff" plug-ins. Table 5 summarizes vendor-reported accessibility options by voting equipment type and device. Our work on the accessibility of voting equipment to persons with disabilities during the November 2000 election found that most voting equipment presents some challenges to voters with disabilities. For example, persons in wheelchairs may have difficulty reaching and manipulating the handles on lever machines or reaching and pressing the buttons/screens on DREs. In addition, persons with dexterity impairments may find it difficult to hold the pencil or pen for optical scan, apply the right amount of pressure to punch holes in punch cards, press the buttons/screens on DREs, or manipulate the levers on lever machines.
Similarly, for all the voting methods, voters with visual impairments may have difficulty reading the text. Consistent with our vendor survey, however, election officials and representatives of disability organizations told us that DREs can be most easily adapted (with audio and other aids) to accommodate the widest range of disabilities. We estimate that jurisdictions nationwide that used DREs were generally more satisfied than those that used optical scan or punch cards with how easy their voting equipment was for voters and election workers to use. Differences are apparent in local election jurisdictions’ perceptions of how easy their voting equipment was for the voters to use, with jurisdictions using DREs being generally more satisfied with how easy their equipment was for voters to use and to correct mistakes (see table 6). Likewise, the results of our mail survey reveal that 83 percent of jurisdictions nationwide were satisfied with how easy it was for election workers to operate and set up the voting equipment on election day. Again, jurisdictions that used DREs expressed a higher rate of satisfaction (see table 7). Figure 66 summarizes jurisdictions’ satisfaction with the various types of voting equipment on ease of use by voters, ability to correct mistakes, and ease of operation and setup for election workers. Another key component of the voter/machine interface for voting equipment is the design of the ballot, which is generally a state and/or jurisdictional decision for each election. For example, in a medium-sized jurisdiction that used lever machines, the list of names for president was so long that it extended into a second column. According to jurisdiction officials, this layout confused voters because they were not used to seeing the ballot this way. Similarly, at a small jurisdiction that used optical scan equipment, officials stated that they had to use both sides of the ballot, which was confusing to voters who did not think to turn over the ballot and vote both sides. In addition, the well-known Florida “butterfly” ballot was confusing to many voters, because candidates’ names were printed on each side of the hole punches, with arrows pointing to alternating candidates. For example, the first candidate in the left column was paired with the first hole; the first candidate in the right column with the second hole; the second candidate in the left column with the third hole; and so on. Voters found the arrows confusing and hard to follow. Such situations illustrate the importance of ensuring a friendly voter/machine interface. Efficiency is important because the speed of casting and tallying votes influences voter waiting time, and thus potentially voter turnout. Efficiency can also influence the number of voting machines that a jurisdiction needs to acquire and maintain, and thus the cost. Efficiency can be measured in terms of how quickly the equipment can count votes, the number of people that the equipment can accommodate within a given time, and the length of time that voters need to wait. Like the other characteristics discussed so far, the efficiency of voting equipment (i.e., how many ballots can be cast in a given period of time) is a function of the interaction of people, processes, and technology. As our vendor survey showed, efficiency metrics vary for the DRE, optical scan, and punch card equipment because of the equipment itself. 
With DREs, the vote casting and counting functions are virtually inseparable, because the ballot is embedded in the voting equipment. In contrast, with optical scan and punch cards, the ballot is a distinctly separate medium (i.e., a sheet of paper or a computer card), which once completed is put into the vote counting machine. As a result, vendors reported that the efficiency of optical scan and punch cards is generally measured in terms of the speed of count (i.e., how quickly the equipment counts the votes on completed ballots). In contrast, DRE vendors reported that because DREs count the votes as soon as the voter pushes the button to cast the vote (i.e., instantaneously), efficiency is measured in terms of the number of voters that each machine accommodates on election day. Complicating any measurements of efficiency is the fact that optical scan and punch card equipment's efficiency differs depending on whether central-count or precinct-based equipment is used. Central-count equipment generally counts more ballots per hour because it is used to count the ballots for an entire jurisdiction, rather than an individual polling site. For central-count optical scan equipment, vendors reported speed of count ranges from 9,000 to 24,000 ballots per hour. For precinct-count optical scan and punch card equipment, vendors generally did not provide specific speed of count data, but they stated that one machine is generally used per polling site. For DREs, vendors reported that the number of voters accommodated per machine ranges from 200 to 1,000 voters per machine per election day. We estimate that during the November 2000 election, only 26 percent (±5 percentage points) of jurisdictions nationwide collected actual performance data on counting speed, and 10 percent collected data on voter wait time. We estimate that more than 80 percent were satisfied with count speed and voter wait time. The results of our mail survey and visits to 27 local election jurisdictions revealed that most jurisdictions did not collect actual performance data on the efficiency of the voting equipment that they used in the November 2000 election. For example, from our mail survey, we found that only 26 percent (±5 percentage points) of local election jurisdictions nationwide collected information on the speed at which their equipment counted votes, and only 10 percent of jurisdictions nationwide collected information on the average amount of time that it took voters to vote. Despite the absence of performance data on efficiency, officials in jurisdictions that we visited reported some perceptions about how the respective voting equipment performed. Overall, our mail survey results reveal that 91 percent of jurisdictions nationwide reported that they were satisfied with the speed at which their equipment counted votes. Further, 84 percent of jurisdictions nationwide reported that they were satisfied with the amount of voter wait time at the polling place during the November 2000 election. Figure 67 summarizes jurisdictions' satisfaction with speed of count of voting equipment and voter wait time, by equipment type.
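The vendor-reported throughput figures above translate directly into equipment counts. The sketch below shows the arithmetic for a hypothetical jurisdiction expecting 66,000 ballots; the 8-hour count window and the mid-range DRE capacity of 500 voters per machine are illustrative assumptions of ours, not vendor specifications.

    import math

    def units_needed(expected_ballots: int, capacity_per_unit: int) -> int:
        # Smallest whole number of machines that covers the expected load.
        return math.ceil(expected_ballots / capacity_per_unit)

    expected_ballots = 66_000  # hypothetical jurisdiction-wide volume

    # Central count: a 9,000-ballot-per-hour scanner over an 8-hour count window.
    print(units_needed(expected_ballots, 9_000 * 8))  # -> 1 scanner

    # DREs: 500 voters per machine per election day (mid-range of 200 to 1,000).
    print(units_needed(expected_ballots, 500))        # -> 132 DRE units

The same arithmetic helps explain a point made later in this chapter: because each DRE serves only the voters who can physically use it on election day, DRE configurations require many more units than central-count configurations.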
Effectively securing voting equipment depends not only on the type of equipment but also on the procedures and practices that jurisdictions implement and the election workers who execute them. Effective security includes, at a minimum, assigning responsibility for security; assessing security risks and vulnerabilities and implementing both manual and technology-based security measures to prevent or counter these risks; and periodically reviewing the controls to ensure their appropriateness. The results of our mail survey indicate that most jurisdictions nationwide have implemented some of these important elements of security, but not all. Figure 68 summarizes jurisdictions' implementation of security controls. Assigning responsibility: Our mail survey results indicate that 89 percent of jurisdictions assigned responsibilities to one or more individuals for securing voting equipment for the November 2000 election. From our visits to 27 local election jurisdictions, we learned that the individuals assigned responsibility for securing voting equipment were generally the election administrator's staff, county warehouse staff, or county clerks before election day, and poll workers or county clerks at the polling site on election day. Assessing risks and implementing controls: Similarly, our mail survey results indicate that 87 percent of jurisdictions nationwide had implemented security controls to protect their voting equipment during the November 2000 election. However, only 60 percent of jurisdictions had ever assessed security threats and risks, such as modification or loss of electronic voting data, loss or theft of ballots, or unauthorized access to software. From our visits to 27 jurisdictions, we learned that the controls implemented generally included physical controls for securing the voting equipment and ballots. For example, officials from one large jurisdiction stated that they provided 24-hour, 7-day-per-week security for voting equipment in a controlled access facility that included a security surveillance system linked to the Sheriff's Department. In another large jurisdiction, officials reported that they stored voting equipment in a warehouse that required a four-digit passcode to enter. In contrast, however, officials from a small jurisdiction reported that they stored their lever machines at the polling places all year, with no control over how the equipment is secured. Election officials in jurisdictions we visited also reported that they have implemented access controls to limit the number of people who can operate their election management system and/or their vote tabulation equipment. For example, officials from one large and one medium-sized jurisdiction reported that they safeguarded their election management software by using a firewall and access controls. In addition, the vendors we surveyed reported that voting equipment has been developed with certain embedded security controls, although these controls vary. In general, these controls include the following:

Identification (ID) names and passwords control access to the voting equipment and software and permit access only to authorized users.

Redundant storage media provide backup storage of votes cast to facilitate recovery of voter data in the event of power or equipment failure.

Encryption technology scrambles the votes cast so that the votes are not stored in the same order in which they were cast. If vote totals are electronically transmitted, encryption technology is also used to scramble the vote count before it is transmitted over telephone wires and to unscramble it once it is received.

Audit trails provide documentary evidence to recreate election day activity, such as the number of ballots cast (by each ballot configuration/type) and candidate vote totals for each contest.

Hardware locks and seals protect against unauthorized access to the voting equipment once it has been prepared for the election (e.g., vote counter reset, equipment tested, and ballots prepared).
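The embedded controls listed above can be pictured in miniature. The following sketch is purely illustrative: it assumes nothing about any vendor's actual design, all names are ours, and it uses only Python's standard library.

    import hashlib
    import hmac
    import random

    AUDIT_LOG = []     # audit trail: reconstructable record of election day activity
    STORES = [[], []]  # redundant storage media: two independent copies of each vote

    def authorized(password: str, stored_hash: str) -> bool:
        # ID/password control: permit access only to authorized users.
        digest = hashlib.sha256(password.encode()).hexdigest()
        return hmac.compare_digest(digest, stored_hash)

    def record_vote(choice: str) -> None:
        # Write the vote to both storage media, inserting at a random position so
        # votes are not stored in the order cast (cf. the scrambling control above).
        for store in STORES:
            store.insert(random.randrange(len(store) + 1), choice)
        AUDIT_LOG.append("ballot cast")  # log the event, never the choice

In practice, such software controls complement, rather than replace, the hardware locks, seals, and procedural safeguards described above.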
Table 8 shows security controls by type of voting equipment for the systems we surveyed. Generally, DRE and optical scan equipment offer more security controls than punch cards. DRE and optical scan equipment are fairly comparable in terms of the security controls that they offer; DREs generally offer more redundant storage media, which provide backup storage of votes cast to facilitate recovery of voter data in the event of power or equipment failure. However, both optical scan and punch card equipment use a paper ballot, which could be recounted in the case of equipment failure. In addition, punch card equipment generally does not have hardware locks and seals. Reviewing controls: The results of our survey indicate that about 81 percent of jurisdictions nationwide periodically review their security measures to ensure that they are sufficient. However, most jurisdictions that we visited indicated that they did not periodically review controls. To ensure that voting equipment performs as intended on election day, it must be tested, both before it is accepted from the manufacturer and before it is used. Although effective testing does not guarantee proper performance, it can greatly reduce the chances of unexpected equipment problems and errors. Further, the people who plan and conduct the tests, as well as the processes and procedures that govern the conduct of tests, are central to effective testing. Generally, voting equipment testing can be viewed as consisting of five stages. The initial three stages—qualification, certification, and acceptance—are typically conducted before the purchase and acceptance of the voting equipment by the jurisdiction. After the voting equipment has been purchased, jurisdictions typically conduct two additional stages of testing to ensure that the voting equipment operates properly before each election—readiness and verification testing. Each of these five stages of testing includes similar steps: defining the equipment requirements to be tested, planning the tests (e.g., determining the level of testing to be performed), executing the tests, documenting the test results, and completing the tests (e.g., ensuring that the test criteria have been met). (Figure 69 provides a simplified model of the voting equipment testing process.) Qualification testing validates the compliance of the voting equipment with the requirements of the Federal Election Commission's (FEC) voting system standards (applicable to punch card, optical scan, and DRE voting equipment) and with the vendor's equipment design specifications. These tests are conducted by independent test authorities accredited by the National Association of State Election Directors (NASED). Vendors are expected to resubmit their voting equipment to the qualification test process whenever they modify the equipment. The majority of states (38) have adopted the FEC standards, which means that the majority of states require voting equipment used in their jurisdictions to be NASED qualified.
However, because the standards were not published until 1990 and the qualification testing program was not established until 1994, many jurisdictions may be using voting equipment that did not undergo qualification testing. This may be particularly true for those jurisdictions that use punch card equipment; only one punch card machine is on NASED's list of qualified voting equipment. However, in our survey of states and the voting equipment they used in the November 2000 election, we identified 19 different types of punch card equipment being used by jurisdictions. Further, the FEC standards do not address lever machines. In contrast, the results of our mail survey revealed that 49 percent (plus or minus 7 percentage points) of jurisdictions nationwide that use DREs and 46 percent (plus or minus 7 percentage points) of jurisdictions nationwide that use optical scan equipment used voting equipment that had been qualified by NASED. We estimate that 39 percent (±4.33 percentage points) of jurisdictions nationwide used voting equipment that was NASED qualified. About half of those using DRE or optical scan equipment used equipment that was NASED qualified. Also, 90 percent used equipment that had been certified by the state. Certification testing validates compliance of the voting equipment with state-specific requirements and can also be used to confirm that the presented voting equipment is the same as the equipment that passed NASED qualification testing. Certification tests are generally conducted by the states and can be used to establish a baseline for future evaluations. Although states establish certification test requirements, FEC recommends that state certification tests not duplicate NASED qualification tests and that they include sufficient functional tests and qualitative assessments to ensure that the voting equipment operates in compliance with state law. Further, FEC recommends that states recertify voting equipment that has been modified to ensure that it continues to meet state requirements. However, it is not clear that this recertification always occurs. For example, one state election director cited repeated problems with local jurisdictions and vendors modifying their voting equipment after state certification. In fact, the election director stated that in some cases, vendors modified equipment without even notifying the local jurisdiction. Forty-five states and the District of Columbia reported that they have certification programs to identify voting equipment that may be used in the state. Of these 46, 38 require certification testing. Four states—Alaska, Mississippi, North Dakota, and Utah—do not require that voting equipment used in these states be NASED qualified and do not perform certification testing of voting equipment. Our mail survey results show, however, that 90 percent of jurisdictions used state-certified voting equipment in the November 2000 election. Table 9 shows the percentage of jurisdictions that use state-certified voting equipment. Acceptance testing checks that the voting equipment, as delivered by the vendor, meets the requirements specified by the states and/or local jurisdictions. State or local jurisdictions conduct acceptance tests, which can be used to establish a baseline for future evaluations. Many of the jurisdictions that we visited had recently procured new voting equipment, and most of these jurisdictions had conducted some form of acceptance testing.
However, the processes and steps performed and the people who performed them varied by jurisdiction and by equipment type. For example, in a very large jurisdiction that had recently purchased DRE equipment, election officials stated that testing consisted of a visual inspection, power-up, opening of polls, activation and verification of ballots, and closing of polls. In contrast, officials in one large jurisdiction stated that they relied entirely on the vendor to test the equipment. In jurisdictions that used optical scan equipment, acceptance testing generally consisted of running decks of test cards. For example, officials from another large jurisdiction stated that they tested each voting machine with the assistance of the vendor using a vendor-supplied test deck. Readiness tests, often referred to as logic and accuracy tests, check that the voting equipment is properly functioning. Jurisdictions normally conduct readiness tests in the weeks leading up to election day—often while the equipment is still at the warehouse—to verify that the voting equipment has been properly prepared for the election (e.g., that ballots have been properly installed in voting devices). Our mail survey results indicate that 94 percent of jurisdictions nationwide conducted readiness (logic and accuracy) testing before the November 2000 election. Figure 70 shows the percentage of jurisdictions that conducted readiness testing by equipment type. Although most jurisdictions nationwide performed readiness testing, the actual testing activities varied by the type of equipment and by jurisdiction. For example, jurisdictions that used DREs performed readiness testing by running diagnostic tests that the equipment is designed to perform, using vote simulation cartridges, and by conducting mock elections; jurisdictions that used optical scan and punch cards generally relied on the use of test decks. In a large jurisdiction that used DREs, the election officials stated that the county’s readiness tests included checking the battery, paper tapes, machine labels, curtain rods, and the memory cartridge against the ballot and the equipment; performing voting tests, such as voting for each candidate; and testing the write-in capabilities. At the conclusion of the tests, election officials checked the counters and the memory tapes to ensure that the results matched the testers’ entries. In a very large jurisdiction that used punch cards, election officials stated that they conducted a public test on the Monday before election day with a test deck of 55 cards that included numerous configurations for valid ballots, overvoted ballots, and undervoted ballots. One of the most comprehensive tests was conducted in a very large jurisdiction. This jurisdiction tested the integration of all its voting equipment. Officials conducted a mock election that included testing the precinct-based optical scanner, the central-count optical scanner used for absentee ballots, DREs used for early voting, and the election management system. For this test, they prepared each type of equipment and had each type of equipment transmit vote totals created using test decks to the election management system to ensure that it prepared the results correctly. Effectively testing voting equipment depends not only on the voting equipment itself, but also on the procedures developed by the jurisdiction and the people that implement them. 
For example, in one large county, an election official misprogrammed software on the optical scan equipment used to tally early and absentee votes, which affected all ballots with a straight party vote in the November 2000 election. About a third, or 66,000, of the ballots cast in the county were cast early or absentee. Of these, over 20,000 voters had cast a ballot with a straight party vote. According to county officials, although the equipment detected the straight party vote, it did not properly distribute the vote to each of the party candidates. That is, if a voter checked a straight party vote for Democrat, the optical scan equipment detected the vote but did not properly add a vote for the Democratic candidates on the ballot. Although county officials agreed that this problem should have been detected during readiness testing, they stated that the confirmation of the results of the test deck had been incomplete. According to county officials, test personnel verified only that the system accurately detected the straight party vote and did not verify whether the tallies resulting from the test deck were correct. Further, the county had no written procedures to ensure that the software was properly tested. Fortunately, county officials detected the software problem during the vote tallying process. However, if the problem had gone undetected, over 20,000 properly cast votes would not have appeared in the official vote totals. We estimate that 94 percent of jurisdictions nationwide conducted readiness testing before the November 2000 election, and 95 percent of jurisdictions nationwide conducted verification testing before the election. The purpose of verification testing is to verify that the voting equipment is operating properly before the election. This testing is typically conducted by poll workers or election officials at the poll site on election day unless a central-count configuration is used. Our mail survey results show that 95 percent of jurisdictions nationwide conducted verification testing before the November 2000 election. Figure 71 shows the percentage of jurisdictions that conducted verification testing by type of voting equipment. Verification tests generally vary by type of technology. For jurisdictions that use optical scan and DREs, verification testing generally includes generating a zero tape that verifies that the equipment is ready to start processing ballots. Zero tapes typically identify the specific election, the equipment’s unit identification, the ballot’s format identification, and the contents of each active candidate register by office (showing that they contain all zeros). In addition to running the zero tapes, jurisdiction officials indicated that they also check the security seals on the machines to ensure that they have not been tampered with, compare the ballot on the machine with the sample ballot for the polling place, and check the protective counter number on the voting machine before voting begins. Figure 72 shows a zero tape. Jurisdictions that use punch cards also need to test the vote recording device. For example, in a medium-sized jurisdiction, election officials stated that before opening the polls, the poll workers inspected each ballot page in the ballot book and compared each to the specimen ballot for the precinct. Further, these officials and officials in another medium-sized jurisdiction stated that poll workers checked that the punch positions for each vote recording device worked properly. 
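Returning to the straight-party failure described above: the decisive omission was comparing the full candidate tallies produced by the test deck against the deck's expected totals, rather than merely confirming that the straight-party mark was detected. A minimal sketch of that comparison, with hypothetical party rosters and a hypothetical test deck (all names are ours):

    from collections import Counter

    # Hypothetical rosters: a straight-party mark must add one vote per party candidate.
    PARTY_CANDIDATES = {
        "DEM": ["Adams (Senate)", "Baker (House)"],
        "REP": ["Clark (Senate)", "Davis (House)"],
    }

    def tally(test_deck):
        # Count a deck of test ballots, expanding straight-party marks into
        # one vote for each of that party's candidates.
        totals = Counter()
        for ballot in test_deck:
            for mark in ballot:
                totals.update(PARTY_CANDIDATES.get(mark, [mark]))
        return totals

    deck = [["DEM"], ["REP"], ["Adams (Senate)", "Davis (House)"]]
    expected = Counter({"Adams (Senate)": 2, "Baker (House)": 1,
                        "Clark (Senate)": 1, "Davis (House)": 2})

    # The step the county's readiness test skipped: compare the full resulting
    # tallies against the test deck's expected totals, not just mark detection.
    assert tally(deck) == expected, "test deck tallies do not match expected totals"

Had the county's readiness procedures included a comparison of this kind, the misprogrammed straight-party distribution would have failed the test before election day rather than surfacing during vote tallying.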
For those jurisdictions that we visited that use lever machines, verification testing similarly includes making sure the public counters are set to zero and checking the security seals, the protective counters on the machines, the paper rolls, and the ballot labels to ensure that the names of the parties, office titles, candidate names, and ballot proposals match the sample ballot displayed at the polling place. As with security and testing, proper maintenance is important to ensure that voting equipment performs as intended and problems are prevented. According to voting equipment vendors, routine maintenance for most voting equipment generally includes inspecting the voting equipment for damage; testing and recharging batteries, if applicable; and cleaning the equipment before the election. Not effectively maintaining voting equipment could contribute to equipment failures or malfunctions, which in turn could cause voters to wait longer and could cause vote and tally errors. Our mail survey results indicate that about 80 percent of jurisdictions nationwide performed routine or manufacturer-suggested maintenance on their voting equipment before the November 2000 election. For those jurisdictions that we visited, the maintenance activities performed were generally consistent with those recommended by the vendors for their respective voting equipment, such as inspecting and cleaning the machines, testing and recharging batteries, and replacing malfunctioning parts. However, despite performing regular maintenance, jurisdiction officials stated that they had experienced equipment failures during the November 2000 election. In most cases, officials characterized these failures as not significant because they were resolved on-site through repairs or replacements. The specific maintenance procedures that jurisdictions performed varied because of differences in the physical characteristics of the equipment. Table 10 shows examples of maintenance procedures, by equipment type. Our mail survey shows that a significantly higher percentage of jurisdictions nationwide using DRE and optical scan equipment had performed maintenance than had jurisdictions using lever and punch card equipment. Figure 73 presents summary information on jurisdictions that conducted maintenance, by equipment type. Our visits to 27 local election jurisdictions also revealed variations in the frequency with which jurisdictions perform routine maintenance. For example, some jurisdictions perform maintenance before an election, while others perform maintenance regularly throughout the year. Officials in a medium-sized jurisdiction that uses DREs stated that they test the batteries monthly. Likewise, officials from a very large jurisdiction reported that the jurisdiction's warehouse staff worked year-round to repair Votomatic units and booths. Our site visits also showed that local jurisdictions have experienced few problems with equipment maintenance. Only one large jurisdiction reported that it had experienced problems with obtaining replacement parts for its optical scan equipment. The cost to acquire, operate, and maintain voting equipment over its useful life varies, not only on a unit cost basis but also on a total jurisdiction basis, depending on such decisions as whether ballots will be counted at poll sites or centrally, who will perform maintenance, and how frequently maintenance will be performed.
Our vendor survey showed that voting equipment costs vary among types of voting equipment and among different manufacturers and models of the same type of equipment. For example, DRE touchscreen unit costs ranged from $575 to $4,500. Similarly, unit costs for precinct-count optical scan equipment ranged from $4,500 to $7,500. Among other things, these differences can be attributed to differences in what is included in the unit cost as well as differences in the characteristics of the equipment. Table 11 shows equipment costs by unit, software, and peripheral components. In addition to the equipment unit cost, an additional cost for jurisdictions is the software that operates the equipment, prepares the ballots, and tallies the votes (and in some cases, prepares the election results reports). Our vendor survey showed that although some vendors include the software cost in the unit cost of the voting equipment, most price the software separately. Software costs for DRE, optical scan, and punch card equipment can run as high as $300,000 per jurisdiction. The higher costs are generally for the more sophisticated software associated with election management systems. Because the software generally supports numerous equipment units, the software unit cost varies depending on the number of units purchased or the size of the jurisdiction. Other factors affecting the acquisition cost of voting equipment are the number and types of peripherals required. In general, DREs require more peripherals than do optical scan and punch cards. For example, some DREs require smart cards, smart card readers, memory cartridges and cartridge readers, administrative workstations, and plug-in devices (for increasing accessibility for voters with disabilities). Touchscreen DREs may also offer options that affect the cost of the equipment, such as color versus black and white screens. In addition, most DREs and all optical scan and punch cards require voting booths, and most DREs and some precinct-based optical scan and punch card tabulators offer options for modems. Precinct-based optical scan and punch card tabulators also require ballot boxes to capture the ballots after they are scanned. Once jurisdictions acquire the voting equipment, they must also incur the cost to operate and maintain it. Our visits to 27 local election jurisdictions indicated that annual operation and maintenance costs, like acquisition costs, vary by the type and configuration of the voting equipment and by the size of the jurisdiction. For example, jurisdictions that used DREs reported a range of costs from about $2,000 to $27,000. Similarly, most jurisdictions that used optical scan equipment reported that operations and maintenance costs ranged from about $1,300 to $90,000. Most punch card jurisdictions reported that operations and maintenance costs ranged from $10,000 to over $138,000. The higher ends of these cost ranges generally related to the larger jurisdictions. In fact, one large jurisdiction that used optical scan equipment reported that its operating costs were $545,000, and one very large jurisdiction that used punch cards reported operations and maintenance costs of over $600,000. In addition, the jurisdictions reported that these costs generally included software licensing and upgrades, maintenance contracts with vendors, equipment replacement parts, and supply costs. Figure 74 shows the ranges of operations and maintenance costs, by type of voting equipment.
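As a rough illustration of how acquisition and support figures like those above combine over a machine's service life, the sketch below computes a simple lifecycle cost. The specific values are hypothetical mid-range choices of ours, drawn loosely from the ranges reported above, not figures from any one jurisdiction.

    def lifecycle_cost(units, unit_price, software, annual_om, years):
        # Acquisition cost plus operations and maintenance over the service life.
        return units * unit_price + software + annual_om * years

    total = lifecycle_cost(units=150, unit_price=6_500,  # precinct-count scanners
                           software=60_000,              # election management software
                           annual_om=20_000, years=10)   # annual O&M over 10 years
    print(f"${total:,}")                                 # -> $1,235,000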
For decisions on whether to invest in new voting equipment, both initial capital costs (i.e., the cost to acquire the equipment) and long-term support costs (i.e., operation and maintenance costs) are relevant. Moreover, these collective costs (i.e., lifecycle costs) need to be viewed in the context of the benefits the equipment will provide over its useful life. These benefits should be directly linked to the performance characteristics of the equipment and the needs of the jurisdiction.

Estimated Costs of Buying Central-Count Optical Scan Voting Equipment

Estimated Costs of Buying Precinct-Based Optical Scan Voting Equipment

Estimated Costs of Buying Touchscreen DRE Voting Equipment

Election jurisdictions used five basic types of voting methods in the November 2000 election—hand-counted paper ballots, lever machines, punch card equipment, optical scan equipment, and DRE voting equipment. In some cases, the same method was used for all votes cast—mail absentee, in-person absentee, early, normal election day, and provisional election day. Others used different methods for different types of votes. For example, any jurisdiction that used lever or DRE equipment normally used some different method of counting mail absentee ballots, because neither method uses individual paper ballots that could be mailed to absentee voters. As discussed earlier in this chapter, any of these voting methods can produce accurate, reliable vote counts if the people, processes, and technology required to accomplish this task are appropriately integrated. However, in considering new voting equipment, most jurisdictions have focused on two types of equipment—optical scan and DRE. Optical scan equipment can be used for counting ballots at a central location, or a counter can be located at each precinct where voters cast their votes. A central-count configuration is generally less expensive, particularly in larger jurisdictions, because fewer pieces of equipment are needed. However, with a central-count configuration, voters cannot be notified of any mistakes they made in filling out their ballots and offered an opportunity to correct them. Optical scan counters located at voting precincts can be programmed to notify voters if they have voted for more candidates for an office than permitted (overvotes) or have not voted for a specific office (undervotes). Such voters can then be offered an opportunity to correct their ballot, if they wish; a minimal sketch of this check appears at the end of this section. For example, the voter may wish to correct any overvotes but may have deliberately chosen not to vote for any candidate for a specific office. Properly programmed, DRE voting equipment does not permit the voter to overvote and can also notify the voter of any undervotes. Jurisdictions may have different requirements for evaluating the purchase of new voting equipment. For example, large jurisdictions with long ballots with multiple offices and initiatives that must be printed in multiple languages will have requirements different from those of small jurisdictions with short ballots printed only in English. Some equipment has more features to accommodate those with disabilities than others. For example, with most types of voting equipment, ballots with larger print or magnifying glasses can be offered to voters with impaired sight. Currently, however, only certain models of touchscreen DRE equipment can be configured to accommodate most persons with disabilities, such as persons who are blind, deaf, paraplegic, or quadriplegic.
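As promised above, here is a minimal sketch of the precinct-count overvote and undervote check. It is illustrative only; the office names and vote limits are hypothetical.

    def check_ballot(votes_per_office, allowed):
        # Return the overvoted and undervoted offices on a single ballot.
        over = [o for o, n in votes_per_office.items() if n > allowed[o]]
        under = [o for o, n in votes_per_office.items() if n == 0]
        return over, under

    allowed = {"President": 1, "U.S. Senate": 1}
    over, under = check_ballot({"President": 2, "U.S. Senate": 0}, allowed)
    if over or under:
        # A precinct-count unit programmed this way returns the ballot here, so
        # the voter can fix overvotes or confirm that an undervote was deliberate.
        print("return ballot to voter:", over, under)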
We developed cost estimates for three approaches to replacing existing voting equipment—central-count optical scan equipment; precinct-count optical scan equipment; and touchscreen DRE equipment that could accommodate persons with disabilities, except those who are quadriplegic. The cost estimate for each approach used a set of assumptions that may overestimate the needs and costs for some jurisdictions and underestimate the needs and costs for others. These assumptions and limitations are discussed in more detail in the text that accompanies each estimate. Our estimated purchase costs range from about $191 million for central-count optical scan equipment to about $3 billion for touchscreen DRE units, with at least one unit in every precinct equipped to enable most voters with disabilities to cast their votes on DRE units in secrecy. Our estimates used vendor cost data provided in August 2001, and these costs are subject to change. With the exception of central-count optical scan units for jurisdictions with fewer than 25,000 registered voters, these cost estimates did not include software or other necessary support items. Our estimates generally included only the cost to purchase the equipment and did not include the software costs associated with supporting a specific election and performing related election management functions, which generally varied by the size of the jurisdiction that purchased the equipment. Also, our estimates did not include operations and maintenance costs, because reliable data were not available from the jurisdictions. The cost of software and other items could substantially increase the actual cost to purchase new voting equipment. Actual costs for any specific jurisdiction would depend upon the number of units purchased, any quantity discounts that could be obtained, the number of reserve units purchased, and the cost of software and other necessary ancillary items. In a central-count optical scan system, ballots are transported from the precincts to a central location for counting. Our estimates used vendor cost data provided in August 2001; actual costs per unit may be more or less than those used in our estimates. Vendors provided data on three central-count optical scan units. The least expensive unit costs $20,000, including a personal computer, card reader, and software. The vendor recommends 1 unit for each 25,000 registered voters. This is the unit we used in our cost estimates for election jurisdictions with 25,000 or fewer registered voters. We had data on two high-speed central-count units that we used for jurisdictions with more than 25,000 registered voters. The $24,000 unit had a counting capacity of 9,000 ballots per hour, and the $55,000 unit had a capacity of 24,000 ballots per hour. Prices did not include software costs, which varied by the number of registered voters in the jurisdiction and ranged from $15,000 to $300,000 per jurisdiction. For jurisdictions with more than 25,000 registered voters, we estimated costs assuming that each jurisdiction would have one $55,000 unit and one $24,000 unit. None of our estimates included such associated costs as the cost of purchasing individual "privacy booths" for voters to mark their ballots or the cost of ballots and other supplies. In addition, our estimates for central-count systems did not include separate units for subcounty minor civil divisions that have responsibility for conducting elections in some states.
The number of registered voters in these subcounty election jurisdictions—more than 7,500 of them—varied widely; some had fewer than 100 registered voters, while others had 40,000 or more. The cost estimate shown in table 12 would be considerably higher if we assumed that each election jurisdiction within a county purchased central counters. We developed separate cost estimates for replacing each type of voting method used in the November 2000 general election. Given the assumptions we used, we estimated that it would cost about $191 million to purchase 2 central-count optical scan units for each of the 3,126 county election jurisdictions in the United States, plus 1 reserve unit for each jurisdiction with more than 25,000 registered voters.

Of the 3,126 counties, 2,072, or about two-thirds, had 25,000 or fewer registered voters. We estimated that it would cost about $83 million to purchase two $20,000 units—one for election day and one for absentee ballots—for each of these jurisdictions. Each unit would include a personal computer, card reader, and software. Because each individual unit should accommodate the entire vote counting needs of these jurisdictions, we did not include an estimate for reserve units; we assumed that the second machine could function as the reserve. For the 1,054 election jurisdictions with more than 25,000 registered voters, we estimated that it would cost about $109 million to buy 2 central-count optical scan machines per jurisdiction plus 1 reserve unit. The election day unit would cost $55,000 and have a counting capacity of 24,000 ballots per hour; the absentee ballot and reserve units would cost $24,000 each and have a counting capacity of 9,000 ballots per hour. The cost per unit does not include software or other associated costs.

It is important to remember that within each of the categories we used—small and large—there is wide variation in the number of registered voters. Some of the small jurisdictions had fewer than 3,000 registered voters; some of the large jurisdictions had more than 500,000, and the largest election jurisdiction in the nation had more than 4 million. Thus, our assumptions would not necessarily match the needs of individual jurisdictions. For example, the capacity of the 2 central-count units used in the estimate for small jurisdictions would exceed the needs of jurisdictions with fewer than 5,000 registered voters. Similarly, the capacity of the 2 central-count units used in the estimate for large election jurisdictions would probably exceed the needs of jurisdictions with 100,000 registered voters but would probably be insufficient to count the votes of the largest jurisdictions in 1 or 2 days. We assumed that each election jurisdiction with more than 25,000 registered voters would have one of the $24,000 units in reserve, should either of the other 2 units break down. The estimate in table 12 included the 36 election jurisdictions in Oregon; we assumed that Oregon would use a central-count system because Oregon used mail ballots for all ballots cast in the November 2000 general election.
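The arithmetic behind these totals can be checked directly from the unit prices and jurisdiction counts given above. The following Python sketch is purely illustrative and uses only figures stated in this chapter; the small differences from the published totals reflect rounding in the report text.

```python
# Illustrative check of the central-count purchase estimate described above.
# All unit prices and jurisdiction counts are as stated in this chapter.

SMALL_JURISDICTIONS = 2072   # 25,000 or fewer registered voters
LARGE_JURISDICTIONS = 1054   # more than 25,000 registered voters

# Small jurisdictions: two $20,000 units (election day and absentee);
# the second unit doubles as the reserve, so no extra reserve is priced.
small_cost = SMALL_JURISDICTIONS * 2 * 20_000

# Large jurisdictions: one $55,000 election day unit, one $24,000
# absentee unit, and one $24,000 reserve unit.
large_cost = LARGE_JURISDICTIONS * (55_000 + 24_000 + 24_000)

print(f"Small jurisdictions: ${small_cost:,}")               # $82,880,000 (~$83 million)
print(f"Large jurisdictions: ${large_cost:,}")               # $108,562,000 (~$109 million)
print(f"Total:               ${small_cost + large_cost:,}")  # $191,442,000 (~$191 million)
```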
Estimated Costs of Buying Precinct-Based Optical Scan Voting Equipment

Purchasing optical scan equipment that is placed in each voting precinct is more expensive than purchasing central-count optical scan equipment because each election jurisdiction usually has multiple precincts. Although the cost per unit is much lower, the number of units is much higher. We estimated that it would cost about $1.3 billion to purchase an optical scan unit for each of 185,622 precincts in the country, excluding Oregon. According to vendor-provided data, precinct-based optical scan units range from $4,500 to $7,500 each; none of the prices included software. For our estimate, we assumed that each precinct would have a $6,500 optical scan unit—neither the least nor the most expensive available. Each unit could be programmed to alert voters to errors (overvotes and undervotes) on their ballots. Each unit would also record and total the votes cast for each candidate and each issue on the ballot at the precinct at which it was placed. With this option, we also assumed that each election jurisdiction would have a central-count optical scan unit for counting absentee ballots within the jurisdiction. Placing a central-count optical scan unit within each of the more than 7,500 subcounty election jurisdictions would increase the cost estimates shown in table 13. The unit costs used for the estimates do not include software, which ranges from $15,000 to $300,000 per jurisdiction, depending upon the number of registered voters in the jurisdiction. The estimated costs also do not include training, supplies (such as ballots), or other costs associated with operating and maintaining the units. Finally, although we could determine the types of voting methods used within 36 election jurisdictions that used mixed methods, we could not make this determination at the precinct level for 3,472 precincts in these jurisdictions. Therefore, the cost estimates for any specific type of voting method, such as punch cards, may not include all precincts that used that method. Actual costs would depend upon the number of units purchased, any quantity discounts that could be obtained, the number of reserve units purchased, and the cost of software and other necessary ancillary items.
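The precinct-count total can be reproduced the same way. In the sketch below, the mix of absentee central-count units (one $20,000 unit for each small jurisdiction and one $24,000 unit for each large one) is an assumption carried over from the DRE estimate that follows; the chapter does not break that mix out explicitly for this option.

```python
# Illustrative check of the precinct-count purchase estimate.
PRECINCTS = 185_622                      # excluding Oregon
precinct_units = PRECINCTS * 6_500       # one mid-priced unit per precinct

# Assumed absentee central-count mix (see lead-in above).
absentee_units = 2072 * 20_000 + 1054 * 24_000

total = precinct_units + absentee_units
print(f"Precinct units: ${precinct_units:,}")   # $1,206,543,000
print(f"Absentee units: ${absentee_units:,}")   # $66,736,000
print(f"Total:          ${total:,}")            # $1,273,279,000 (~$1.3 billion)
```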
Estimated Costs of Buying Touchscreen DRE Voting Equipment

DRE equipment is available in two basic types. With full-face DRE equipment, the entire ballot is placed on the machine, with buttons beside each candidate or issue choice on the ballot. However, it may be difficult to design an easily readable ballot for a full-face DRE machine that includes many candidates and issues or that must be printed in multiple languages. The second type of DRE machine is the touchscreen, analogous to a bank ATM. DRE machines range in price from $2,000 to $6,000, depending upon the features offered. These prices did not include items that can substantially increase the per unit cost, such as software and, in some cases, such essential equipment as card readers and smart cards for each machine. Our estimate used a touchscreen machine that cost $3,995 for each unit equipped for voters with disabilities and $3,795 for each unit not so equipped. The equipped unit can accommodate all voters with disabilities except those who are quadriplegic. The unit cost includes the vote count cartridge but does not include software, which ranges from $15,000 to $300,000 per jurisdiction, depending upon the number of registered voters in the jurisdiction.

One reason that touchscreen DRE equipment is generally more costly than precinct optical scan equipment is that more units are required. Voters do not vote on precinct optical scan units—they mark their ballots at the voting place and then feed their individual ballots into the precinct counter to be read and counted. With DRE units, however, as with lever equipment, voters actually cast their ballots on the machines. Thus, the cost of purchasing DRE equipment is affected by the number of voters who use each DRE unit during the course of an election day. Some states have statutory standards for the maximum number of voters per voting machine. We used two assumptions—1 unit for each 250 registered voters per precinct and 1 unit for each 500 registered voters per precinct. We also assumed that there would be at least 1 unit equipped for voters with disabilities at every precinct—a minimum of 185,622 units. Because no data were available on the number of registered voters in each precinct in Alaska, North Dakota, and Wisconsin, our estimate provides a single disability-equipped unit for each precinct in those states. Consequently, our estimates may understate the total number of touchscreen units needed.

Using 250 voters per DRE unit, we estimated that 763,196 DRE units would be required to replace all voting equipment in the United States (see table 14). This figure includes more than 24,000 reserve units, assuming reserves were 3 percent of the estimated average number of units needed in each election jurisdiction. The estimated total cost of purchasing these units is $3 billion, including one $20,000 central-count optical scan unit for each of the 2,072 election jurisdictions that had 25,000 or fewer registered voters and one $24,000 central-count optical scan unit for each of the 1,054 election jurisdictions that had more than 25,000 registered voters (excluding Oregon). The central-count units were for counting absentee ballots in each election jurisdiction. As shown in table 15, purchasing 1 unit for each 500 registered voters per precinct reduces the estimated number of touchscreen units needed, including reserves, to 388,198 and the cost to around $1.6 billion, including the central optical scan counters for each jurisdiction. Again, software is a substantial additional cost, approximately $46 million ($15,000 per jurisdiction) to $927 million ($300,000 per jurisdiction). Purchasing software separately for each of the more than 7,500 subcounty election jurisdictions—cities, townships, and villages—would cost more. For example, if the average software cost for each of 7,500 jurisdictions were $20,000, the additional cost would be $150 million. Actual costs for any specific jurisdiction would depend upon the number of units purchased, any quantity discounts that could be obtained, the number of reserve units purchased, and the cost of software and other necessary ancillary items. Notes for tables 14 and 15 are found at the end of table 15.
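The DRE totals and the software range cited above can also be checked. In the sketch below, the split between disability-equipped and standard units (at least one $3,995 equipped unit per precinct, the remainder at $3,795) and the assumption of 3,090 jurisdictions outside Oregon (3,126 less Oregon's 36) are illustrative readings of the figures in this chapter, not data taken from tables 14 and 15 themselves.

```python
# Illustrative check of the touchscreen DRE estimates.
PRECINCTS = 185_622
ABSENTEE_CENTRAL = 2072 * 20_000 + 1054 * 24_000   # absentee counters

def dre_total(total_units):
    equipped = PRECINCTS * 3_995                   # one equipped unit per precinct
    standard = (total_units - PRECINCTS) * 3_795   # remaining units
    return equipped + standard + ABSENTEE_CENTRAL

print(f"1 per 250 voters: ${dre_total(763_196):,}")   # ~$3.0 billion
print(f"1 per 500 voters: ${dre_total(388_198):,}")   # ~$1.6 billion

# Software range, assuming 3,090 jurisdictions outside Oregon.
print(f"Software, low:  ${3_090 * 15_000:,}")    # $46,350,000 (~$46 million)
print(f"Software, high: ${3_090 * 300_000:,}")   # $927,000,000 (~$927 million)
```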
On the basis of vendors surveyed, we identified five new models of voting equipment—four DRE touchscreens and one optical scan. We also identified two proposals for a new method of voting—telephone-based voting. None of these were used in the November 2000 election.

New DREs Are Similar to Existing DREs, With Added Features

Four new DRE models are available that build on the advanced features already present in the most recent of the DREs used in the November 2000 election and offer several new options. In general, these new options are intended to improve the DREs’ ease of use and security characteristics. Other characteristics, such as accuracy, efficiency, and cost, are generally not affected. The new options include the following:

- A “no-vote” option helps avoid unintentional undervotes (offered by three of the four new DREs). These DREs’ touchscreens provide the voter with the option to select “no vote (or abstain)” on the display screen if the voter does not want to vote on a particular contest or issue.
- A recover-spoiled-ballots option allows voters to recast their votes after their original ballots are cast. In this scenario, every DRE at the poll site is connected to a local area network. A poll official would void the original “spoiled” ballot through the administrative workstation that is also connected to the local area network. The voter could then cast another ballot.
- A voice recognition capability allows voters to make selections orally.
- A printed-receipt option provides a paper printout or ballot each time a vote is cast. Vendors claim that this feature gives voters and/or election officials an opportunity to check what is printed against what is recorded and displayed. It is envisioned that procedures would be in place to retrieve the paper receipts from the voters so that they could not be used for vote selling. One of the new DREs also has an infrared “presence sensor” that is used to control the receipt printer in the event the voter is allowed to keep the paper receipt; if the voter leaves without taking the receipt, the receipt is pulled back into the printer.

New Optical Scan Equipment Is Very Similar to Existing Equipment

Our survey also identified one vendor that proposed a new model of its existing precinct-based optical scanner. According to the vendor, the primary advantage of this new model is that it is lighter and quieter than the previous model and has expanded memory capabilities. However, this model’s accuracy, ease of use, efficiency, and security characteristics do not generally differ from those of comparable existing optical scan devices. The new model is slightly more expensive than the existing model.

Feasibility of Telephone-Based Voting Is Being Explored

Our survey identified two vendors that are exploring the feasibility of a new method of voting in which voters would record their votes using a touch-tone telephone; the votes would be transmitted in real time over public telephone lines and recorded electronically at a central location. According to one of the vendors, this method of voting could be based at poll sites and/or remote locations. In either case, the voter interacts with the telephone in essentially the same way. As with the new DREs, telephone-based voting is chiefly concerned with improving the ease with which voters use the equipment. The two vendors’ respective approaches to implementing this method of voting are described below.

Vendor A (poll-site or remote voting): Once a voter was authenticated (the vendor did not say how this would be done, although for poll-site voting it could be done by traditional means), he or she would be provided with an ID and a list of the candidates or issues, each with corresponding unique code numbers. For poll-site voting, the poll-site worker would hand these code numbers to the voter and provide necessary instructions; for remote voting, the codes would be mailed to the voter before election day. The voter would use the touch-tone telephone to key in the ID number to gain access and then enter the code numbers for each selection. After each selection, a recorded message would be sent to the voter to confirm the selection. The voter could make any necessary changes and would have access to live assistance if necessary.
For poll-site voting, the vote would be recorded on a PC at the polling site, which would send the information to an election data center over the telephone once the polls closed. For remote voting, the vote would be sent directly to the data center. According to the vendor, the system would provide multiple languages and interactive voice recognition technology to accommodate persons with disabilities.

Vendor B (poll-site voting for persons with disabilities): Once the voter was authenticated (again, the vendor did not specify how, although traditional approaches could be used), the person would be provided with an ID and directed to a poll worker, who would dial up the system and input the ID. Once the ID number was input, a recording would ask, “Is your candidate ready to vote?” At this point, the poll worker would hand the phone (which could include a headphone set) with button panel to the voter. The voter would then be prompted to request a language of preference and would be directed through the voting sequence. The voter could vote by using the touch-tone keys on the telephone or by speaking responses. After the voter selected a candidate or issue, the system would provide feedback to confirm the selection. The telephone also would read a summary of the results and allow the voter to revise any previous selections. Once the voter finished, the system would hang up, and the ballot would be recorded on a central system.

The challenges confronting local jurisdictions in using voting technologies are not unlike those faced by any technology user. As discussed throughout this section, these challenges include the following:

- Having reliable measures and objective data to know whether the technology being used is meeting the needs of the jurisdiction’s user communities (both the voters and the officials who administer the elections). Looking back to the technology used in the November 2000 election, our survey of jurisdictions showed that the vast majority of jurisdictions were satisfied with the performance of their respective technologies. However, this satisfaction was based mostly not on hard data measuring performance but on the subjective impressions of election officials. Although these impressions should not be discounted, informed decisionmaking on voting technology investment requires more objective data.
- Ensuring that necessary security, testing, and maintenance activities are performed. Our survey of jurisdictions showed that the vast majority of jurisdictions perform these activities in one form or another, although the extent and nature of these activities vary among jurisdictions and depend on the availability of resources (financial and human capital) that are committed to them.
- Ensuring that the technology will provide benefits over its useful life commensurate with lifecycle costs (acquisition as well as operations and maintenance) and that these collective costs are affordable and sustainable. Our survey of jurisdictions and discussions with jurisdiction officials showed that the technology type and configuration that jurisdictions are employing vary depending on their unique circumstances, such as size and resource constraints, and that reliable data on lifecycle costs and benefits are not available.
- Ensuring that the three elements of people, process, and technology are managed as interrelated and interdependent parts of the total voting system. How well technology performs is a function not only of the technology’s design but also of the people who interact with the technology and the processes governing this interaction.
The growing use of the Internet for everyday transactions, including citizen-to-government transactions, has prompted considerable speculation about applying Internet technology to elections. Such speculation was recently fueled by the vote counting difficulties of the November 2000 election, which sparked widespread interest in the reform of elections (particularly the technology used to record and count votes). However, well before the November 2000 election, some groups had already begun considering the pros and cons of Internet voting. In addition to the growing popularity of the Internet, interest in Internet voting was spurred by claims that it would increase the convenience of voting (particularly for those with limited mobility) and add speed and precision to vote counts. Further, Internet voting proponents have claimed that this convenience could increase voter turnout. As a result, academics, voting jurisdiction officials, state election officers, and others have been examining Internet voting for some time. Although opinion is not unanimous, consensus is emerging on some major points:

- Security is the primary technical challenge for Internet voting, and addressing this challenge adequately is vital for public confidence.
- Internet voting as an additional method of voting at designated poll sites may be technically feasible in the near term, but the benefits of this approach are limited to advancing the maturity of the technology and familiarizing voters with the technology.
- The value of Internet voting is uncertain because reliable cost data are not available and its benefits are in dispute.
- Voter participation and the “digital divide” are important issues, but controversy reigns over their implications.

The Internet originated in the late 1960s through government-funded projects to demonstrate and perform “remote-access data processing,” which enabled researchers to use off-site computers and computer networks as if they were accessible locally. Although these networks were initially intended to support government and academic research, when their public and commercial value was realized, they were transformed into the medium known today as the Internet. Over time, these networks were privatized, and additional networks were constructed; the spread of networks, along with advances in computing technology, fostered the Internet’s growth. The development of the World Wide Web and “browser” software and advancements in the processing capability of personal computers greatly facilitated public use of the Internet. In the early 1990s, a major surge occurred in Internet use that continues unabated today. According to the Department of Commerce, the number of Internet users in the United States rose to about 117 million in the year 2000. (The population of the United States is over 281 million.) Promoting the easy sharing of information was a prime motivation for the Internet. To this end, systems and software followed open rather than proprietary standards, and software tools were put into the public domain so that anyone could copy, modify, and improve them. This approach is a source of both strength and weakness. Openness and flexibility contributed to the rapid evolution and spread of Internet information and technology.
But this openness and flexibility, and the vast web of interconnections that resulted, are also the source of widespread and growing security problems. This interconnectivity has also led to growing concerns about individual privacy. Information that may previously have been publicly available in principle has become easily available in practice to almost anyone, and even private information can be accessed if security protections break down. Another growing concern is that the availability of Internet technology is producing a “digital divide”: two classes of people separated by their ability to access the Internet and all that it offers. In investigating this question, both we and the Department of Commerce found greater home usage of the Internet by more highly educated and wealthier individuals. For Internet-based voting, the generic Internet issues—security, privacy, and accessibility—are entwined with issues relating to the unique nature of voting (such as ballot secrecy). Another important issue is the practical consideration of the costs of Internet voting versus its benefits.

When Internet voting is discussed, the popular image is of citizens voting on-line from any computer anywhere in the world. However, other possible scenarios have been suggested for applying Internet technology to elections. Such groups as the Internet Policy Institute and the California Internet Voting Task Force have pointed out that various approaches to Internet voting are possible, ranging from the use of Internet connections at traditional polling stations to the ability to vote remotely from anywhere. An intermediate step along this range is an option referred to as “kiosk voting,” in which voters would use conveniently located voting terminals provided and controlled by election officials. Some voting experts see the three types of Internet voting as evolutionary, because the issues become more complex and difficult as elections move from poll sites—where limited numbers of voting devices are physically controlled by election officials—to sites where voting devices are not under such direct control and the number of devices is much greater (see figure 75).

Poll-Site Internet Voting

In poll-site Internet voting, Internet-connected computers either replace or reside alongside conventional dedicated poll-site equipment. In its most limited configuration, in which voters vote only at their traditional assigned polling places, poll-site Internet voting is little more than another type of voting equipment. An expanded configuration would permit voters to vote at any polling place within their jurisdiction, thus expanding their voting options—as well as increasing the complexity of the system required to support these options. In poll-site Internet voting at assigned polling places, poll workers would authenticate voters as they traditionally do; that is, they would follow the local procedures for ensuring that the voter was who he or she claimed to be and that the voter was registered in that precinct. However, if a voter wished to use an Internet device to vote, a poll worker would also assign the voter a computer-recognizable means of identification—a password or personal identification number (PIN), for example. At the Internet voting device, the voter would identify himself or herself to the system using the identification assigned; the voter would then be presented with an electronic ballot on which to vote.
When the voter submitted the ballot electronically, it would be encrypted and sent via the Internet to the jurisdiction’s central data center, where the vote would be decrypted, the voter ID separated from the vote, and the vote and voter ID stored separately. Through software checks, the system would verify the validity of the ballot and ensure that it had not been altered in transit. The system would also send the voter an acknowledgment that the vote was received. However, the acknowledgment would not indicate how the voter voted, because the system would have separated that information from the voter’s identity to preserve the secrecy of the ballot. An extended version of poll-site Internet voting would allow voters to vote at other poll sites within a jurisdiction, rather than limiting them to their traditional assigned sites. These poll sites could be either within the same precinct or beyond the precinct within the voting jurisdiction. In any case, poll workers would have to be able to authenticate voters from a larger population than they do now; that is, the voters in the entire precinct or voting jurisdiction, rather than simply those assigned to an individual poll site. Further, the election officials would have to present voters with the appropriate ballot style for which they were eligible to vote (corresponding to their local precinct). Figure 76 summarizes the process for poll-site voting.
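The ballot handling just described (encrypt for transit, then separate the voter’s identity from the vote on receipt) can be illustrated with a short sketch. The following Python code is purely illustrative and is not drawn from any proposed system: symmetric Fernet encryption from the cryptography package stands in for whatever scheme a real system would use, the PIN value is hypothetical, and authentication, key management, and checks against the registration rolls are all omitted.

```python
# Minimal sketch of encrypted ballot submission with ID/vote separation.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stands in for real key management
cipher = Fernet(key)

def cast_ballot(voter_id: str, vote: str) -> bytes:
    """Encrypt the ballot (voter ID plus vote) for transmission."""
    return cipher.encrypt(json.dumps({"id": voter_id, "vote": vote}).encode())

def receive_ballot(token: bytes, ids_seen: set, votes: list) -> str:
    """Decrypt, then store the voter ID and the vote separately."""
    ballot = json.loads(cipher.decrypt(token))  # decryption fails if altered in transit
    ids_seen.add(ballot["id"])     # record only that this voter has voted
    votes.append(ballot["vote"])   # stored with no link back to the voter
    return "ballot received"       # acknowledgment does not reveal the vote

ids_seen, votes = set(), []
ack = receive_ballot(cast_ballot("PIN-4821", "Candidate A"), ids_seen, votes)
```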
Of the various types of Internet voting, poll-site Internet voting requires the least change to current election processes. For example, traditional means can be employed for poll watching and physical security. For voting at assigned poll sites, voter authentication could also be done traditionally. However, if jurisdictions offer more options for polling places, the voter authentication system becomes more complex. Poll-site Internet voting in general does not offer advantages over traditional voting technology. The California Internet Voting Task Force described poll-site Internet voting as primarily useful for testing technology that would allow voters to cast ballots from sites other than their assigned polling places. In the November 2000 federal election, poll-site Internet voting was tested in nonbinding pilot projects in four California counties to ascertain voter satisfaction and acceptance of the technology. Voters who chose to participate, as well as election officials, generally reacted positively to the tests. However, some voters had security concerns, and some jurisdictions questioned the cost-effectiveness of expanding the pilots.

Kiosk Voting

An extension of poll-site Internet voting is the proposal to establish Internet voting sites at convenient public places, such as libraries and community centers. In this scenario, jurisdictions would provide Internet voting equipment but generally not staff the voting sites. If the voting sites were unstaffed, the voting equipment would require protection against tampering, and advance voter authentication would have to be implemented. In kiosk Internet voting, voters would have to be authenticated and provided with a means of identification (such as a password or PIN), just as in poll-site Internet voting. How this process would take place would depend on whether the voting sites were staffed by poll workers. If they were, poll workers could use the same means of voter authentication used for the expanded poll-site voting. In an unstaffed setup, voters would have to authenticate themselves in advance. For advance authentication, the voter would contact the authentication authority before the election, and the means of identification would be sent to the voter, similar to the way absentee ballots are requested and mailed out in a conventional election system. Once the voter received the means of identification, the rest of the voting process would be the same as for extended poll-site Internet voting. Figure 77 summarizes this kiosk voting process; steps differing from the process described in figure 76 are shown in heavily outlined boxes. Retaining some of the features of traditional poll-site voting, this option adds some of the features of remote voting. As in traditional poll-site voting, the equipment is under the control of election officials. (For unstaffed voting kiosks, some form of security is usually proposed to prevent tampering, such as camera surveillance or security guards.) However, as in remote voting, procedures and technology must be in place for voter authentication in the absence of poll workers. Kiosk voting is currently a purely conceptual alternative; no jurisdiction has yet tried to demonstrate the concept.

Remote Internet Voting

In its ultimate form, remote Internet voting allows voters to cast ballots from any Internet-connected computer anywhere in the world. This form of Internet voting would allow maximum convenience to those voters with access to networked computers. However, because neither the actual machines used for voting nor the network environment could be directly controlled by election officials, this option would present election systems with the greatest technological challenge. Proposals for remote Internet voting, as well as for kiosk voting, usually assume that voters will submit requests for Internet voting in advance and that means of identification will be sent to these voters before the election. In addition to the means of identification, the jurisdiction would also have to take steps to ensure that voters secured the platform on which they proposed to vote. Some have suggested that the jurisdiction would have to send out software for the voter to install, such as a dedicated operating system and Web browser; such software would have to accommodate many platforms and system configurations. Once the voter had secured the computer by the means prescribed by the jurisdiction, the rest of the voting process would be similar to that described earlier. One difference, however, is that after voting, voters would have to reconfigure their computers to return them to their previous state (for example, they might need to reset their network settings to those needed to connect to their Internet service providers). In cases where voters wished to vote from computers they did not own (at schools or businesses, for example), this process could be problematic. Figure 78 summarizes the process for remote Internet voting; steps that differ from the processes in figures 76 and 77 are shown in heavily outlined boxes. Like any form of remote voting, including the mail-in absentee voting used in most states today, remote Internet voting lacks some of the safeguards associated with voting within the controlled environment of a traditional polling place; that is, election officials cannot guarantee that the ballot is kept secret and that voters are not coerced. Likewise, traditional citizen poll watching is impossible, because voting takes place in private settings.
Remote Internet voting has been used for private elections for several years, but only recently have attempts been made to use Internet technology for public elections in which candidates were running for federal office. To date, no jurisdiction has attempted to use remote Internet voting in a binding general election, although some political parties have used remote Internet voting in binding primary elections. In addition, the Department of Defense (DOD) conducted a pilot project to allow military service members, their dependents, and citizens stationed overseas to send binding absentee ballots over the Internet rather than by mail. The DOD pilot, however, differed in a number of respects from a jurisdiction-run remote Internet election. In the DOD pilot, the ballots were not sent to an electronic data center for tallying but rather were sent to various local jurisdictions, where officials printed the ballots out and processed them like paper absentee ballots. Further, responsibility for voter authentication was delegated to DOD, so the local jurisdictions did not have to perform that step or issue computer-readable means of identification. In some of the primary elections that allowed for remote Internet voting, results were mixed. Many voters were comfortable with the process, but some also expressed concerns about security. Disputes about Internet accessibility also led to a lawsuit in the case of the 2000 Arizona Democratic primary. Further, a range of problems surfaced, from the technical (some computers and Web browsers were incompatible with the election system) to the procedural (additional telephone help lines had to be added).

The standards by which new election technologies, such as Internet voting, should be judged combine practical considerations (such as cost and benefits) with generally recognized requirements for free and fair elections: (1) the secrecy of the ballot should be ensured; (2) only authorized persons should be able to vote; (3) no voter should be able to vote more than once; (4) votes should not be modified, forged, or deleted without detection; (5) votes should be accurately accounted for and verifiable; and (6) voters should not be denied access to the voting booth. For Internet voting to reasonably meet these requirements, a number of issues need to be resolved. These issues have been raised by groups and individuals with voting expertise, including election officials, citizens groups, voting technology vendors, and academics. Among these issues, we have identified those that have received the widest discussion and are generally agreed to be of primary importance; these fall into four general categories: ballot secrecy/voter privacy, security, accessibility, and cost versus benefits.

Although ballot secrecy and voter privacy are closely related, they can be distinguished and are treated differently in practice in many forms of elections. Ballot secrecy refers to the content of the vote; voter privacy refers to the voter’s ability to cast a vote without being observed. In poll-site voting, voter privacy is generally protected by election officials and observers. However, in voting that does not take place at poll sites, including traditional mail-in absentee balloting, election officials cannot safeguard voter privacy, although they can and do take steps to protect ballot secrecy.
In any form of voting that takes place away from a poll site (including conventional mail-in absentee voting), safeguards are imposed to protect ballot secrecy at the receiving end (the election office) and in transit, but it is not practical to impose such safeguards at the origin (the voter’s location). The current mail-in absentee balloting process offers some procedural assurances that election officials cannot trace votes back to individuals. That is, the voter returns the absentee ballot in two envelopes: the outer envelope includes identifying information about the voter and is signed, but the inner one has no identifying information that links the ballot to the voter. When absentee ballots arrive at the election office, election workers separate the inner envelopes from the outer ones and randomize them before the ballots are inspected. This procedure ensures secrecy at the receiving end (as long as more than one absentee ballot is received). It does not ensure ballot secrecy or voter privacy at the origin or in transit. With absentee balloting, as with remote Internet voting, practical solutions are not available to ensure that voters are not spied on or coerced by a third party. The digital process proposed by the California Internet Voting Task Force for transmitting ballots over the Internet is generally patterned after the mail-in absentee ballot process. The process aims to preserve ballot secrecy and integrity through the use of encryption technology working with various forms of authentication, such as digital certificates. Encryption technology would act as the “envelopes” preserving the secrecy and integrity of the ballot, and the electronic voter authentication would be automatically stripped from the ballot before the votes were tabulated. As in the mail-in absentee ballot process, the voter authentication and the actual ballot would be stored separately and randomized to preserve ballot secrecy. Assuming that these technologies work as designed, this means of transmitting and receiving the ballot would protect the ballot’s secrecy. As in mail-in absentee balloting, voters would be responsible for protecting their own physical privacy.
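The double-envelope pattern, in its digital form, comes down to two steps: strip the authentication (the “outer envelope”) after recording who voted, and randomize the remaining ballots (the “inner envelopes”) before they are inspected, so that arrival order cannot link a ballot to a voter. The sketch below is purely illustrative; the envelope values are hypothetical, and a real system would verify the outer “signature” cryptographically rather than take it on faith.

```python
# Minimal sketch of the two-envelope separation and randomization.
import random

received = [
    {"outer": "voter-101-signed", "inner": "ballot A"},  # hypothetical data
    {"outer": "voter-102-signed", "inner": "ballot B"},
    {"outer": "voter-103-signed", "inner": "ballot C"},
]

def open_envelopes(envelopes):
    # Step 1: record who voted, using the outer envelopes only.
    who_voted = [e["outer"] for e in envelopes]
    # Step 2: separate the inner envelopes and shuffle them so the
    # counting order no longer matches the arrival order.
    inner = [e["inner"] for e in envelopes]
    random.shuffle(inner)
    # Step 3: only now are the ballots inspected and counted.
    return who_voted, inner

who_voted, ballots_to_count = open_envelopes(received)
```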
Like other forms of remote voting, then, proposed implementations of remote Internet voting would not protect voters’ physical privacy (leaving open the risk that voters may be coerced through threats, bribery, and other forms of pressure); however, unlike paper-based voting, remote Internet voting also introduces threats to electronic privacy. For example, voters who access the Internet through a local area network (such as at an office, school, or library) might have their privacy compromised by a network administrator who could access the voter’s computer while the ballot was in an unencrypted state. In one of the Internet voting pilots in which remote voting was allowed, voters relied heavily upon computers at offices and public libraries; because these computers were tied into central networks, the potential for compromise was present. Reducing the likelihood of such breaches of privacy would require that substantial legal penalties be imposed on such activities. Finally, any connection to the Internet brings with it the possibility that hackers or malicious software could target the connected computer for attack. Software is available now that allows users to remotely monitor other people’s activities over the Internet, without necessarily being detected or causing any obvious harm. Such snooping allows hackers to look for transactions of interest to them. As transactions increase in significance, their attraction to hackers increases. The challenge and high stakes of an Internet election are very likely to attract not only snooping, but also determined efforts at disruption and fraud.

The process described for transmitting and receiving ballots would be used in all the forms of Internet voting proposed, not just remote voting. This process does not address protection of voters’ privacy while they are generating ballots. However, in poll-site Internet voting, as in other poll-site voting, election officials can institute procedures to protect voters’ physical privacy at the poll site. Similarly, in kiosk voting, election officials could establish procedures to protect against coercion. Of the three types of Internet voting, remote Internet voting is recognized as least protective of ballot secrecy and voter privacy. On the assumption that techniques such as digital certificates and encryption are effective safeguards for transmission and reception, poll-site Internet voting provides the most privacy safeguards, covering origination, transmission, and reception; kiosk Internet voting could safeguard transmission and reception and (depending on the setup) provide some safeguards on origination; and remote Internet voting could safeguard transmission and reception, but not origination.

Some experts consider that the safeguards now available would be effective for protecting ballot secrecy during transmission and reception. However, other voting experts believe that although digital certificates and encryption could in theory provide the transmission and reception safeguards described, these technologies are not yet mature enough to do so in any large-scale implementation of Internet voting, particularly remote voting. These experts note that as encryption algorithms improve, so do the encryption-cracking tools and the power of the computers that run them. Further, even with perfect technology, they note that the human factor can undermine the goal: digital certificates and encryption depend on passwords or keys, which can be stolen or voluntarily revealed. A further practical difficulty is the cost and technological challenge of creating the infrastructure required for a large-scale implementation of digital certificates. Systems would have to be set up to positively identify voters, issue digital certificates, and manage the exchange and verification of certificates. In the DOD Voting Over the Internet pilot, the system depended on the public key infrastructure that was already in place on DOD’s systems for electronic certificate registration and management services. In addition, for remote Internet voting, some experts believe that any large-scale solution would have to address the problem of maintaining ballot secrecy across different Internet browsers and computing platforms (that is, computers running various versions of the Windows, Macintosh, and Linux operating systems). This problem would require continual attention as operating systems themselves evolve and change; it was not solved in the remote pilot elections in November 2000. In one of these pilots, the vendor that ran the Internet voting software discovered during the election that its voting encryption software was not supported by older Internet browsers. The vendor also reported that several Macintosh users had problems casting their votes on-line and were advised to vote in person.
Beyond the cost and technological problems are the social problems that some experts foresee arising from more widespread use of remote voting. Some voting experts believe that remote voting would encourage organized voter coercion by such groups as employers, unions, nursing homes, and others. One election expert has also noted that the laptops now prevalent in campaign organizations could be used to turn out the vote in favorable precincts, removed from the scrutiny of election officials or poll watchers. The risk of fraud in remote Internet voting has been likened to that in mail-in absentee balloting. In a 1998 report, the Florida Department of Law Enforcement concluded that “The lack of ‘in-person, at-the-polls’ accountability makes absentee ballots the ‘tool of choice’ for those inclined to commit voter fraud.” Some experts suggest that remote Internet voting could compound this problem significantly. Election officials can provide reasonable assurance to voters of the secrecy of their ballots when these officials control the voting equipment. However, when elections are remote, this assurance fades, and when Internet technology is introduced, local election officials can have very little control over the technology. Even with encryption, election officials would not be able to guarantee that the voter’s computer or the jurisdiction’s election servers or communication link would not be compromised. Further, given the vulnerability of the Internet to manipulation, it may ultimately be difficult to convince voters that their votes over the Internet will remain secret.

The primary issue for Internet voting is security; that is, ensuring that the voting technology (and related data and resources) is adequately safeguarded against intentional intrusions and inadvertent errors that could disrupt system performance or compromise votes. In Internet voting, the familiar security threats of the Internet are compounded by the particular security requirements of elections: primarily the secret ballot, but also elections’ low tolerance for fraud and disruption. Because the Internet is being increasingly used to transmit proprietary or privacy-sensitive information—health care records, business documents, engineering drawings, purchase orders, credit information—it has become an increasingly tempting target for attackers. Security experts contend that significant efforts are needed to define, develop, test, and implement measures to overcome the security challenge posed by the increasing complexity, interconnectivity, and sheer size of the evolving Internet. Although complete summary data are not available (many computer security incidents are not reported), the number of reported Internet-related security incidents is growing. For example, the number of incidents handled by Carnegie Mellon University’s CERT Coordination Center increased from 1,334 in 1993 to 8,836 during the first two quarters of 2000. Similarly, the Federal Bureau of Investigation (FBI) reported that its caseload of computer intrusion-related cases is more than doubling every year. The fifth annual survey conducted by the Computer Security Institute in cooperation with the FBI found that 70 percent of respondents (primarily large corporations and government agencies) had detected serious computer security breaches within the last 12 months and that quantifiable financial losses had increased over past years.
These Internet security hazards are especially significant in the context of voting, because voting is an especially significant form of electronic transaction. For remote Internet voting, the problem of malicious software (such as computer viruses, worms, or “Trojan horses”) is acute; such software could be introduced into computers without voters being aware of its presence. Hackers could thus alter ballots, spy on citizens’ votes, or disrupt Web sites, preventing voters from voting. The accessibility and speed that are the hallmarks of the Internet—the very attributes that make Internet voting attractive—are also attractions for malicious or mischievous individuals and organizations that might wish to attack on-line elections. Recent software attacks (such as the ILOVEYOU virus in May 2000, the 1999 Melissa virus, the 2001 Code Red worm, and the Nimda worm of September 2001) illustrate the disruptive potential of such malicious software. In addition, inadvertent errors by authorized computer users could have equally serious consequences if the election systems were poorly protected. Hackers could attack not only the computer on which voting was taking place, but also the communication links between the voters and the election system. Commercial Web sites have been brought down by a technique known as a “denial of service” attack, in which the attacker overloads a Web site with requests for information, jamming the communication lines and preventing legitimate users from interacting with the site. A more refined version of this type of attack, developed recently, is referred to as a distributed denial of service attack. In this type of assault, software programs called worms, which propagate through the network without user intervention, are installed on several computers without the knowledge or consent of their owners. The hacker basically penetrates several computers and turns them into agents, using them to target Web sites. These types of attacks spread quickly and are very difficult to trace. The public became aware of these attacks in February 2000, when Web sites owned by eBay, E*Trade, CNN, and Yahoo were assaulted and their operations affected. Denial of service attacks would be especially threatening to remote Internet voting, since they could prevent voters from voting. In poll-site voting, however, the election system could mitigate the denial of service problem, because voting devices could be disconnected from the network until the attack was over, votes could be stored and transmitted later, or other voting technologies could be used. All types of Internet voting are at risk from malicious software attacks. Remote voting is riskiest; in poll-site and potentially kiosk voting, in which the voting equipment is under the control of election officials, the danger of such attacks is reduced, although not eliminated. Poll-site voting does permit remedies that are not available with remote voting (e.g., controlling the computers used for voting, disconnecting machines from the network if an attack or other disruption occurs, and offering alternative means of voting); some of these remedies would also be available for kiosk voting. Other remedies for all types of Internet voting would include measures such as system redundancies and backup systems; contingency plans would also need to be designed into any Internet voting system.
Internet voting systems face greater security challenges than other Internet systems, because voting requires more stringent controls than other electronic transactions. In particular, elections could not tolerate the level of fraud that occurs in other electronic transactions, such as on-line banking and commerce. (One study reported that 6 million Internet users claimed that they had been victimized by credit-card-related fraud in e-commerce transactions.) Compounding the problem of fraud for Internet voting is a security requirement that is unique among on-line transactions: ballot secrecy. Under current election laws, the requirement for ballot secrecy prevents election systems from associating voters with their ballots or providing confirmation of how they voted. As a result, audit trails in public elections are specifically designed not to associate the voter with a ballot; for Internet voting, this would mean that voters could not be issued electronic receipts confirming that their votes were cast as they intended. In contrast, in both e-commerce and on-line banking, receipts providing transaction details for verification are routinely used to protect consumers from error. To date, there is no way to authenticate every voter’s identity on-line. This raises the problem of devising means to ensure that electronic ballots are not cast by individuals who are not registered to vote, who are ineligible to vote, or who have already voted (whether on-line or by other means). Although this problem is mostly avoided with the poll-site approach to Internet voting, it emerges with any system in which voting takes place at sites that are not monitored by election officials. It is generally agreed that system security is the biggest obstacle to Internet voting. In view of the Internet’s multiple vulnerabilities, security experts question whether the Internet is ready to offer the level of security necessary to ensure the integrity of an election. Two experts assert that the Internet can never be used for secure elections because the Internet, which was designed to facilitate information access and sharing, is inherently insecure. The proposals for poll-site and kiosk Internet voting, in which voting equipment is under the control of election officials, are largely motivated by the desire to avoid some of the security problems associated with remote Internet voting. Some experts believe that security mechanisms may one day evolve to the point that they could form the framework for secure Internet voting solutions. In our interviews with several Internet voting vendors, one vendor stated that its product had adequate security measures in place now to make it possible to conduct a secure public election with remote voting over the Internet. However, some security experts dispute this statement, pointing out that security breaches are experienced every day by the most technologically sophisticated companies in our country. Most technology experts agree that today no organization is immune from security breaches over the Internet. The vendors that we contacted are exploring solutions to these challenging security issues. Like any security system, these solutions will involve design trade-offs between the ease with which voters can use the system and the protection it affords, as well as between protection and cost.
Because our nation’s election system has rigorous security requirements, the expectation is that considerable complexity and cost would be introduced by whatever solution is devised. In general, the election community agrees that remote Internet voting is not now practical; a few suggest that it may never be. Most agree that Internet voting at designated poll sites is feasible; although the security issues are still significant, technological and procedural solutions could probably be devised to allow Internet voting at poll sites.

The accessibility of the polls is fundamental to the right to vote. All eligible voters, including those with disabilities, should have equal access to voting, and election systems should be easy for all voters to use. The ease of use aspect of accessibility is important not only to minimize voter confusion and thus error, but also because voting technology that is easy to use is more likely to capture the intent of the voter. Election systems should strive to minimize the opportunities for errors that invalidate or misdirect votes. In the context of Internet voting, the digital divide takes on particular importance. If access to the Internet continues to be divided along socioeconomic lines, remote Internet voting would likely benefit only the more privileged classes in American society. For voting, the need to minimize the effect of socioeconomic divisions is particularly pressing, because it is a fundamental principle of American democracy that elections should be free and fair. Any system that is perceived to offer unfair advantage to certain classes of people could undermine public confidence in elections and in the governments they produce.

As we have reported, Internet voting presents increased participation opportunities for voters with disabilities as well as implementation challenges. Because Web software can be accessible to voters with disabilities, Internet voting could potentially provide voters with disabilities the convenience of voting from remote locations, such as their homes, thereby promoting voter participation. We identified the following as possible advantages of Internet voting for voters with disabilities:

- Voters would have more flexibility to vote when they want and from convenient locations if remote Internet voting were allowed.
- Blind individuals might be able to vote independently with special equipment and a Web site designed to provide universal access.

However, we also reported concerns expressed about the Internet’s security and reliability, as well as the lack of widespread Internet access. Some of the disadvantages include the following:

- Voters who are accustomed to traditional methods might resist the Internet method.
- Voters who lacked a convenient connection to the Internet would not have equal access to voting.
- Blind voters may need special equipment to allow them to use the Internet.

Some disability advocates believe that although alternative voting methods, like Internet voting, do expand options for voters with disabilities, they do not provide the same voting opportunities afforded the general public and thus should not be viewed as permanent solutions to the problem of inaccessible polling places. Moreover, although the Internet is potentially accessible to people with disabilities, they are in fact less likely to have access to the Internet than the general population.
According to the Department of Commerce, people with disabilities are only half as likely to have access to the Internet as those with no disability: about 22 percent of persons with disabilities are on-line, compared to about 42 percent of the general population. And while just under 25 percent of people with no disability have never used a personal computer, close to 60 percent of people with a disability fall into that category. Different types of disabilities also lead to different rates of access. Among people with a disability, those who may require special equipment to use computers (such as those who have impaired vision and problems with manual dexterity) have lower rates of Internet access and are less likely to use a computer regularly than people who need no special equipment, such as those with hearing difficulties. According to Commerce, this difference holds in the aggregate, as well as across age groups. Because experience so far with any kind of public election using Internet technology is limited, knowledge concerning ease of use in Internet elections is similarly limited. However, the information that is available suggests that problems with ease of use would arise in Internet elections as in all voting methods and technologies, and voters who are unfamiliar with computers are most likely to have difficulty. For example, in the nonbinding pilot projects on poll-site Internet voting run by a few jurisdictions in the November 2000 elections, voters chose whether or not to participate, so it is believed that most participants were already familiar with computers and the Internet. Thus, when these voters were surveyed concerning ease of use, most expressed satisfaction. One jurisdiction reported that 100 percent of voters surveyed were satisfied with the ease of the Internet voting implementation; however, another jurisdiction also reported anecdotally that two senior citizens who attempted to use the system became so frustrated with using the computer mouse that they abandoned the attempt within a minute of sitting down. Another of the jurisdictions running a pilot also reported that voters who had never used a computer had difficulties with the keyboard and mouse. Further, even voters who were familiar with computers ran into problems. One jurisdiction reported that several voters did not read directions and had difficulty performing the steps needed for authentication. Also, in one nonbinding primary in which remote Internet voting was tested, several survey respondents commented on their reluctance to download and install the security software, whose function they did not understand. In the DOD Internet absentee ballot pilot, organizers also commented that participants were not familiar with digital certificates. Removing obstacles that prevent or discourage eligible voters from voting is one aspect of accessibility; actively encouraging eligible voters to vote is another. The term generally used in discussions of this aspect of accessibility is “voter participation.” This issue may be as important to the Internet voting debate as security concerns. The goal of increasing accessibility/voter participation has been cited in arguments both for and against remote Internet voting. Some social scientists contend that remote Internet voting would improve the convenience of voting by removing the need for voters to go in person to poll sites at particular hours, and that this convenience would attract voters to exercise their right to vote.
Proponents of remote Internet voting argue that Internet voting would thus increase voter participation, particularly among underrepresented groups, such as young people; people with limited mobility (such as the elderly and the physically challenged); and voters living overseas, including military personnel. On the other hand, in the long term, Internet voting could decrease voter participation, because it could undermine confidence in the security and fairness of the election process. That is, if the electorate lost confidence that Internet voting was secure or grew to believe that Internet voting unfairly favored certain classes of voters, the resulting disillusionment could discourage voters from participating.

Some evidence that increased convenience could increase participation is found in the Oregon experience with mail-in voting, which resulted in significant increases in turnout. In 1995, when Oregon held the nation’s first all-vote-by-mail statewide congressional primary election, turnout in Oregon primaries rose to 52 percent, up from 43 percent previously. In the special election for U.S. Senator that followed these primaries, the turnout was 65 percent, a record for special elections.

For more direct evidence that remote Internet voting could encourage participation, proponents cite the increased turnout seen in the Arizona 2000 Democratic presidential primary. In this primary, which provided for remote voting, the Democratic party saw an increase in voter participation of over 600 percent in comparison with both the 1992 and 1996 presidential primaries. This surge exceeded increases in every state that had Democratic and/or Republican primary elections during that year (although some other states, which did not provide Internet voting, also showed impressive surges: 419 percent in Rhode Island, 260 percent in Massachusetts, and 200 percent in Georgia). A study done at Northern Arizona University concluded that the availability of Internet voting contributed to Arizona’s increase in political participation, along with other factors, such as the contested primary and media attention focusing on the availability of Internet voting. The study further concluded that participation would have been greater if all technical glitches had been anticipated and corrected before voting began (some voters who ran into technical difficulties ended up not casting any ballot at all).

Some suggest that after the novelty of Internet voting dissipates, this increase in participation will subside. They argue that Internet voting is likely to be similar to previous election reforms (such as early voting, motor voter registration, and absentee balloting), which have had very little, if any, effect on participation. Some voting experts have suggested that information and mobilization are much more important than convenience in increasing voter participation.

A slightly different argument is made about the participation of young voters in remote Internet voting. The argument here is that the 18 to 24 age group, which is least likely to vote (according to FEC), is also the age group whose access to and familiarity with the Internet is highest. Thus, that age group, it is argued, would be most likely to respond to the opportunity to use remote Internet voting. For older voters, on the other hand, particularly those with no exposure to computers, Internet voting could actually discourage participation.
The Internet usage rate for people 50 and over was about 30 percent in 2000, compared to about 42 percent for the general population. Thus, poll-site Internet voting (if it were the only option) might be discouraging to such voters, as the anecdotal evidence from the pilot voting projects suggests. Remote and kiosk voting would be even less likely to attract such voters. Even if remote Internet voting did result in increased turnout, many voting experts believe that such an increase would be likely to appear in some voter groups more than others (in particular, those who have Internet-connected computers in their homes). Thus, Internet voting could serve to widen the gap that already exists in the way different socioeconomic groups are represented at the polls. Less privileged groups could be disadvantaged by the new technology.

There is little suggestion that poll-site Internet voting would have a significant effect on accessibility and participation, any more than any other form of voting device. The experience with pilots shows, however, that ease of use issues arise especially for voters unfamiliar with computers and are present even for those who do use computers. Kiosk voting remains a concept only, with no real-world pilots or testing; therefore, few have commented on its accessibility, ease of use, and participation issues. The arguments on accessibility and participation, both those in favor and those against, all concentrate on remote Internet voting. (Ease of use tends to be discussed only in terms of its effect on convenience: if security requirements are too difficult or too much trouble for voters, the convenience of Internet voting is undermined.)

Consensus does not exist on accessibility for those with disabilities. Although remote Internet elections could in theory be made accessible for this group and thus could increase their opportunities to vote, in practice Americans with disabilities are among the groups with the least access to computers and the Internet. On the question of voter participation, there is little evidence, and thus no consensus, that the availability of remote Internet voting would bring substantial increases in voter turnout. However, as there is also little evidence against this proposition, most agree that further study and debate are warranted. Further, whether any increase in participation that resulted from remote Internet voting would benefit the democratic process or only the well-off is likewise in dispute.

Before committing to any new technology, jurisdictions faced with multiple competing needs, investment options, and budget constraints will want to assess the technology’s potential costs and benefits. The ACE Project, whose partners are the International Foundation for Election Systems, the International Institute for Democracy and Electoral Assistance, and the United Nations Department of Economic and Social Affairs, suggests that a cost-benefit analysis should be performed; according to the ACE Project, the analysis should incorporate the elements described in table 16. Little of the information needed for an analysis of the kind described in table 16 is currently available for Internet voting of any type. In the absence of such information, most of the Internet voting debate consists of hypotheses concerning possible outcomes and benefits. Arguments have been offered both that Internet voting would save jurisdictions money and that Internet voting would cost more than current elections.
Some Internet voting proponents have said that remote Internet voting could have the benefit of increasing voter participation and thus decreasing the cost per voter. They contend that remote Internet voting would permit jurisdictions to save money by using fewer printed ballots, storage facilities, polling places, and poll workers. Others, however, have noted that substantial costs would be incurred in implementing security solutions. One security expert has said that the initial investment for Internet voting will be substantial and not affordable to many jurisdictions. Because of the different types of Internet voting being proposed (poll site, kiosk, and remote), it is unclear whether introducing Internet voting technology to the electoral process would increase or decrease costs. Some argue that the cost would depend on the voting expenses and equipment the technology replaced. However, most scenarios envision Internet voting being used concurrently with existing voting methods.

We were unable to acquire information on costs from the jurisdictions involved in the pilots, because in most cases the vendors, not the jurisdictions, incurred the costs. We were able to acquire cost data on the DOD absentee ballot pilot project, but DOD warned against equating its cost with that of owning and operating an Internet voting system. Rather, the project was described as a “proof-of-concept research and development project.” DOD reported that the project cost $6.2 million. In the project, 84 electronic ballots were transmitted over the Internet, and 74 were counted (10 were not counted because paper ballots from those voters had already been delivered and deposited in sealed ballot boxes). DOD provided no cost estimates for a final operational system.

Four of five vendors currently providing Internet voting solutions, however, provided us with information on costs for poll-site voting solutions; only one of these vendors provided us with a cost estimate for remote voting. One Internet vendor estimated that it could host a poll-site Internet voting configuration for approximately $300 to $1,500 per day (including 12 computer voting stations with all associated hardware and software); the vendor did not provide any cost estimates for support services. Moreover, the vendor stated that certain variables would affect this cost estimate, such as the length of the election, level of security, and ballot complexity. Another Internet vendor declined to give a cost estimate because any estimate would depend on a number of variables unique to a jurisdiction, such as its existing technology and networking infrastructure, number of devices required, technical proficiency of in-house staff, and other customer specifications. Two other vendors provided us with “cost per vote” estimates. One vendor stated that it could provide a poll-site Internet voting solution for approximately $3 per vote. This system would provide 4 Internet voting stations (computers) per precinct, each of which could support 300 voters. Another vendor stated that it could provide poll-site Internet voting for $1.70 per voter and remote Internet voting for 10¢ to 50¢ per voter. This vendor was the only one willing to give a cost estimate for remote Internet voting.

Some of the vendors we spoke with stated that an Internet voting solution could be more cost effective if the costs could be spread out and shared. They proposed that jurisdictions could use the computers acquired for Internet voting for other purposes (e.g., in schools) when they were not being used for election functions. However, some security experts have expressed concerns that this approach would compromise the use of the computers for elections, because they might become infected with malicious software.
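The pilot and vendor figures above lend themselves to simple per-ballot arithmetic. The following minimal Python sketch is ours, for illustration only: the dollar figures are those cited above, but treating the DOD pilot total as a per-ballot cost is an assumption, since DOD itself warned against equating pilot cost with the cost of an operational system.

```python
# Minimal sketch of the per-ballot arithmetic behind the figures cited
# above. The DOD numbers come from the proof-of-concept pilot; dividing
# total cost by ballots counted is for illustration only.

def cost_per_ballot(total_cost: float, ballots: int) -> float:
    """Average cost per ballot counted."""
    return total_cost / ballots

# DOD pilot: $6.2 million total, 74 ballots counted -> roughly $84,000 each.
print(f"DOD pilot: ${cost_per_ballot(6_200_000, 74):,.0f} per counted ballot")

# One vendor's poll-site estimate: 4 stations per precinct, each
# supporting 300 voters, at roughly $3 per vote.
precinct_voters = 4 * 300              # precinct capacity at full turnout
print(f"Vendor estimate: ${precinct_voters * 3:,} per precinct at capacity")
```

The contrast between the two lines of output illustrates why DOD cautioned against extrapolating from the pilot: a research project amortized over a few dozen ballots tells little about steady-state operating cost.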
We could arrive at no consensus on costs from the information currently available, beyond the general recognition that potentially sizable up-front infrastructure costs would be incurred. Some experts acknowledge that the Internet and the associated technology are evolving so rapidly that it is difficult to reliably estimate costs at this time. There is likewise no consensus on the suggestion that jurisdictions might mitigate their costs by using equipment acquired for Internet voting for purposes other than elections. Except for DOD’s pilot project, cost information was unavailable for the pilots. As acknowledged by some experts who have commented on this topic, given that most proposals to use Internet technology for voting in the near term envision poll-site voting, and given that most suggestions for possible cost savings envision remote voting, it appears that Internet technology offers no near-term promise of significant cost savings.

In addition to the major issues we have discussed, a number of other issues have been raised in discussions of Internet voting; however, extensive information on these issues is not available, and so we do not address them in detail. For some, discussion has been largely at the level of speculation. Further, some issues cannot be resolved, not only because of the uncertainties about the form of Internet voting, but also because of ongoing rapid changes in information technology. For example, it has been suggested that election officials would need to find new means of communicating with voters (for instance, sending out sample ballots); providing voter assistance; recruiting and training poll workers; identifying polling places (which would have to have Internet connections); storing and maintaining equipment; and designing ballots, among other processes. The times for elections might have to be lengthened (to avoid network traffic problems, to allow time for voters to overcome technical difficulties, and to permit Internet voting systems to recover from disruptions such as system failures or denial of service attacks). The Internet Policy Institute, in its March 2001 Report of the National Workshop on Internet Voting, also points out that “for Internet voting to gain acceptance, new ways of testing, certifying and developing standards for election systems will have to be explored.” No Internet voting equipment and software standards are currently in place, although FEC has released for comment a draft of its voting systems standards, which outlines some Internet voting standards.

Election officials would also have to examine laws concerning elections for their application to Internet voting, and they may find that some need to be changed to allow implementation of such a system. For example, state laws may prescribe certain types of acceptable voting equipment or certain ratios of equipment to voters. Further, election officials might recommend new laws to address the new possibilities for election fraud and improprieties opened up by Internet voting. Examples of such laws would be prohibitions against buying, stealing, selling, or giving away digital signatures for the purpose of fraudulent voting; hacking voting systems or individual votes; interfering with voting systems by reducing or eliminating access to the system; or invading privacy by attacking a ballot or Web site with the intent to examine or change votes.
Some of the issues raised are not unique to Internet voting, but rather are applicable to any kind of electronic, computer-based voting. It is suggested, for example, that the use of computers for voting requires new ways to maintain public confidence in the integrity of the ballot count; traditional confidence measures are not effective for computer-based voting. Trust in electronic voting technology depends on persuading the public to place trust in technical experts. For Internet voting, the trust issue is particularly important, because Internet security threats are significant and well known.

Although the nature and significance of the challenges vary somewhat depending on the type of Internet voting in question (poll site, kiosk, or remote), broad application of Internet voting in general faces several formidable social and technological challenges, which were highlighted and discussed in depth in this chapter. They include:

- providing adequate ballot secrecy and voter privacy safeguards to protect votes from unauthorized disclosure and to protect voters from coercion;
- providing adequate security measures to ensure that the voting system (including related data and resources) is adequately safeguarded against intentional intrusions and inadvertent errors that could disrupt system performance or compromise votes;
- providing equal access to all voters, including persons with disabilities, and making the technology easy to use; and
- ensuring that the technology is a cost-beneficial alternative to existing voting methods, in light of the high technology costs and security requirements, as well as the associated benefits to be derived from such investments.
Events surrounding the 2000 presidential election raised concerns about the reliability of various types of voting equipment, the role of election officials, the disqualification of absentee ballots, and the accuracy of vote counts and recounts. As a result, public officials and various interest groups have proposed reforms to address perceived shortcomings. This report discusses: (1) voter registration; (2) absentee and early voting; (3) election day administration; and (4) vote counts, certification, and recounts.
The Taft-Hartley Act of 1947 established terms for negotiating employee benefits in collectively bargained multiemployer plans and placed certain restrictions on the operation of these plans, including the placement of plan assets in a trust. For example, the law required a collectively bargained plan and its assets to be managed by a joint board of trustees equally representative of management and labor. It further required plan assets to be placed in a trust fund, legally distinct from the union and the employers, for the sole and exclusive benefit of the plan beneficiaries.

In 1974, Congress passed ERISA to protect the interests of participants and beneficiaries covered by private sector employee benefit plans. Title IV of ERISA created PBGC as a U.S. government corporation to provide plan termination insurance for certain defined benefit pension plans that are unable to pay promised benefits. PBGC operates two distinct pension insurance programs, one for multiemployer plans and one for single-employer plans. These programs have separate insurance funds as well as different insurance coverage rules and benefit guarantees. The multiemployer insurance program and PBGC’s day-to-day operations are financed by annual premiums paid by the plans and by investment returns on PBGC’s assets. In turn, PBGC guarantees benefits, within prescribed limits, when a multiemployer plan is insolvent and unable to pay the basic PBGC-guaranteed benefits when due for the plan year.

In 1980, Congress sought to protect worker pensions in multiemployer plans by enacting the Multiemployer Pension Plan Amendments Act (MPPAA). Among other things, MPPAA (1) strengthened funding requirements to help ensure plans accumulate enough assets to pay for promised benefits, and (2) made employers, unless relieved by special provisions, liable for their share of unfunded plan benefits when they withdraw from a multiemployer plan. The amount owed by a withdrawing employer is based upon a proportional share of a plan’s unfunded vested benefits. Liabilities that cannot be collected from a withdrawing employer, for example, one in bankruptcy, are to be “rolled over” and eventually funded by the plan’s remaining employers. These changes were made to discourage employer withdrawals from a plan.

The Pension Protection Act of 2006 (PPA) established new funding and disclosure requirements for multiemployer plans. (See table 1.) PPA requires trustees of plans certified in endangered or critical status to take specific actions to improve the plans’ financial status, such as developing schedules to increase contributions or reduce benefits. Plans certified as endangered must adopt a funding improvement plan, which outlines steps the plan will take to increase the plan’s funded status over a 10-year period or, in some cases, longer. Plans certified as critical must adopt a rehabilitation plan, which outlines actions to enable the plan to cease to be in critical status by the end of a 10-year rehabilitation period; such actions may include reductions in plan expenditures (including plan mergers and consolidations), reductions in future benefit accruals or increases in contributions, if agreed to by the bargaining parties, or any combination of such actions. To assist plans in critical status, PPA amended ERISA to allow these plans to reduce or eliminate adjustable benefits, such as early retirement benefits, post-retirement death benefits, and disability benefits.
In addition, critical status plans are generally exempt from the excise taxes that IRS can assess on plans with funding deficiencies. The funding requirements of PPA took effect just as the nation entered a severe economic recession in December 2007. As a result, Congress enacted the Worker, Retiree, and Employer Recovery Act of 2008 (WRERA) to provide multiemployer plans with temporary relief from some PPA requirements by allowing multiemployer plans to temporarily freeze their funded status at the previous year’s level. The freeze allows plans to delay creation of, or updates to, an existing funding improvement plan or rehabilitation plan, or to postpone other steps required under PPA. WRERA also requires plans to send a notice to all participants and beneficiaries, bargaining parties, PBGC, and the Department of Labor indicating that the election to freeze the status of a plan does not mean that the funded status of the plan has improved. WRERA also provided for a 3-year extension of a plan’s funding improvement or rehabilitation period.

Although both single-employer and multiemployer plans are subject to the rules outlined in Title IV of ERISA, there are several important differences between the plan types that affect the structure and stability of each type of plan. (See table 2.)

The overall number of multiemployer plans insured by PBGC has decreased steadily since the 1980s as a result of plan mergers and terminations. At the same time, the aggregate number of participants, including active and inactive, has continued to rise. (See fig. 1.) The number of participants in multiemployer plans also varies by industry. While PBGC covers workers in all major industrial sectors, the construction trades consistently account for over one-third of all covered multiemployer plan participants, totaling 36 percent in 2008. Other industries, including transportation and manufacturing, account for a smaller portion of participants, roughly 15 percent in 2007. (See fig. 2.)

Multiple data sources that we examined indicate that most multiemployer plans experienced steep declines in their funded status in recent years. According to PBGC, multiemployer plans in aggregate have not been fully funded (at 100 percent or above) since 2000, and their net funded status has declined significantly through 2007, the last date for which PBGC data are available. While plans are considered “safe” if their funded status is at least 80 percent, the aggregate funded status (the percentage of benefits covered by plan assets) of multiemployer plans insured by PBGC declined from 105 percent in 2000 to 69 percent in 2007. (See fig. 3.)

The funded status of multiemployer plans insured by PBGC varies significantly by the industry sector within which a plan operates. According to PBGC data, while all industries generally follow the same trend in funded status, plans in the transportation industry have since 2000 reported a consistently lower funded status than other industries. For example, in 2007, the aggregate funded status for plans in the transportation industry was 63 percent, in contrast to the overall average of 69 percent. Furthermore, in 2000, the last year that the aggregate funded status of all multiemployer plans was over 100 percent, the funded status of multiemployer plans in the retail trade and services industries was about 30 percent higher than the funded status of plans in the transportation industry. (See fig. 4.)
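To make the funded-status and zone terminology above concrete, the following minimal Python sketch computes a funded percentage and applies a simplified zone classification. It is illustrative only: actual PPA zone certification rests on several actuarial tests (such as projected funding deficiencies and projected insolvency), so the single 65 percent critical threshold assumed below is just one of the statutory triggers, while the 80 percent "safe" line follows this report's usage.

```python
# Illustrative sketch of the funded-status arithmetic described above.
# Assumption: a single 65 percent threshold stands in for PPA's critical
# ("red") zone tests, which in practice also involve projections of
# funding deficiencies and insolvency; 80 percent marks the "safe" line.

def funded_status(assets: float, benefit_liabilities: float) -> float:
    """Percentage of promised benefits covered by plan assets."""
    return 100.0 * assets / benefit_liabilities

def zone(pct: float) -> str:
    """Simplified zone classification keyed to the thresholds above."""
    if pct < 65.0:
        return "critical"
    if pct < 80.0:
        return "endangered"
    return "safe"

# The aggregate figures cited above: 105 percent in 2000, 69 percent in 2007.
for year, pct in [(2000, 105.0), (2007, 69.0)]:
    print(year, f"{pct:.0f}% funded ->", zone(pct))
```

Under this simplified rule the 2000 aggregate would classify as safe and the 2007 aggregate as endangered, which matches the trend the data sources above describe.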
The extent of underfunding in multiemployer plans also varies by industry, with the construction and transportation industries accounting for 71 percent of the underfunding of all PBGC-insured multiemployer plans in 2007. Since 2007, the last year for which data are available, aggregate plan funded status has declined further as a result of investment market declines. While the rapid drop in funded status, like the economic conditions that caused it, was severe, experts said that its effect on plans was similar to the effect of the market correction of 2000 to 2002. For example, experts said that some plans, learning from the downturn of 2000 to 2002, took remedial steps in the following years, such as increasing contributions, and likely fared better in the recent recession. In contrast, other plans did not change course after the 2000 to 2002 downturn in the hope that market returns would erase their deficits and are now the plans in the most critical financial condition.

Although funded status had been in a general decline since 2000, the economic recession that began in December 2007 had a negative impact on the funded status of multiemployer plans, according to a number of data sources. Annual actuarial certification data from IRS show that the proportion of multiemployer plans reporting endangered or critical zone status rose significantly, from 23 percent of plans in 2008 to 68 percent of plans in 2009. (See table 3.) Data from PBGC, although incomplete, show a similar downward trend in plan funded status. According to the annual funding notices that PBGC received in the 2009 plan year, nearly all of the 484 plans that filed reported a decrease in funded status from 2008 to 2009. Similarly, the number of plans filing notices of critical or endangered status with PBGC rose from 266 in 2008 to 624 in 2009. Recent industry surveys of multiemployer plans found similar declines in funded status. For example, two industry groups surveying their multiemployer plan membership in 2009 found the same result: 80 percent of plans reported being in critical or endangered zone status, a reversal from 2008, when 80 percent of plans reported being in safe status. Similarly, another industry survey of nearly 400 plans found that the proportion of plans in endangered or critical zone status increased from 24 percent in 2008 to 80 percent in 2009. While these surveys are not comprehensive, they provide further evidence of the negative impact that the economic downturn had on multiemployer plans.

Although it did not affect their underlying funded status, many plans took advantage of the one-time freeze allowed under WRERA. According to IRS data, 745 plans elected to freeze their funded status in either 2008 or 2009, including 373 plans in critical status, 351 in endangered status, and 21 plans in safe status. According to experts, plans took advantage of the freeze option for a variety of reasons. Some plans wanted to give the markets a chance to rebound in order to recoup plan assets lost in the downturn. Others may have chosen the freeze due to the timing of collective bargaining agreements, not wanting to take steps to address funding deficiencies until a new agreement was reached. Still other plans elected the freeze to avoid having to revisit or revise ongoing rehabilitation plans. However, experts also noted that the WRERA freeze option was not helpful for all plans.
Specifically, some plans chose not to freeze in endangered status, preferring to go straight to critical status to give them more options to address their funding deficiencies.

Multiemployer plans continue to face demographic challenges that threaten their long-term financial outlook, including an aging workforce and few opportunities to attract new employers and workers into plans. While the number of total participants in multiemployer plans has slowly increased, the proportion of active participants to retirees and separated vested participants has decreased. (See fig. 5.) For example, multiemployer plans had about 1.6 million fewer active participants in 2007 than in 1980, according to PBGC. With fewer active participants, plans have more difficulty making up funding deficiencies by increasing employers’ funding contributions. Moreover, increases in life expectancy also put pressure on plans, increasing the amount of benefits that a plan will have to pay as retirees live longer.

The future growth of multiemployer plans is largely predicated on growth in collective bargaining. Yet collective bargaining has declined in the United States since the early 1950s. According to recent data from the Bureau of Labor Statistics (BLS), union membership, a proxy for collective bargaining coverage, accounted for 7.2 percent of the U.S. private-sector labor force in 2009. In contrast, in 1990, union membership in the private sector accounted for about 12 percent, and in 1980, about 20 percent. While union membership has trended downward in most industries, it has remained relatively high in the transportation sector. (See fig. 6.) Some experts told us that some industries within which multiemployer plans operate, such as the printing and trucking industries, were already in decline, and that their situation was likely exacerbated by the economic downturn. They also noted that other plans, while facing short-term funding deficiencies, belonged to industries that remained strong, such as the construction and entertainment industries, and were likely to improve their funded status as the economy improved.

PBGC’s ability to assist multiemployer plans is contingent upon its insurance program having sufficient funds to do so. The net position of PBGC’s multiemployer pension insurance program has steadily declined since its highest point in 1998 as program liabilities outpaced asset growth. (See fig. 7.) The program’s net position went negative in 2003, and by 2009 the multiemployer program reported an accumulated deficit of $869 million. The demographic challenges that multiemployer plans face also affect PBGC’s ability to assist them. Plans pay PBGC an annual flat-rate premium per participant. Similarly, contributions by employers in a multiemployer plan are generally paid on a per-work-hour basis. Consequently, declines in the number of plan participants during periods of high unemployment, together with long-standing reductions in collective bargaining, can result in less premium income to PBGC and an increased probability of PBGC-insured multiemployer plans requiring financial assistance.

While PBGC officials told us that they could benefit from having more current data than are available on the Form 5500, they prefer using Form 5500 data on multiemployer plans because these older data are the most comprehensive, the agency’s monitoring system is designed for the form, the data are audited, and most private plans are required by law to file the form on an annual basis.
Officials told us that, given the current Form 5500 reporting schedule, even with the data capture capabilities of the new EFAST2 system, they cannot make up for the time lag in plan filing and, as a result, their monitoring suffers. Officials told us that the time lag made it difficult to detect when a plan was in trouble and what steps could be taken to avert greater problems.

PPA generally requires multiemployer plans to provide more timely financial information to PBGC. (See table 5.) In addition to Form 5500 data, PBGC-insured multiemployer plans are required to submit annual funding notices (AFN) to PBGC. The AFN must include, among other things, the plan’s identifying information and funded percentage for the plan year, a statement of the market value of the plan’s assets as of the end of the year, a statement of the number of retired, separated vested, and active participants under the plan, and whether any plan amendment or scheduled benefit increase or reduction has a material effect on plan liabilities. PBGC officials told us they do not use the AFNs they receive to determine the overall health of the universe of multiemployer plans, but may look at the market valuation of assets on the AFN of a specific plan once it has been identified through Form 5500 data as a potential candidate for the watch list. PBGC officials also told us they do not use the AFN in developing data for model simulation, annual reports, or data books.

PBGC also receives annual notices of critical or endangered status from plans within 30 days of plans certifying their funding zone status with IRS, as required by PPA. PBGC officials said they compare the information in the notices, which alert recipients to the plan’s funding zone status and the reasons for it, with the plan’s Form 5500 filings to determine whether to place a plan on its contingency list. Plans on the list are asked to provide their current actuarial valuations so PBGC can monitor the plans going forward. PBGC officials stated that, while plans are not required to provide this information, they are typically willing to cooperate with the requests.

In addition to providing financial assistance, PBGC can assist troubled plans with technical assistance, facilitate mergers, and partition the benefits of participants orphaned by employers who filed for bankruptcy. Generally, it is up to plans to request these kinds of assistance. Occasionally, PBGC is asked to serve as a facilitator and work with all the parties to a troubled plan to improve the plan’s financial status. Plan administrators can request PBGC’s help to improve the funding status of plans or to provide assistance on other issues. They may contact PBGC’s customer service representatives to obtain assistance on premiums, plan terminations, and general legal questions related to PBGC programs.

PBGC has also assisted in the orderly shutdown of plans. The plans involved in these actions either merged with other multiemployer plans or purchased annuities from private-sector insurers for their beneficiaries. For example, PBGC facilitated the closeout of seven small multiemployer plans in 2010 that were receiving or expected to receive future financial assistance payments from PBGC and identified two additional plans for closeout in the future. According to PBGC, these small plan closeouts are part of an ongoing effort to reduce plan administrative costs borne by PBGC’s multiemployer program. PBGC can also facilitate mergers between two or more multiemployer plans.
According to PBGC officials, PBGC has received notice of 303 mergers since 2000, 5 of which PBGC facilitated by paying $8.5 million from the multiemployer insurance program to the merged plans. Plans considering a merger must request approval from PBGC; such mergers typically involve combining a plan with a low funding level with a plan that has a more favorable asset-to-liability ratio. PBGC officials told us that they carefully consider each merger request to ensure that the merger creates a stronger plan that will sustain operations indefinitely. They further noted that PBGC wanted to be sure that plans that received funds in a facilitated merger did not end up accepting the money only to become a liability to PBGC in the near future, in effect causing PBGC to make loans twice to poorly managed plans.

PBGC can also partition the benefits of certain participants from a financially weak multiemployer plan under certain circumstances. Partition is a statutory mechanism that permits financially healthy employers to maintain a plan by carving out the plan liabilities attributable to participants “orphaned” by employers who filed for bankruptcy. Under ERISA, PBGC has the authority to order the partition of a plan’s orphaned participants either upon its own motion or upon application by the plan sponsor. Once a plan is partitioned, PBGC assumes the liability for paying benefits to the orphaned participants. ERISA specifies four criteria that dictate when PBGC can utilize its partitioning authority. PBGC may order a partition if:

- the plan experiences a substantial reduction in the amount of contributions that has resulted or will result from a case or proceeding under Chapter 11 bankruptcy with respect to an employer;
- the plan is likely to become insolvent;
- contributions will have to be increased significantly in reorganization to meet the minimum contribution requirement and prevent insolvency; and
- partition would significantly reduce the likelihood that the partitioned plan will become insolvent.

As in all multiemployer plans, partitioned participants are subject to ERISA’s multiemployer guaranteed benefit limits. PBGC may order the partition of a plan after notifying plan sponsors and the participants whose vested benefits will be affected by the partition. Since the implementation of MPPAA in 1980, PBGC has partitioned two plans. In the most recent partition, in July 2010, PBGC said it approved the move because, by removing 1,500 orphaned participants from the plan, PBGC was able to delay plan insolvency for at least 6 additional years and preserve full benefits for the approximately 3,700 workers and retirees of firms still contributing to the plan. Without partition, the plan would have become insolvent sooner, and the federal benefit limits would have applied to all its retirees.

The private pension systems in the countries we studied—the Netherlands, Denmark, the United Kingdom, and Canada—support industrywide, employer-based pension plans that share some common attributes with the U.S. multiemployer plan structure. Each of the countries is a member of the Organisation for Economic Co-operation and Development (OECD) and supports a three-pillar pension system that consists of a basic state pension (e.g., similar to Social Security), private employer-based pensions (e.g., single- or multiemployer), and individual retirement savings (e.g., independent retirement accounts).
While each of the countries we studied had a pension system with some unique characteristics, pension officials in some countries told us they faced common short-term and long-term challenges in securing pension benefits for participants, including plan underfunding and an aging workforce.

Three of the four countries that we studied reported they had recently implemented some form of minimum funding requirements for multiemployer plans, but the levels varied by country. Officials we spoke with told us that plans that fell below these funding thresholds were required to submit recovery plans to bring their funding levels back above the minimum. Canada, Denmark, and the Netherlands required plans to be funded at a level of 100 percent or above. The United Kingdom recently suspended its minimum funding requirements in favor of plan-specific funding levels, and officials told us regulators still sought to maintain an aggregate funding level of 110 percent. Also, plans in the Netherlands are required to build funding reserves, or buffers, commensurate with the risk associated with their investment policies. Officials at the Dutch Central Bank told us plans must develop buffers for interest rate risk, private equity exposure, and hedge fund exposure.

While the reporting requirements in these countries are not so different from those in the United States, multiemployer plans in some countries submit more frequent plan funding and actuarial reports to regulators. For example, in the Netherlands and Denmark, all plans are required to submit data on a quarterly and annual basis, and plans in recovery status had, in some countries, additional reporting requirements. (See table 8.) Some countries require plans to submit plan data electronically, which officials said allowed for real-time monitoring and transparency. For example, Danish plans are required to report market valuations of their assets and liabilities, which regulators said allowed them to identify plans at risk through market surveillance based on up-to-date information. The regulators told us they can take action as soon as a plan is in trouble and proactively notify plans of impending financial problems. In the United Kingdom, plan trustees are required to update their financial information electronically and can do so in real time on the regulator’s information system. In the Netherlands, the Dutch Central Bank updates the aggregate funded status of plans on a quarterly basis and makes this information available on its public Web site.

These countries all monitored multiemployer plans for compliance and to determine plan funding and solvency risk. While the Netherlands and Denmark monitored the solvency risks of all plans, officials in both countries told us they also plan to develop a risk-based monitoring strategy, such as that used in the United Kingdom and Canada, which would target monitoring to the plans that represent the greatest risk. Officials in these countries also had varying degrees of authority to intervene in the operations of multiemployer plans. (See table 9.)

Multiemployer plans in the countries we studied have a number of options to improve and maintain their funded status, and a specific length of time allotted for recovery. (See table 10.) Some of the countries allow plans to increase contributions and reduce the rate of benefit accruals. In Denmark, regulators told us that plans that fail stress tests must adjust investments to resolve funding deficiencies within 6 months.
The Netherlands, the United Kingdom, and Canada have longer recovery periods, and the Netherlands and Canada allow plans to reduce accrued benefits, including the benefits of retirees, although this step is seen as a measure of last resort. Plans may also seek out mergers to reduce administrative costs and indirectly help preserve their funded status. Most of the countries we studied allow plan mergers, but some officials told us that they were infrequent. Canadian officials told us mergers of multiemployer plans would be difficult because plan membership is based on profession, and multiemployer plans do not want to lose control of plan policy and governance, even if the plan would be financially better off after a merger. When full mergers do occur in Canada, they said, they tend to result from a merger of unions. In the Netherlands, mergers occur, but the industry identification of multiemployer plans limits merger activity to plans in the same industry. In Denmark, single-employer plans can choose to merge with multiemployer plans, even if the participants are not affiliated with the plan’s employer organization, to take advantage of lower administrative fees. In the United Kingdom, there is a large trust that combines many single-employer and several multiemployer plans, benefiting all participating plans with lower costs and better investment opportunities.

PPA requires multiemployer plans to file numerous notices with EBSA, IRS, and PBGC regarding their funded status. Our review of filings received by the three agencies found that not all plans are complying with these requirements. Moreover, we found that plans that did comply filed notices that varied in form and content. While current reporting requirements, if followed, would provide federal agencies with the data needed to monitor plan health, the current multiemployer plan framework requires plans to submit these data in a fractured format to three different agencies that do not share the information they receive. As a result, federal officials told us that their agencies are limited in their ability to assess the current and recent health of multiemployer plans.

Plans are required to certify their funding zone status each year with IRS, but they are not required to include their current funded percentage in this report, which would be helpful to officials in determining the gravity of plans’ funding deficiencies. Also, IRS officials told us that some plans provided a brief letter identifying the zone status, while other plans submitted lengthy reports that detailed the assumptions and calculations used to determine the plan’s zone status. IRS officials told us that, while some plans provided their funded percentage in the certification notice, the agency did not track this information or share the list of certifying plans with any other federal agency.

Within 30 days of certifying their funding zone status with IRS, PPA requires plans in critical or endangered status to submit a notice of their status to PBGC and EBSA, among others. In our review of data from 2008 and 2009 obtained from the three agencies, we found large discrepancies between the number of plans certifying with IRS and the number of plans submitting notices of critical or endangered status to PBGC and EBSA. For example, IRS data show that 461 of the 1,331 plans certified in critical status in 2009, but only 132 plans provided notices of their certified status to EBSA.
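Mechanically, the cross-agency reconciliation that would surface such discrepancies is straightforward once filings are captured in structured form. The sketch below is our illustration, not a description of any agency system, and the plan identifiers are invented: it simply compares a set of plans that certified critical status with IRS against the notices EBSA received.

```python
# Hypothetical illustration of cross-agency reconciliation: compare plans
# certified in critical status with IRS against status notices filed with
# EBSA, and flag apparent non-filers. All plan identifiers are invented.

irs_critical = {"PLAN-0001", "PLAN-0002", "PLAN-0003", "PLAN-0004"}
ebsa_notices = {"PLAN-0002", "PLAN-0004"}

missing = sorted(irs_critical - ebsa_notices)
print(f"{len(missing)} plan(s) certified critical with IRS but filed no "
      "notice with EBSA:")
for plan_id in missing:
    print("  ", plan_id)
```

A set difference of this kind is trivial to compute; the obstacle described in this report is not the comparison itself but that the agencies neither share the underlying lists nor maintain a common identifier for the universe of multiemployer plans.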
Similarly, some plans that elected to freeze their current funding status did not file notices of this election with PBGC and EBSA, as required. (See table 11.)

In addition, for plan years beginning after December 31, 2007, all defined benefit plans are required to provide an additional notice, the annual funding notice, to PBGC, plan participants and beneficiaries, labor organizations, and, in the case of multiemployer plans, also to each participating employer. Like the notice of critical or endangered status, this notice must be provided within 120 days following the end of each plan year. EBSA can assess a civil penalty of $110 per day per participant against the plan administrator for failure to submit the plan’s annual funding notice to participants and beneficiaries. Among other things, the AFN provides recent information on a plan’s funded status, actuarial valuations of assets and liabilities, market valuations of assets, and a plan’s asset allocation. According to PBGC officials, only half of multiemployer plans filed these notices in the 2008 plan year, and many plans had failed to file notices for the 2009 plan year within the 120-day statutory timeline. PBGC officials could not explain why plans failed to file the notices with PBGC. But while EBSA can assess a civil penalty for failure to submit an annual funding notice, PBGC officials did not share any information on plans’ annual funding notices with EBSA, making it unlikely that EBSA would have the information necessary to assess such a penalty.

Industry experts told us that the reporting requirements for multiemployer plans are confusing and duplicative, and that further consolidation of notices is needed. They noted that plan reporting requirements have increased significantly and become burdensome for plans to administer, with each notice having a different recipient and due date. Even if participant notices were more clearly written, one expert said, there is nothing that an individual can do to address critical or endangered status, because benefits are collectively bargained. Moreover, participants do not need multiple notices each time an event occurs that changes the long-term projections of their plan’s standing.

The statutory and regulatory framework guiding multiemployer plans is not structured to assist troubled plans, limits the actions agencies can take, and promotes little interaction among the federal agencies that bear joint responsibility for monitoring and assisting these plans and their participants. We found that EBSA, IRS, and PBGC do not work together to share information received from plans and cannot determine whether all multiemployer plans are meeting applicable legal requirements.

First, PBGC’s involvement with multiemployer plans is mostly limited to the plans on its contingency list that are already insolvent and receiving financial assistance or that pose a potential risk of future claims against PBGC. PBGC has authority to interact with plans on an ongoing basis, but has done so infrequently to date. For example, in recent testimony before Congress, an EBSA official stated that one large multiemployer plan, the Central States Southeast and Southwest Pension Fund, did not meet the criteria for partition, despite having $2.1 billion in unfunded liabilities in 2009 and reportedly paying over 40 cents of every dollar to beneficiaries whose employers left the plan without covering their obligations.
In fact, PBGC has used its partition authority only twice in its history and has facilitated five plan mergers since 2000. Experts told us that plans could benefit from a greater level of PBGC interaction and a more flexible application of the tools available to PBGC. (See table 12.)

Second, the Employee Plans Compliance Unit (EPCU) at IRS, which is responsible for verifying that all multiemployer plans file annual actuarial certifications of funded status and confirming that the certifications are filed in a complete and timely manner, does not have the capacity to identify plans that fail to file or to verify that all plans submitting certifications are indeed multiemployer plans. IRS officials told us they could not determine whether all multiemployer plans filed their actuarial certifications because they did not know the universe of multiemployer plans. Specifically, they said they did not have a complete list of all multiemployer plans, in part because the data they use are taken from the plans’ Form 5500 filings, which included plans that had identified themselves as multiemployer plans but, judging from the plan name, were not (e.g., dental offices or 401(k) plans). Officials told us they hoped to get a more accurate data set in the future, but it would take several years before this would happen.

EPCU officials told us plan filings vary widely in scope and length. For example, some plans send a brief memo indicating their funding zone status; others send a long report detailing each of the actuarial assumptions used to determine the zone status. IRS officials told us some plans provided funded status as a percentage, while others reported only zone status. IRS currently collects paper copies of the annual certifications. Officials said the annual certification notices require the same kind of information as the WRERA notices, which can be filled out and filed electronically on the IRS Web site. In March 2008, IRS proposed guidance to plans on the preferred format or content for the annual certification notices, but this guidance has not been finalized.

EPCU officials told us that they did not interact with either EBSA or PBGC with regard to the filing of certification notices. They said that in the past they sent a few short summaries about the funding zone status certifications to IRS headquarters, but did not interact directly with EBSA or PBGC officials regarding the annual certifications. Moreover, IRS did not make certification data available to either EBSA or PBGC so that they could reconcile the critical or endangered status notices with the number of certifications to determine whether plans were complying with the law. EPCU officials said it would be beneficial for them to have direct contact with other federal agencies to share information on multiemployer plans.

Third, EBSA, which is responsible for assessing civil penalties for reporting violations against plans that do not file annual actuarial certifications of funded zone status, does not receive or actively seek out information from PBGC and IRS to enforce this penalty. PPA also requires plans that certify their funding zone status as either critical or endangered to send notices of endangered or critical funding status to EBSA, among others, but, unlike the annual certification of a plan’s status, there are no penalties associated with the failure to furnish endangered or critical status notices. EBSA’s Office of Participant Assistance scans the notices it receives and posts them on its Web site.
Officials from EBSA’s Office of Regulations and Interpretations and the Office of Enforcement said they make no attempt to reconcile the status notices with the certifications filed with IRS. They said they had no interaction with IRS officials on these matters, but noted that it would be useful if IRS were to share certification data with EBSA.

The pension experts and plan practitioners whom we interviewed identified several elements of the multiemployer framework that were restrictive and had the potential to affect plans’ ability to keep the pension promise to beneficiaries. These experts noted that each of these elements had unintended consequences made evident by the recent economic downturn. (See table 13.)

For decades, multiemployer plans have secured and provided an uninterrupted stream of pension benefits to millions of U.S. workers and retirees. Through collective bargaining, employers and employees worked to maintain their pension benefits despite changing economic climates and financial challenges. As a result, the vast majority of plans have remained solvent, and relatively few plans have made claims for financial assistance from PBGC’s insurance program since its inception in 1980. However, the recent economic downturn revealed that multiemployer plans, like most pension plans, were vulnerable to sudden economic changes and had few options to respond to the funding challenges highlighted by these economic conditions. The result was a steep decline in the funded status of most multiemployer plans—now below 70 percent in aggregate. In the short term, the majority of plans will have to make difficult decisions to improve their funding and protect against future declines.

The multiemployer plan universe represents diverse groups of employers, participants, and industries, some of which may be better prepared than others to meet their future funding obligations. While some plans may be able to improve their funded status as the economy improves, plans in the worst condition may find that the current options of increasing employer contributions or reducing benefit accruals are insufficient to overcome the funding and demographic challenges they face. For these plans, the combination of the effects of the economic downturn, the decline in collective bargaining, the withdrawal of contributing employers, and an aging workforce has likely accelerated their path to insolvency. Without additional options to address their underfunding, or new employers joining the plans to replenish the contributions, many plans may find themselves at greater risk of insolvency and more likely to need PBGC financial assistance sooner rather than later. Such a situation would put additional stress on PBGC’s insurance program, which, already in deficit, can ill afford it.

The current statutory and regulatory framework for multiemployer plans is not structured to assist troubled plans on an ongoing basis. PBGC, Labor, and IRS are all required by law to collect various funding data from plans, and these data are often duplicative. Moreover, these agencies are not making full use of these data to mitigate the risks to participants or to enforce plan discipline. While PBGC monitors plans on an ongoing basis, it focuses on the short-term risks to its trust funds rather than on the long-term risks to participants or the impact on their benefits if their plans cannot pay the benefits they promised. There are other approaches to consider.
While some practices in the countries we studied, such as mandatory employer participation, would not be feasible in the U.S. context, others may have more ready application for addressing some challenges that U.S. multiemployer plans face. For example, the countries that we studied had pension regulators that interacted with plans on a frequent basis, collected timely and detailed plan information, provided a range of tools to plans to address plan underfunding, and made information on the funded status of plans available to the public. Yet there is no one-size-fits-all solution. For example, some plans’ greatest challenges may be their aging workforce or vulnerability to economic volatility, while others may face challenges inherent to the industries and geographical regions they serve.

Without more timely and accurate information on plan health, PBGC and other federal agencies can do little to help plans respond to circumstances like the ones they experienced in the recent economic downturn. But collecting this information is not enough. The agencies must also incorporate this information into their monitoring and oversight efforts and use the most current data to inform their policies and risk assessments. To do this, the agencies responsible for multiemployer plans must work together to provide greater security for multiemployer plans, which for decades have limited the exposure of PBGC and the taxpayer.

To provide greater transparency of the current status of multiemployer plans, assist federal monitoring efforts, and help plans address their funding deficiencies, Congress should consider:

- consolidating the annual funding notices and the PPA notices of critical or endangered status to eliminate duplicative reporting requirements; and
- requiring IRS, EBSA, and PBGC to establish a shared database containing all information received from multiemployer plans.

1. To improve the quality of information and oversight of multiemployer plans, we recommend that EBSA, IRS, and PBGC amend existing interagency memoranda of understanding to address, among other things, the agencies’ plans for sharing the information they collect on multiemployer plans on an ongoing basis. Specifically, the agencies should address how they will share data to identify the universe of multiemployer plans; to reconcile similar information received by each agency; and to identify possible reporting compliance issues and take appropriate enforcement action. The agencies should revisit this agreement periodically to determine whether modifications are required to ensure that each agency is able to carry out its responsibilities.

2. To collect more useful information from plans, the Secretary of the Treasury should direct the IRS to develop a standardized electronic form for annual certifications that requires plans to submit their funded percentage.

3. To implement better and more effective oversight practices, the Director of the PBGC should develop a more proactive approach to monitoring multiemployer plans, such as assigning case managers to work with the plans that pose the greatest risk to the agency and providing non-financial assistance to troubled plans on an ongoing basis.

We provided a draft of this report to the Secretary of Labor, the Secretary of the Treasury, and the Director of PBGC for review and comment. Each agency provided us with written comments, which we reprinted in appendixes II, III, and IV of this report.
In responding to the draft report, the agencies acknowledged the vital role of these plans in providing retirement security to millions of U.S. workers and retirees. PBGC further noted that the agency has limited information to analyze the health of multiemployer plans, and that additional information is needed to monitor plan health. The three agencies also generally agreed with our recommendations to improve interagency information sharing and to take steps to acquire more current and accurate data on the status of multiemployer plans. The agencies noted, however, that in their view a new interagency memorandum of understanding (MOU) was unnecessary. The Department of the Treasury highlighted actions that the agency currently takes to coordinate with the other agencies. The Department of Labor provided an updated status of the actions that the agency has taken with regard to multiemployer plans. For example, EBSA said it recently initiated contact with IRS to begin work on reconciling certain multiemployer data. IRS and PBGC further stated that memoranda were already in place that could be amended to allow for better information sharing. While we are encouraged by these developments, we do not believe that separate arrangements among agencies will produce the kind of interagency cooperation needed to facilitate information sharing and effective ongoing monitoring of the health of multiemployer plans. Therefore, we continue to believe that, in order to foster meaningful interagency coordination, the agencies should either amend existing agreements or enter into new ones, as we are recommending. EBSA and PBGC also provided technical comments, which we incorporated in this report, as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to relevant congressional committees, PBGC, the Secretary of Labor, the Secretary of the Treasury, and other interested parties. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix V. We were asked to answer the following research questions: (1) What is the current status of the nation's multiemployer pension plans? (2) What steps does PBGC take to monitor the health of these plans? (3) What is the structure of multiemployer plans in other countries? (4) What statutory and regulatory changes, if any, are needed to help plans to continue to provide participants with the benefits due to them? To identify the current status of the nation's multiemployer pension plans, we interviewed officials and analyzed data and documents from PBGC, the Department of Labor's Employee Benefits Security Administration (EBSA), and the Department of the Treasury's Internal Revenue Service (IRS), and reviewed relevant industry studies and literature on multiemployer plans. To determine the recent funding status of multiemployer plans, we analyzed historical summary data published in PBGC's annual data books and summary data from IRS on the annual notices of funding status certification submitted in 2008 and 2009.
To corroborate these data, we analyzed notices of critical and endangered status and WRERA notices sent to PBGC and EBSA and published on EBSA's Web site. To identify the demographics of multiemployer plans, including the number of plans, number of participants, and industry concentration of plans, we analyzed data published in PBGC's annual reports and data books. To determine private-sector union affiliation, we analyzed data from the Bureau of Labor Statistics. We assessed the reliability of the selected data that we used from these sources by comparing the number of plans filing reports to federal agencies. We determined that, although the data were incomplete and had certain limitations, which we present in our report, they were sufficiently reliable for the purpose of making clear which federal agencies collect data and showing how these data are similar and how they differ. To supplement this quantitative analysis, we interviewed EBSA, IRS, and PBGC officials and a diverse range of pension experts and multiemployer plan practitioners. We selected experts who had published on multiemployer plans or whose names were referred to us by other interviewees; in all, we spoke to 48 experts. We analyzed their responses on the current status of plans, the impact of the recent recession, and the future outlook of multiemployer plans. As appropriate, we reviewed relevant federal laws and regulations that pertain to multiemployer plans. To determine the steps PBGC takes to monitor the health of multiemployer plans, we interviewed PBGC officials and reviewed documentation on PBGC's multiemployer plan monitoring, modeling, and assistance policies and procedures. We also reviewed relevant statutory and PBGC regulatory requirements with regard to multiemployer plans. To understand the structure of multiemployer plans in other countries, we reviewed four countries selected because of their comparable multiemployer plan frameworks—the Netherlands, Denmark, the United Kingdom, and Canada—and interviewed government officials, plan administrators and trustees, employer and union representatives, and other pension experts. We selected these countries after completing an initial review of employer-sponsored pension plan designs in Organisation for Economic Co-operation and Development (OECD) countries. We focused on OECD countries in order to increase our opportunity to identify practices used in countries with well-developed capital markets and regulatory regimes comparable, if not always similar, to those of the United States. We acknowledge that there may be relevant plan design features from a non-OECD country that we did not address in this report. Although we did not independently analyze each country's laws and regulations, we collected information about each country's multiemployer plan structure and interviewed government officials and pension experts in each country. We relied on the expertise of staff in the U.S. State Department to identify potential interviewees in these countries and to schedule the interviews. We did not review the laws or requirements of those foreign countries mentioned in this report. Rather, we relied upon the descriptions and materials furnished by officials and experts of these countries.
To identify what statutory and regulatory changes, if any, are needed to help plans continue to provide participants with the benefits due to them, we reviewed pension literature and interviewed a variety of experts on multiemployer plans, including officials from EBSA, IRS, and PBGC; pension experts; and practitioners representing a range of industries and plan sizes. As noted above, we selected experts who had published on multiemployer plans or whose names were referred to us by other interviewees; in all, we spoke to 48 experts. We conducted this performance audit from September 2009 through October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Individuals making key contributions to this report include David R. Lehrer, Assistant Director; Jonathan S. McMurray, Analyst-in-Charge; Robert Campbell; and Thanh Lu. Joseph Applebaum, Susan Aschoff, and Roger J. Thomas also provided valuable assistance.
Thirty years ago Congress enacted protections to ensure that participants in multiemployer pension plans received their promised benefits. These defined benefit plans are created by collective bargaining agreements covering more than one employer. Today, these plans provide pension coverage to over 10.4 million participants in approximately 1,500 multiemployer plans insured by the Pension Benefit Guaranty Corporation (PBGC). In this report, GAO examines (1) the current status of the nation's multiemployer plans; (2) steps PBGC takes to monitor the health of these plans; (3) the structure of multiemployer plans in other countries; and (4) statutory and regulatory changes that could help plans provide participants with the benefits they are due. To address these questions, GAO analyzed government and industry data and interviewed government officials, pension experts, and plan practitioners in the United States, the Netherlands, Denmark, the United Kingdom, and Canada. Most multiemployer plans report large funding shortfalls and face an uncertain future. U.S. multiemployer plans have not been fully funded in aggregate since 2000, and the recent economic recession had a severely negative impact on the funded status of multiemployer plans. Annual data from the Internal Revenue Service (IRS) show that the proportion of multiemployer plans less than 80 percent funded rose from 23 percent of plans in 2008 to 68 percent of plans in 2009. While some plans may be able to improve their funded status as the economy improves, many plans will continue to face demographic challenges that threaten their long-term financial outlook, including an aging workforce and few opportunities to attract new employers and workers into plans. PBGC monitors the health of multiemployer plans, but can provide little assistance to troubled plans until they become insolvent, at which point PBGC provides loans to allow insolvent plans to continue paying participant benefits at the guaranteed level (currently $12,870 per year for 30 years of employment). PBGC receives more current information on plan status, but uses older plan data to determine which plans are at the greatest risk of insolvency, because these data are audited and comprehensive and because PBGC's monitoring system was designed for them. The private pension systems in the countries GAO studied face short-term and long-term challenges similar to those that U.S. multiemployer plans currently face, including plan funding deficiencies and an aging workforce. The plans in these countries are subject to a range of funding, reporting, and regulatory requirements that require plans to interact frequently with pension regulators. Multiemployer plans in these countries have a number of tools available to improve and maintain their funded status, such as increasing contributions and reducing the rate of benefit accruals. The statutory and regulatory framework for multiemployer plans is not structured to assist plans on an ongoing basis and promotes little interaction among the federal agencies responsible for monitoring and assisting plans and safeguarding participant benefits. The lack of timely and accurate information and interagency collaboration hampers efforts to monitor and assist plans, and to enforce plan requirements. The recent economic downturn revealed that these plans, like most pension plans, are vulnerable to rapid changes in their funded status.
Plans in the worst condition may find that the options of increasing employer contributions or reducing benefits are insufficient to address their underfunding and demographic challenges. For these plans, the effects of the economic downturn, declines in collective bargaining, the withdrawal of contributing employers, and an aging workforce will likely increase their risk of insolvency. Without additional options to address plan underfunding or to attract new employers to contribute to plans, plans may be more likely to require financial assistance from PBGC. Additional claims would further strain PBGC's insurance program, which, already in deficit, can ill afford them. GAO is asking Congress to consider ways to eliminate duplicative reporting requirements and establish a shared database. GAO is also recommending that PBGC, IRS, and Labor work together to improve data collection and monitoring efforts. In commenting on a draft of this report, the agencies generally agreed to improve their coordination efforts.
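As a rough illustration of the guarantee level cited above, the figure can be reproduced from the statutory multiemployer guarantee formula, which we understand to be 100 percent of the first $11 of a plan's monthly benefit accrual rate plus 75 percent of the next $33, per year of credited service. The computation below is a sketch for a participant with 30 years of service at or above the capped accrual rate, not an official PBGC calculation.

```latex
% Worked illustration of the maximum multiemployer guarantee
% (assumed statutory formula: 100% of the first $11 of the monthly
% benefit accrual rate plus 75% of the next $33, per year of service).
\[
\underbrace{\$11 + 0.75 \times \$33}_{\text{\$35.75 per month, per year of service}}
\;\times\; 12 \ \text{months} \;\times\; 30 \ \text{years} \;=\; \$12{,}870 \ \text{per year}
\]
```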
Advances in the use of IT and the Internet are continuing to change the way that federal agencies communicate, use, and disseminate information; deliver services; and conduct business. For example, electronic government (e-government) has the potential to help build better relationships between government and the public by facilitating timely and efficient interaction with citizens. To help the agencies more effectively manage IT, the Congress has established a statutory framework of requirements and roles and responsibilities relating to information and technology management. Nevertheless, the agencies face significant challenges in effectively planning for and managing their IT. Such challenges can be overcome through the use of a systematic and robust management approach that addresses critical elements, such as IT strategic planning and investment management. The Congress established a statutory framework to help address the information and technology management challenges that agencies face. Under this framework, agencies are accountable for effectively and efficiently developing, acquiring, and using IT in their organizations. In particular, the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996 require agency heads, acting through agency CIOs, to, among other things, (1) better link their IT planning and investment decisions to program missions and goals; (2) develop and maintain a strategic IRM plan that describes how IRM activities help accomplish agency missions; (3) develop and maintain an ongoing process to establish goals for improving IRM's contribution to program productivity, efficiency, and effectiveness; methods for measuring progress toward these goals; and clear roles and responsibilities for achieving these goals; (4) develop and implement a sound IT architecture; (5) implement and enforce IT management policies, procedures, standards, and guidelines; (6) establish policies and procedures for ensuring that IT systems provide reliable, consistent, and timely financial or program performance data; and (7) implement and enforce applicable policies, procedures, standards, and guidelines on privacy, security, disclosure, and information sharing. Moreover, under the government's current legislative framework, OMB has important responsibilities for providing direction on governmentwide information and technology management and overseeing agency activities in these areas. Among OMB's responsibilities are (1) ensuring agency integration of IRM plans, program plans, and budgets for the acquisition and use of IT and the efficiency and effectiveness of interagency IT initiatives; (2) developing and maintaining a governmentwide strategic IRM plan; (3) developing, as part of the budget process, a mechanism for analyzing, tracking, and evaluating the risks and results of all major capital investments made by an executive agency for information systems; (4) directing and overseeing the implementation of policy, principles, standards, and guidelines for the dissemination of and access to public information; (5) encouraging agency heads to develop and use best practices in IT acquisition; and (6) developing and overseeing the implementation of privacy and security policies, principles, standards, and guidelines. Further, in 2002, the Congress passed, and the President signed, legislation intended to improve the collection, use, and dissemination of government information and to strengthen information security.
Specifically, Public Law 107-347, the E-Government Act of 2002, which was enacted in December 2002, includes provisions to promote the use of the Internet and other information technologies to provide government services electronically. The E-Government Act also contains the Federal Information Security Management Act (FISMA) of 2002, which replaced and strengthened the Government Information Security Reform legislative provisions (commonly referred to as “GISRA”). Among other provisions, FISMA requires each agency, including national security agencies, to (1) establish an agencywide risk-based information security program to be overseen by the agency CIO and ensure that information security is practiced throughout the life cycle of each agency system; and (2) develop, maintain, and annually update an inventory of major information systems (including major national security systems) operated by the agency or under its control. Even with the framework laid out by the Congress, the federal government faces enduring IT challenges. Specifically, in January 2003, we reported on a variety of challenges facing federal agencies in continuing to take advantage of the opportunities presented by IT. Unless and until the challenges outlined below are overcome, federal agencies are unlikely to optimize their use of IT, which can affect an organization’s ability to effectively and efficiently implement its programs and missions. Pursuing opportunities for e-government. E-government offers many opportunities to better serve the public, make government more efficient and effective, and reduce costs. Federal agencies have implemented a wide array of e-government applications, including using the Internet to collect and disseminate information and forms; buy and pay for goods and services; submit bids and proposals; and apply for licenses, grants, and benefits. Although substantial progress has been made, the government has not yet fully reached its potential in this area. Recognizing this, a key element of the President’s Management Agenda is the expansion of e-government to enhance access to information and services, particularly through the Internet. In response, OMB established a task force that selected a strategic set of initiatives to lead this expansion. Our review of the initial planning projects associated with these initiatives found that important aspects—such as collaboration and customer focus—had not been thought out for all of the projects and that major uncertainties in funding and milestones were not uncommon. Accordingly, we recommended that OMB take steps as overseer of the e-government initiatives to reduce the risk that the projects would not meet their objectives. Improving the collection, use, and dissemination of government information. The rapid evolution of IT is creating challenges in managing and preserving electronic records. Complex electronic records are increasingly being created in a decentralized environment and in volumes that make it difficult to organize them and make them accessible. Further, storage media themselves are affected by the dual problems of obsolescence and deterioration. These problems are compounded as computer hardware and application software become obsolete, since they may leave behind electronic records that can no longer be read. Overall responsibility for the government’s electronic records lies with the National Archives and Records Administration (NARA). 
Our past work has shown that while NARA has taken some action to respond to the challenges associated with managing and preserving electronic records, most electronic records remain unscheduled; that is, their value has not been assessed and their disposition has not been determined. In addition, records of historical value were not being identified and provided to NARA; as a result, they were at risk of being lost. We recommended that NARA develop strategies for raising agency management's awareness of the importance of records management and for performing systematic inspections. In July 2003 we testified that although NARA has made progress in addressing these issues, more work remains to be done. The growth of electronic information—as well as the security threats facing our nation—is also highlighting privacy issues. For example, online privacy has emerged as one of the key—and most contentious—issues surrounding the continued evolution of the Internet. In addition, our survey of 25 departments and agencies about their implementation of the Privacy Act—which regulates how federal agencies may use the personal information that individuals supply when obtaining government services or fulfilling obligations—found that a key characteristic of the agencies' 2,400 systems of records was that an estimated 70 percent contained electronic records. Our survey also found that although compliance with Privacy Act provisions and related OMB guidance was generally high in many areas, according to agency reports, it was uneven across the federal government. To improve agency compliance and address issues reported by the agencies, we made recommendations to OMB, such as to direct agencies to correct compliance deficiencies, to monitor agency compliance, and to reassess its guidance. Strengthening information security. Since September 1996, we have reported that poor information security is a high-risk area across the federal government with potentially devastating consequences. Although agencies have taken steps to redesign and strengthen their information system security programs, our analyses of information security at major federal agencies have shown that federal systems were not being adequately protected from computer-based threats. Our latest analyses of audit reports published from October 2001 through October 2002 continue to show significant weaknesses in federal computer systems that put critical operations and assets at risk. In addition, in June 2003 we testified that agencies' fiscal year 2002 reports and evaluations required by GISRA found that many agencies had not implemented security requirements for most of their systems, such as performing risk assessments and testing controls. In addition, the usefulness of agency corrective action plans may be limited when they do not identify all weaknesses or contain realistic completion dates. One of the most serious problems currently facing the government is cyber critical infrastructure protection, that is, protecting the information systems that support the nation's critical infrastructures, such as national defense and power distribution. Since the September 11 attacks, warnings of the potential for terrorist cyber attacks against our critical infrastructures have increased.
In addition, as greater amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation's defense and intelligence communities increasingly rely on commercially available information technology, the likelihood increases that information attacks will threaten vital national interests. Among the critical infrastructure protection challenges the government faces are (1) developing a national critical infrastructure protection strategy, (2) improving analysis and warning capabilities, and (3) improving information sharing on threats and vulnerabilities. For each of the challenges, improvements have been made and continuing efforts are in progress, but much more is needed to address them. In particular, we have identified and made numerous recommendations over the last several years concerning critical infrastructure challenges that still need to be addressed. As a result of our concerns in this area, we have expanded our information security high-risk area to include cyber critical infrastructure protection. Constructing and enforcing sound enterprise architectures. Our experience with federal agencies has shown that attempts to modernize IT environments without blueprints—models simplifying the complexities of how agencies operate today, how they want to operate in the future, and how they will get there—often result in unconstrained investment and systems that are duplicative and ineffective. Enterprise architectures offer such blueprints. Our reports on the federal government's use of enterprise architectures in both February 2002 and November 2003 found that agencies' use of enterprise architectures was a work in progress, with much to be accomplished. Nevertheless, opportunities exist to significantly improve this outlook if OMB were to adopt a governmentwide, structured, and systematic approach to promoting enterprise architecture use, measuring agency progress, and identifying and pursuing governmentwide solutions to common enterprise architecture challenges that agencies face. Accordingly, we made recommendations to OMB to address these areas. Employing IT system and service management practices. Our work and other best-practice research have shown that applying rigorous practices to the acquisition or development of IT systems or the acquisition of IT services improves the likelihood of success. In other words, the quality of IT systems and services is governed largely by the quality of the processes involved in developing or acquiring each. For example, we evaluated several agencies' software development or acquisition processes using models and methods for defining and determining the maturity of organizations' software-intensive systems processes; these models and methods were developed by Carnegie Mellon University's Software Engineering Institute, which is recognized for its expertise in software processes. We found that agencies are not consistently using rigorous or disciplined system management practices. We have made numerous recommendations to agencies to improve their management processes, and they have taken, or plan to take, actions to improve. Regarding IT services acquisition, we identified leading commercial practices for outsourcing IT services that government entities could use to enhance their acquisitions. Using effective agency IT investment management practices. Investments in IT can have a dramatic impact on an organization's performance.
If managed effectively, these investments can vastly improve government performance and accountability. If not, however, they can result in wasteful spending and lost opportunities for improving delivery of services to the public. Using our information technology investment management maturity framework, we evaluated selected agencies and found that while some processes have been put in place to help them effectively manage their planned and ongoing IT investments, more work remains. The interdependencies among these IT management challenges further complicate the government's ability to overcome them. As a result, an organization's inability to successfully address one IT management area can reduce its success in addressing another management function. For example, a critical aspect of implementing effective e-government solutions and developing and deploying major systems development projects is ensuring that robust information security is built into these endeavors early and is periodically revisited. The government's many IT challenges can be addressed by the use of effective planning and execution, which can be achieved, in part, through strategic planning/performance measurement and investment management. For example, strong strategic planning is focused on using IT to help accomplish the highest priority customer needs and mission goals, while effective performance measurement helps determine the success or failure of IT activities. Finally, IT investment management provides a systematic method for minimizing risks while maximizing the return on investments and involves a process for selecting, controlling, and evaluating investments. These processes, too, are interdependent. For example, the investment management process is a principal mechanism to ensure the effective execution of an agency's IT strategic plan. Our objectives were to determine the extent to which federal agencies are following practices associated with key legislative and other requirements for (1) IT strategic planning/performance measurement and (2) IT investment management. To address these objectives, we identified and reviewed major legislative requirements and executive orders pertaining to IT strategic planning, performance measurement, and investment management. Specifically, we reviewed the Paperwork Reduction Act of 1995; the Clinger-Cohen Act of 1996; the E-Government Act of 2002; the Federal Information Security Management Act of 2002; Executive Order 13011, Federal Information Technology; and Executive Order 13103, Computer Software Piracy. Using these requirements and policy and guidance issued by OMB and GAO, we identified 30 IT management practices that (1) could be applied at the enterprise level and (2) were verifiable through documentation and interviews. These 30 practices focused on various critical aspects of IT strategic management, performance measurement, and investment management, including the development of IRM plans, the identification of goals and related measures, and the selection and control of IT investments, respectively. We selected 26 major departments and agencies for our review (23 entities identified in 31 U.S.C. 901 and the 3 military services). At our request, each agency completed a self-assessment on whether and how it had implemented the 30 IT management practices.
We reviewed the completed agency self-assessments and accompanying documentation, including agency and IT strategic plans, agency performance plans and reports required by the Government Performance and Results Act, and IT investment management policy and guidance, and interviewed applicable agency IT officials to corroborate whether the practices were in place. We did not evaluate the effectiveness of agencies' implementation of the practices. For example, we did not review specific IT investments to determine whether they were selected, controlled, and reviewed in accordance with agency policy and guidance. However, we reviewed applicable prior GAO and agency inspector general reports and discussed whether agency policies had been fully implemented with applicable agency IT officials. On the basis of the above information, we assessed whether the practices were in place, using the following definitions: Yes—the practice was in place. Partially—the agency had some, but not all, aspects of the practice in place. Examples of circumstances in which the agency would receive this designation include when (1) some, but not all, of the elements of the practice were in place; (2) the agency documented that it had the information or process in place but it was not in the prescribed form (e.g., in a specific document as required by law or OMB); (3) the agency's documentation was in draft form; or (4) the agency had a policy related to the practice but evidence supported that it had not been completely or consistently implemented. No—the practice was not in place. Not applicable—the practice was not relevant to the agency's particular circumstances. We also collected information from the Department of Homeland Security (DHS) but found that since it had been established so recently, it was too early to judge its IT strategic planning, performance measurement, and investment management. As a result, although we provided information on what DHS was doing with respect to these areas, we did not include it in our assessment. We also interviewed officials from OMB's Office of Information and Regulatory Affairs regarding OMB's role in establishing policies and overseeing agencies' implementation of the identified practices. We performed our work at the agencies' offices in greater Washington, D.C. We conducted our review between April and mid-December 2003 in accordance with generally accepted government auditing standards. The use of IT strategic planning/performance measurement practices is uneven (see fig. 1), which is of concern because a well-defined strategic planning process helps ensure that an agency's IT goals are aligned with that agency's strategic goals. Moreover, establishing performance measures and monitoring actual-versus-expected performance of those measures can help determine whether IT is making a difference in improving performance. Among the practices or elements of practices that agencies largely have in place are those pertaining to establishing goals and performance measures. On the other hand, agencies are less likely to have fully documented their IT strategic planning processes, developed comprehensive IRM plans, linked performance measures to their enterprisewide IT goals, or monitored actual-versus-expected performance for these enterprisewide goals. Agencies cited various reasons, such as the lack of support from agency leadership, for not having strategic planning/performance measurement practices in place.
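To recap the assessment rubric described in the methodology above, a minimal encoding is sketched below. The data structures, agency and practice names, and tallying logic are hypothetical illustrations, not GAO's actual methodology.

```python
# Hypothetical sketch of the four-level assessment rubric described above.
from enum import Enum
from collections import Counter

class Assessment(Enum):
    YES = "practice fully in place"
    PARTIALLY = "some, but not all, aspects in place"
    NO = "practice not in place"
    NOT_APPLICABLE = "not relevant to the agency's circumstances"

# results[agency][practice] -> Assessment, e.g., 26 agencies x 30 practices
def summarize(results: dict) -> Counter:
    """Tally how often each rating appears across all agencies and practices."""
    return Counter(rating for practices in results.values()
                   for rating in practices.values())

example = {
    "Agency A": {"documented strategic planning process": Assessment.PARTIALLY,
                 "strategic IRM plan": Assessment.YES},
}
print(summarize(example))  # -> counts of one PARTIALLY and one YES rating
```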
Without strong strategic management practices, it is less likely that IT is being used to maximize improvement in mission performance. Moreover, without enterprisewide performance measures that are being tracked against actual results, agencies lack critical information about whether their overall IT activities, at a governmentwide cost of billions of dollars annually, are achieving expected goals. Critical aspects of the strategic planning/performance measurement area include documenting the agency's IT strategic planning processes, developing IRM plans, establishing goals, and measuring performance to evaluate whether goals are being met. Although the agencies often have these practices, or elements of these practices, in place, additional work remains, as demonstrated by the following examples: Strategic planning process. Strategic planning defines what an organization seeks to accomplish and identifies the strategies it will use to achieve desired results. A defined strategic planning process allows an agency to clearly articulate its strategic direction and to establish linkages among planning elements such as goals, objectives, and strategies. About half of the agencies fully documented their strategic planning processes. For example, the General Services Administration (GSA) documented an IT governance structure that addresses the roles and responsibilities of various organizations in strategic planning and investment management. In addition, in its IT strategic plan, GSA describes how it developed the plan, including its vision, business-related priorities, and goals. In contrast, the Department of Agriculture has not completely documented its IT strategic planning process or integrated its IT management operations and decisions with other agency processes. According to Agriculture IT officials, the department's ongoing budget and performance integration initiative is expected to result in a more clearly defined and integrated IT strategic management planning process. Such a process provides the essential foundation for ensuring that IT resources are effectively managed. Strategic IRM plans. The Paperwork Reduction Act requires that agencies indicate in strategic IRM plans how they are applying information resources to improve the productivity, efficiency, and effectiveness of government programs. An important element of a strategic plan is that it presents an integrated system of high-level decisions that are reached through a formal, visible process. The plan is thus an effective tool with which to communicate the mission and direction to stakeholders. In addition, a strategic IRM plan that communicates a clear and comprehensive vision for how the agency will use information resources to improve agency performance is important because IRM encompasses virtually all aspects of an agency's information activities. Although the Paperwork Reduction Act also requires agencies to develop IRM plans in accordance with OMB's guidance, OMB does not provide cohesive guidance on the specific contents of IRM plans. OMB Circular A-130 directs that agencies have IRM plans that support agency strategic plans, provide a description of how IRM helps accomplish agency missions, and ensure that IRM decisions are integrated with organizational planning, budgets, procurement, financial management, human resources management, and program decisions. However, Circular A-130 does not provide overall guidance on the plan's contents.
As a result, although agencies generally provided OMB with a variety of planning documents to meet its requirement that they submit an IRM plan, these plans were generally limited to IT strategic or e-government issues and did not address other elements of IRM, as defined by the Paperwork Reduction Act. Specifically, these plans generally include individual IT projects and initiatives, security, and enterprise architecture elements but do not often address other information functions, such as information collection, records management, and privacy, or the coordinated management of all information functions. OMB IT staff agreed that the agency has not set forth guidance on the contents of agency IRM plans in a single place, stating that its focus has been on looking at agencies' cumulative results and not on planning documents. In addition, these staff noted that agencies account for their IRM activities through multiple documents (e.g., Information Collection Budgets and Government Paperwork Elimination Act plans). However, the OMB IT staff stated that they would look at whether more guidance is needed to help agencies in their development of IRM plans but have not yet committed to providing such guidance. Half the agencies indicated a need for OMB to provide additional guidance on the development and content of IRM plans. Strong agency strategic IRM plans could also provide valuable input to a governmentwide IRM plan, which is also required by the Paperwork Reduction Act. As we reported last year, although OMB designated the CIO Council's strategic plan for fiscal years 2001-2002 as the governmentwide strategic IRM plan, it does not constitute an effective and comprehensive strategic vision. Accordingly, we recommended that OMB develop and implement a governmentwide strategic IRM plan that articulates a comprehensive federal vision and plan for all aspects of government information. In April 2003, we testified that OMB had taken a number of actions that demonstrate progress in fulfilling the Paperwork Reduction Act's requirement of providing a unifying IRM vision. However, more remains to be done. In particular, we reported that although OMB's strategies and models are promising, their ability to reduce paperwork burden and accomplish other objectives depends on how OMB implements them. One element that the Clinger-Cohen Act requires to be included in agency IRM plans is the identification of any major IT acquisition program, or any phase or increment of such a program, that significantly deviated from cost, performance, or schedule goals established for the program. However, few agencies met this requirement. In these cases, a common reason cited for not including this information was that it was not appropriate to have such detailed information in a strategic plan because such plans should be forward-thinking and may not be developed every year. Agencies also identified other mechanisms that they use to track and report cost, schedule, and performance deviations. Because agencies generally do not address this Clinger-Cohen Act requirement in their IRM plans, they may benefit from additional OMB guidance on how to meet it. IT goals. The Paperwork Reduction Act and the Clinger-Cohen Act require agencies to establish goals that address how IT contributes to program productivity, efficiency, effectiveness, and service delivery to the public.
We have previously reported that leading organizations define specific goals, objectives, and measures; use a diversity of measure types; and describe how IT outputs and outcomes affect operational customer and agency program delivery requirements. The agencies generally have the types of goals outlined in the Paperwork Reduction Act and the Clinger-Cohen Act. For example, the Social Security Administration (SSA) set a goal of achieving an average of at least a 2 percent per year improvement in productivity, and it expects that advances in automation will be a key to achieving this goal, along with process and regulation changes. In addition, the Department of Veterans Affairs' (VA) latest departmental strategic plan has a goal that includes using business process reengineering and technology integration to speed up delivery of benefit payments, improve the quality of health care provided in its medical centers, and administer programs more efficiently. The VA goal includes strategies such as using its enterprise architecture as a continuous improvement process, implementing e-government solutions to transform paper-based collections into electronic-based mechanisms, and establishing a single, high-performance wide area data network. Five agencies do not have one or more of the goals required by the Paperwork Reduction Act and the Clinger-Cohen Act. For example, the Department of Labor's single IT strategic goal—to provide better and more secure service to citizens, businesses, government, and Labor employees to improve mission performance—which it included in its fiscal year 2004 performance plan, does not address all required goals. Further, in contrast to other agencies, Labor does not have goals in its IRM plan. It is important that agencies specify clear goals and objectives to set the focus and direction of IT performance. IT performance measures. The Paperwork Reduction Act, the Clinger-Cohen Act, and Executive Order 13103 require agencies to establish a variety of IT performance measures, such as those related to how IT contributes to program productivity, efficiency, and effectiveness, and to monitor the actual-versus-expected performance of those measures. As we have previously reported, an effective performance management system offers a variety of benefits, including serving as an early warning indicator of problems and of the effectiveness of corrective actions, providing input to resource allocation and planning, and providing periodic feedback to employees, customers, stakeholders, and the general public about the quality, quantity, cost, and timeliness of products and services. Although the agencies largely have one or more of the required performance measures, these measures are not always linked to the agencies' enterprisewide IT goals. For example, the Department of Defense (DOD), the Air Force, and the Navy have a variety of enterprisewide IT goals but do not have performance measures associated with these goals. Each of these organizations is in the process of developing such measures. To illustrate, the Air Force's August 2002 information strategy includes nine goals, such as providing decision makers and all Air Force personnel with on-demand access to authoritative, relevant, and sufficient information to perform their duties efficiently and effectively, but does not have performance measures for these goals.
The Air Force recognizes the importance of linking performance measures to its goals and is developing such measures, which it expects to complete by the fourth quarter of fiscal year 2004. Leading organizations use performance measures to objectively evaluate mission, business, and project outcomes. Such organizations also apply performance measures to key management processes and tailor performance measures to determine whether IT is making a difference in improving performance. Few agencies monitored actual-versus-expected performance for all of their enterprisewide IT goals. Specifically, although some agencies tracked actual-versus-expected outcomes for the IT performance measures in their performance plans or accountability reports and/or for specific IT projects, they generally did not track the performance measures specified in their IRM plans. For example, although the Department of Health and Human Services' (HHS) IT strategic plan identifies enterprisewide goals and performance measures, these measures generally do not identify quantified outcomes (e.g., the measures indicate that the outcome will be a percentage transaction increase or cost decrease in certain areas but do not provide a baseline or target). In addition, the HHS plan does not describe how the department will monitor actual-versus-expected performance for these measures. HHS's Director of Business Operations in its IRM office reported that the department recognizes the need to develop an integrated program for monitoring performance against the enterprisewide measures in the IT strategic plan. He stated that HHS has recently begun an initiative to establish such a process. By not measuring actual-versus-expected performance, agencies lack the information needed to determine where to target resources to improve overall mission accomplishment. Benchmarking. The Clinger-Cohen Act requires agencies to quantitatively benchmark agency process performance against public- and private-sector organizations, where comparable processes and organizations exist. Organizations benchmark because external organizations may have more innovative or more efficient processes than their own. Our previous study of IT performance measurement at leading organizations found that they had spent considerable time and effort comparing their performance information with that of other organizations. Seven agencies have mechanisms—such as policies and strategies—in place related to benchmarking their IT processes. For example, DOD's information resources and IT directive states that DOD components shall routinely and systematically benchmark their functional processes against models of excellence in the public and private sectors and use these and other analyses to develop, simplify, or refine the processes before IT solutions are applied. In general, however, agencies' benchmarking decisions are ad hoc. Few agencies have developed a mechanism to identify comparable external private- or public-sector organizations and processes and/or have policies related to benchmarking; however, all but 10 of the agencies provided examples of benchmarking that had been performed.
For example, the Small Business Administration (SBA) does not have benchmarking policies in place, but the agency provided an example of a benchmarking study performed by a contractor that compared SBA's IT operations and processes against industry cost and performance benchmarks and best practices and resulted in recommendations for improvement. Table 1 provides additional detail on each strategic planning/performance measurement practice and our evaluation of whether each agency had the practice in place. The table indicates that work remains before the agencies will have each of the practices fully in place; it also shows that several agencies reported that they were taking, or planned to take, actions to address the practices or elements of practices. Agency IT officials could not always identify why practices were not in place, but in those instances in which reasons were identified, a variety of explanations were provided. For example, reasons cited by agency IT officials included that they lacked support from agency leadership, that the agency had not been developing IRM plans until recently and recognized that the plan needed further refinement, that the process was being revised (in at least one case because of changes needed to reflect a loss of component organizations to the new DHS), and that requirements were evolving. In other cases, the agency reported that it had the information but that it was not in the format required by legislation. For instance, FISMA requires agencies to include in the performance plans required by the Government Performance and Results Act the resources, including budget, staffing, and training, and the time periods needed to implement their information security programs. None of the agencies included this information in their performance plans. However, the agencies commonly reported that they had this information but that it was in another document. Nevertheless, this does not negate the need for the agencies to report to the Congress in the required form. This is particularly important since, as in the example of the FISMA requirement, the reporting requirement involves a public document, whereas other reports may not be publicly available. In the case of DHS, while we did not include the department in our assessment and in table 1, the department is in the process of developing its first IT strategic plan. According to DHS, it expects to complete this plan by mid-February 2004. The use of IT investment management practices is mixed (as shown in fig. 2), which demonstrates that agencies do not have all the processes in place to effectively select, control, and evaluate investments. An IT investment management process is an integrated approach to managing investments that provides for the continuous identification, selection, control, life-cycle management, and evaluation of IT investments. Among the investment management practices that are most frequently in place are having investment management boards and requiring that projects demonstrate that they are economically beneficial. Practices less commonly in place are those requiring that IT investments be performed in a modular, or incremental, manner and that they be effectively controlled. Only by effectively and efficiently managing their IT resources through a robust investment management process can agencies gain opportunities to make better allocation decisions among many investment alternatives and further leverage their IT investments.
Critical aspects of IT investment management include developing well-supported proposals, establishing investment management boards, and selecting and controlling IT investments. The agencies' use of practices associated with these aspects of investment management is wide-ranging, as follows: IT investment proposals. Various legislative requirements, an executive order, and OMB policies provide minimum standards that govern agencies' consideration of IT investments. In addition, we have issued guidance to agencies for selecting, controlling, and evaluating IT investments. Such processes help ensure, for example, that investments are cost-beneficial and meet mission needs and that the most appropriate development or acquisition approach is chosen. The agencies in our review have mixed results when evaluated against these various criteria. For example, the agencies almost always require that proposed investments demonstrate that they support the agency's business needs, are cost-beneficial, address security issues, and consider alternatives. To demonstrate, the Department of Transportation requires that proposed projects complete a business case to indicate that the project (1) will meet basic requirements in areas such as mission need, affordability, technical standards, and disabled access requirements, (2) is economically beneficial, and (3) has considered alternatives. One element in this area that agencies were not as likely to have fully in place was the Clinger-Cohen Act requirement that agencies follow, to the maximum extent practicable, a modular, or incremental, approach when investing in IT projects. Incremental investment helps to mitigate the risks inherent in large IT acquisitions/developments by breaking apart a single large project into smaller, independently useful components with known and defined relationships and dependencies. An example of such an approach is DOD's policy stating that IT acquisition decisions should be based on phased, evolutionary segments that are as brief and narrow in scope as possible and that each segment should solve a specific part of an overall mission problem and deliver a measurable net benefit independent of future segments. However, 14 agencies do not have a policy that calls for investments to be done in a modular manner. For example, although the Environmental Protection Agency (EPA) reported that it worked with program offices to try to segment work so that the scope and size of each project is manageable, it does not have a policy that calls for investments to be done in a modular manner. The absence of a policy calls into question whether EPA is implementing incremental investment in a consistent and effective manner. Investment management boards. Our investment management guide states that establishing one or more IT investment boards is a key component of the investment management process. According to our guide, the membership of this board should include key business executives and should be responsible for final project funding decisions or should provide recommendations for the projects under its scope of authority. Such executive-level boards, made up of business-unit executives, concentrate management's attention on assessing and managing risks and regulating the trade-offs between continued funding of existing operations and developing new performance capabilities. Almost all of the agencies in our review have one or more enterprise-level investment management boards.
For example, HUD's Technology Investment Board Executive Committee and supporting boards have responsibility for selecting, controlling, and evaluating the department's IT investments. HUD's contractor-performed maturity audits also have helped the department validate its board structure and its related investment management processes. However, the investment management boards for six agencies are not involved, or the agency did not document the board's involvement, in the control phase. For example, the National Science Foundation (NSF) has a CIO advisory group that addresses only the select phase of the IT investment management process. NSF's CIO explained that the agency reviews the progress of its major information system projects through other means, such as meetings with management. In providing comments on a draft of this report, the CIO stated that he believes that NSF has a comprehensive set of management processes and review structures to select, control, and evaluate IT investments and cited various groups and committees used as part of this process. However, NSF's summary of its investment management process and memo establishing the CIO advisory group include only general statements related to the oversight of IT investments, and NSF provided no additional documentation demonstrating that its investment management board plays a role in the control and evaluation phases. Our investment management guidance identifies having an IT investment management board responsible for project oversight as a critical process. Maintaining responsibility for oversight with the same body that selected the investment is crucial to fostering a culture of accountability by holding the investment board that initially selected an investment responsible for its ongoing success. In addition, 17 agencies do not fully address the practice that calls for processes to be in place that address the coordination and alignment of multiple investment review boards. For example, we recently reported that the Department of the Interior has established three department-level IT investment boards and begun to take steps to ensure that investment boards are established at the bureau level. However, at the time of our review, the department (1) could not assert that department-level board members exhibited core competencies in using Interior's IT investment approach and (2) had limited ability to oversee investments in its bureaus. We made recommendations to Interior to strengthen both the activities of the department-level boards and the department's ability to oversee investment management activities at the bureaus. Selection of IT investments. During the selection phase of an IT investment management process, the organization (1) selects projects that will best support its mission needs and (2) identifies and analyzes each project's risks and returns before committing significant funds. To achieve desired results, it is important that agencies have a selection process that, for example, uses selection criteria to choose the IT investments that best support the organization's mission and prioritizes proposals. Twenty-two agencies use selection criteria in choosing their IT investments. In addition, about half the agencies use scoring models to help choose their investments; a simplified sketch of such a model follows.
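To make concrete what a scoring model of this kind computes, the sketch below shows a simple weighted-sum calculation. The criteria names, weights, and rating scale are hypothetical and only loosely patterned on the value and health categories discussed in the agency example that follows; no agency's actual model is represented here.

```python
# Illustrative sketch of a weighted IT investment scoring model.
# Criteria names, weights, and the 0-10 rating scale are hypothetical.
def score_investment(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of criterion ratings (each rating on a 0-10 scale)."""
    return sum(weights[c] * ratings[c] for c in weights)

weights = {
    # "value" criteria: impact and significance of the initiative
    "mission_alignment": 0.30, "expected_benefit": 0.25,
    # "health" criteria: likelihood the initiative will succeed
    "project_management_risk": 0.25, "technical_risk": 0.20,
}
proposal = {"mission_alignment": 8, "expected_benefit": 6,
            "project_management_risk": 7, "technical_risk": 5}
print(f"score: {score_investment(proposal, weights):.2f}")  # score: 6.65
```

Proposals can then be ranked by score, giving an investment review board a consistent, documented basis for recommending which investments to fund.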
For example, the working group and CIO office officials that support the Department of Education's investment review board used a scoring model as part of deciding which IT investments to recommend for the board's consideration and approval. This model contained two main categories of criteria: (1) value criteria that measured the impact and significance of the initiative, given project goals and the strategic objectives of the department; and (2) health criteria that measured the potential for the success of the initiative and helped to assess both the performance and the associated risks that are involved in project and contract management. In the case of DOD, in February 2003 we reported that it had established some IT investment criteria and was establishing others, but these criteria had not been finalized. Accordingly, we recommended, and DOD concurred, that DOD establish a standard set of criteria. In September 2003 we reported that this recommendation had not been implemented. DOD officials stated that the department was developing the criteria but that the proposed governance structure had not yet been adopted. Control over IT investments. During the control phase of the IT investment management process, the organization ensures that, as projects develop and as funds are spent, the project is continuing to meet mission needs at the expected levels of cost and risk. If the project is not meeting expectations or if problems have arisen, steps are quickly taken to address the deficiencies. Executive-level oversight of project-level management activities provides the organization with increased assurance that each investment will achieve the desired cost, benefit, and schedule results. Although no agencies had the practices associated with the control phase fully in place, some had implemented important aspects of this phase. For example, Labor requires project managers to prepare a control status report based on a review schedule established during the selection phase, which is reviewed by the Office of the CIO and its technical review board as part of determining whether to continue, modify, or cancel the initiative. For initiatives meeting certain criteria, the technical review board makes recommendations to the management council, which serves as the department's top-tier executive investment review council, is chaired by the Assistant Secretary for Administration and Management, and consists of component agency heads. Nevertheless, in general, the agencies are weaker in the practices pertaining to the control phase of the investment management process than in those pertaining to the selection phase. In particular, the agencies did not always have important mechanisms in place for agencywide investment management boards to effectively control investments, including decision-making rules for project oversight, early warning mechanisms, and/or requirements that corrective actions for under-performing projects be agreed upon and tracked. For example, the Department of the Treasury does not have a department-level control process; instead, each bureau may conduct its own reviews that address the performance of its IT investments and corrective actions for under-performing projects.
In a multitiered organization like Treasury, the department is responsible for providing leadership and oversight for foundational critical processes by ensuring that written policies and procedures are established, repositories of information that support IT investment decision making are created, resources are allocated, responsibilities are assigned, and all of these activities are properly carried out where they may be most effectively executed. In such an organization, the CIO is specifically responsible for ensuring that the organization is effectively managing its IT investments at every level. Treasury IT officials recognize the department’s weaknesses in this area and informed us that they are developing a new capital planning and investment control process that is expected to address these weaknesses. Similarly, the Department of Energy plans to implement in fiscal year 2004 the investment control process outlined in its September 2003 capital planning and investment control guide, which addresses important elements such as corrective action plans. However, this guide does not document the role of Energy’s investment management boards in this process.

Table 2 provides additional detail on each investment management practice and our evaluation of whether each agency had the practice in place. The table indicates those practices in which improvement is needed, as well as which agencies reported that they were taking, or planned to take, actions to address the practices or elements of practices. Among the variety of reasons cited for practices not being fully in place were that the CIO position had been vacant, that not including a requirement in the IT investment management guide was an oversight, and that the process was being revised. However, in some cases the agencies could not identify why certain practices were not in place. Although we did not include DHS in our assessment or in table 2, the department has put investment management processes in place or is in the process of doing so.

Federal agencies did not always have in place important practices associated with IT laws, policies, and guidance. At the governmentwide level, agencies generally have IT strategic plans or information resources management (IRM) plans that address IT elements, such as security and enterprise architecture, but these plans do not cover other aspects of IRM that are part of the Paperwork Reduction Act, such as information collection, records management, and privacy. This may be attributed, in part, to OMB not having established comprehensive guidance for the agencies detailing the elements that should be included in such a plan. There were also numerous instances of individual agencies that did not have specific IT strategic planning, performance measurement, or investment management practices fully in place. Agencies cited a variety of reasons for not having these practices in place, such as that the CIO position had been vacant, that not including a requirement in guidance was an oversight, or that the process was being revised. Nevertheless, not only are these practices based on law, executive orders, OMB policies, and our guidance, but they are also important ingredients for ensuring effective strategic planning, performance measurement, and investment management, which, in turn, make it more likely that the billions of dollars in government IT investments will be wisely spent.
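To illustrate one of the control-phase mechanisms discussed above, the following is a minimal sketch, in Python, of an early warning check that flags projects whose reported cost or schedule deviations exceed a threshold so that an investment board can require and track corrective actions. The threshold, data structure, and project figures are hypothetical, not any agency’s actual mechanism.

```python
# Hypothetical early warning check for the control phase: flag projects whose
# cost or schedule variance exceeds a threshold so that the investment board
# can require corrective action. Thresholds and project data are illustrative.

from dataclasses import dataclass

VARIANCE_THRESHOLD = 0.10  # flag deviations greater than 10 percent

@dataclass
class ProjectStatus:
    name: str
    planned_cost: float      # budgeted cost of work scheduled to date
    actual_cost: float       # actual cost of work performed to date
    planned_milestones: int  # milestones scheduled to be complete to date
    met_milestones: int      # milestones actually complete to date

    def cost_variance(self) -> float:
        return (self.actual_cost - self.planned_cost) / self.planned_cost

    def schedule_variance(self) -> float:
        return (self.planned_milestones - self.met_milestones) / self.planned_milestones

def review(projects):
    """Return the projects that breach a variance threshold, for board review."""
    return [p for p in projects
            if p.cost_variance() > VARIANCE_THRESHOLD
            or p.schedule_variance() > VARIANCE_THRESHOLD]

statuses = [
    ProjectStatus("Grants system", planned_cost=4.0, actual_cost=4.8,
                  planned_milestones=10, met_milestones=7),
    ProjectStatus("HR portal", planned_cost=2.0, actual_cost=2.1,
                  planned_milestones=8, met_milestones=8),
]

for project in review(statuses):
    print(f"Flag for board review: {project.name} "
          f"(cost {project.cost_variance():+.0%}, schedule slip {project.schedule_variance():.0%})")
```

The particular threshold matters less than the discipline such a mechanism imposes: deviations are surfaced to the same board that selected the investment, so corrective actions can be agreed upon and tracked rather than discovered late.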
Accordingly, we believe that it is important that individual agencies expeditiously implement these practices.

To help agencies develop strategic IRM plans that fully comply with the Paperwork Reduction Act of 1995, we recommend that the Director, OMB, develop and disseminate to agencies guidance on developing such plans. At a minimum, such guidance should address all elements of IRM, as defined by the Paperwork Reduction Act. As part of this guidance, OMB should also consider the most effective means for agencies to communicate information about any major IT acquisition program, or any phase or increment of such a program, that significantly deviated from cost, performance, or schedule goals established by the program. One option for communicating this information, for example, could be through the annual agency performance reports that are required by the Government Performance and Results Act.

We are also generally making recommendations to the agencies in our review regarding those practices that are not fully in place unless, for example, (1) we have outstanding recommendations related to the practice, (2) the agency has a draft document addressing the practice, or (3) implementation of the practice was ongoing. Appendix I contains these recommendations.

We received written or oral comments on a draft of this report from OMB and 25 of the agencies in our review. We also requested comments from the Department of Homeland Security and the Office of Personnel Management, but none were provided.

Regarding OMB, in oral comments on a draft of this report, representatives from OMB’s Office of Information and Regulatory Affairs and Office of the General Counsel questioned the need for additional IRM plan guidance because they do not want to be prescriptive in terms of what agencies include in their plans. We continue to believe that agencies need additional guidance from OMB on the development and content of their IRM plans because OMB Circular A-130 does not provide overall guidance on the contents of agency IRM plans, and half the agencies indicated a need for OMB to provide additional guidance in this area. Further, additional guidance would help to ensure that agency plans address all elements of IRM, as defined by the Paperwork Reduction Act. A strategic IRM plan that communicates a clear and comprehensive vision for how the agency will use information resources to improve agency performance is important because IRM encompasses virtually all aspects of an agency’s information activities.

In commenting on a draft of the report, most of the agencies in our review generally agreed with our findings and recommendations. The agencies’ specific comments are as follows:

Agriculture’s CIO stated that the department concurred with the findings in this report and provided information on actions it was taking, or planned to take, to implement the recommendations. Agriculture’s written comments are reproduced in appendix II.

The Secretary of Commerce concurred with the recommendations in this report and stated that, in response, the department is updating its policies and procedures. Commerce’s written comments are reproduced in appendix III.

DOD submitted a single letter that included comments from the Departments of the Air Force, Army, and Navy. In this letter, DOD generally concurred with the recommendations in this report. DOD also provided additional documentation and information on actions that it is taking, or planned to take, to address these recommendations.
We modified our report based on these comments and documentation, as appropriate. DOD’s written comments, along with our responses, are reproduced in appendix IV.

Education’s Assistant Secretary for Management/CIO stated that the agency generally agreed with our assessment of the department’s use of IT strategic planning/performance measurement and investment management practices. Education provided additional comments and documentation related to two of our practices. We modified our report on the basis of these comments and documentation, as appropriate. Education’s written comments, along with our responses, are reproduced in appendix V.

Energy’s Director of Architecture and Standards provided e-mail comments stating that the department believes that GAO fairly depicted where the department currently stands in the IT investment management process. The director also provided other comments that were technical in nature and that we addressed, as appropriate.

EPA’s Assistant Administrator/CIO generally agreed with our findings and recommendations on the need to complete work currently under way to formalize the documentation of IT management practices. However, EPA questioned our characterization of the agency’s IT management and strategic planning and provided other comments, which we addressed, as appropriate. EPA’s written comments, along with our responses, are reproduced in appendix VI.

GSA’s CIO stated that the agency generally agreed with the findings and recommendations in the report. GSA provided suggested changes and additional information and documentation related to nine of our practices and two recommendations. We modified our report on the basis of these comments and documentation, as appropriate. GSA’s written comments, along with our responses, are reproduced in appendix VII.

HHS’s Acting Principal Deputy Inspector General stated that the department concurred with the findings and recommendations of the report. HHS’s written comments are reproduced in appendix VIII.

HUD’s Assistant Secretary for Administration/CIO stated that the department was in agreement with the recommendations in this report. HUD’s written comments are reproduced in appendix IX.

Interior’s Acting Assistant Secretary for Policy, Management and Budget stated that the recommendations in our report would further improve the department’s IT investment management. Interior’s written comments are reproduced in appendix X.

Justice’s CIO stated that, overall, the department concurred with the findings and recommendations in this report, noting that our recommendations will assist in further defining IT strategic planning, performance measurement, and investment management practices. Justice’s written comments, along with our response, are reproduced in appendix XI.

Labor’s Assistant Secretary for Administration and Management/CIO reported that the department generally concurred with this report and provided suggested changes in two areas, which we addressed, as appropriate. Labor’s written comments, along with our responses, are reproduced in appendix XII.

NASA’s Deputy Administrator reported that the agency generally concurred with the recommendations in this report and provided additional information on actions that it is taking, or planned to take, to address these recommendations. NASA’s written comments, along with our response, are reproduced in appendix XIII.

NSF’s CIO provided e-mail comments disagreeing with three areas of this report.
First, NSF did not agree with our assessment of practice 1.1, stating that the agency has a comprehensive agency-level planning framework that includes a suite of planning documents and internal and external oversight activities that it believes addresses IT planning requirements. However, our review of the planning documents cited by NSF in its self-assessment found that they did not address the elements of the practice. In particular, the agency did not describe the responsibility and accountability for IT resources or the method that it uses to define program information needs and how such needs will be met. Moreover, in our exit conference with NSF officials, the CIO indicated agreement with our assessment. Since NSF provided no additional documentation, we did not modify the report.

Second, the CIO disagreed with our characterization of the agency’s enterprisewide investment management board. We modified the report to reflect the CIO’s comments; however, we did not change our overall assessment of the role of the board because NSF’s summary of its investment management process and the memo establishing the CIO advisory group include only general statements related to the oversight of IT investments, and NSF provided no additional documentation demonstrating that its investment management board plays a role in the control and evaluation phases.

Third, the CIO stated that NSF has established processes, management, and oversight controls over IT investments. However, NSF provided limited documentation on the control phase of its investment management process; as noted above, the available documentation includes only general statements related to the oversight of IT investments. Accordingly, we did not modify the report.

NRC’s Executive Director for Operations stated that this report provides useful information and agreed that the practices are important for ensuring effective use of government IT investments but had no specific comments. NRC’s written comments are reproduced in appendix XIV.

SBA’s GAO liaison provided e-mail comments questioning the need for its enterprise investment management board to have final decision-making authority over IT investments. Our IT investment management guidance states that enterprise-level IT investment boards should be capable of reviewing lower-level board actions and invoking final decision-making authority over all IT investments. In particular, if disputes or disagreements arise over decision-making jurisdiction about a specific IT investment project, the enterprise board must be able to resolve the issue. Accordingly, we did not modify the report. SBA also provided technical comments that we incorporated, as appropriate.

SSA’s Commissioner generally agreed with the recommendations in the report and provided comments on each recommendation that we addressed, as appropriate. SSA’s written comments, along with our responses, are reproduced in appendix XV.

State’s Assistant Secretary/Chief Financial Officer stated that the findings in the report are consistent with discussions held with its IT staff and provided additional information on four practices. On the basis of this additional information, we modified our report, as appropriate. State’s written comments, along with our response, are reproduced in appendix XVI.
A program analyst in the Department of Transportation’s Office of the CIO provided oral comments that were technical in nature and that we addressed, as appropriate.

The Acting Director, Budget and Administrative Management, in Treasury’s Office of the CIO provided oral comments stating that the department concurred with our findings and recommendations. The official further stated that the department recognized its shortcomings and was working to correct them.

USAID’s Assistant Administrator, Bureau for Management, did not address whether the agency agreed or disagreed with our overall findings or recommendations but commented on our evaluation of two practices, which we addressed, as appropriate. USAID’s written comments, along with our response, are reproduced in appendix XVII.

The Secretary of VA stated that the department concurred with the recommendations in the report and provided comments on actions that it has taken, or planned to take, in response. We modified the report based on these comments, as appropriate. VA’s written comments, along with our responses, are reproduced in appendix XVIII.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the secretaries of the Departments of Agriculture, the Air Force, the Army, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, the Navy, State, Transportation, the Treasury, and Veterans Affairs; the administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration, and U.S. Agency for International Development; the commissioners of the Nuclear Regulatory Commission and the Social Security Administration; and the directors of the National Science Foundation, Office of Management and Budget, and Office of Personnel Management. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov.

If you have any questions on matters discussed in this report, please contact me at (202) 512-9286 or Linda J. Lambert, Assistant Director, at (202) 512-9556. We can also be reached by e-mail at [email protected] and [email protected], respectively. Other contacts and key contributors to this report are listed in appendix XIX.
To improve the department’s information technology (IT) strategic planning/performance measurement processes, we recommend that the Secretary of Agriculture take the following six actions: document the department’s IT strategic management processes and how they are integrated with other major departmental processes, such as the budget and human resources management; include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by the Federal Information Security Management Act (FISMA) and include a description of major IT acquisitions contained in its capital asset plan that bear significantly on its performance goals; implement a process for assigning roles and responsibilities for achieving the department’s IT goals; develop performance measures related to the effectiveness of controls to prevent software piracy; track actual-versus-expected performance for the department’s enterprisewide IT performance measures in its information resources management (IRM) plan; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.

To improve the department’s IT investment management processes, we recommend that the Secretary of Agriculture take the following four actions: include a description of the relationship between the IT investment management process and the department’s enterprise architecture in its IT capital planning and investment control guide and require that IT investments be in compliance with the agency’s enterprise architecture; document the alignment and coordination of responsibilities of the department’s various IT investment management boards for decision making related to IT investments, including cross-cutting investments; establish a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness and that makes maximum use of commercial-off-the-shelf (COTS) software; and establish a policy requiring modularized IT investments.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of the Air Force take the following two actions: establish a documented process for measuring progress against the department’s IT goals and assign roles and responsibilities for achieving these goals; and develop IT performance measures related to the IT goals in the department’s information strategy, including measures such as those contained in practice 1.9 in our report, and track actual-versus-expected performance.
To improve the department’s IT investment management processes, we recommend that the Secretary of the Air Force take the following four actions: include a description of the relationship between the IT investment management process and the department’s enterprise architecture, and an identification of external and environmental factors, in its portfolio management guide; include costs, benefits, schedule, and risk elements as well as measures such as net benefits, net risks, and risk-adjusted return-on-investment in the department’s project selection criteria; implement a scoring model and develop a prioritized list of IT investments as part of its project selection process; and document the role, responsibility, and authority of its IT investment management boards, including work processes, alignment, and coordination of decision making among its various boards, and document processes for controlling and evaluating IT investments, such as those outlined in practices 2.15, 2.16, 2.17, and 2.18.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of the Army take the following action: complete the development of IT performance measures related to the Army’s enterprisewide IT goals, including measures such as those in practice 1.9 in our report, and track actual-versus-expected performance.

To improve the department’s IT investment management processes, we recommend that the Secretary of the Army take the following four actions: include a description of the relationship between the IT investment management process and the department’s enterprise architecture in the department’s IT capital planning and investment control guide; document the alignment and coordination of responsibilities of its various IT investment management boards for decision making related to IT investments; include costs, benefits, schedule, and risk elements as well as measures such as net benefits, net risks, and risk-adjusted return-on-investment in the department’s project selection criteria; and involve the department’s IT investment management boards in controlling and evaluating IT investments, including the development and documentation of oversight processes such as those in practices 2.15, 2.16, 2.17, and 2.18.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Commerce take the following four actions: include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; document its process of developing IT goals in support of agency needs, measuring progress against these goals, and assigning roles and responsibilities for achieving these goals; develop performance measures related to the department’s IT goals in its IRM plan, and track actual-versus-expected performance for these IT performance measures; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.
To improve the department’s IT investment management processes, we recommend that the Secretary of Commerce take the following eight actions: document the alignment and coordination of responsibilities of the department’s various IT investment management boards for decision making related to IT investments; establish a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness and that makes maximum use of COTS software; include net risks and risk-adjusted return-on-investment in the department’s project selection criteria; establish a policy requiring modularized IT investments; develop decision-making rules to help guide the investment management board’s oversight of IT investments during the control phase; require that reports of deviations in systems capability in a project be submitted to the IT investment management board; develop an early warning mechanism that enables the investment management board to take corrective action at the first sign of cost, schedule, or performance slippages; and require that postimplementation reviews be completed and the results reported to its investment management board.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Defense take the following three actions: include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA, align its performance measures with the goals in the plan, and include a description of major IT acquisitions contained in its capital asset plan that bear significantly on its performance goals; establish a documented process for measuring progress against the department’s IT goals; and develop IT performance measures related to its IT goals, including, for example, the measures contained in practice 1.9 in our report, and track actual-versus-expected performance.

To improve the department’s IT investment management processes, we recommend that the Secretary of Defense take the following action: document, as part of its planned IT portfolio management process, how this process relates to other departmental processes and the department’s enterprise architecture, and document the external and environmental factors that influence the process.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Education take the following four actions: include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; establish and document a process for measuring progress against the department’s IT goals in its IRM plan and for assigning roles and responsibilities for achieving these goals; develop performance measures related to how IT contributes to program productivity, the effectiveness and efficiency of agency operations, and the effectiveness of controls to prevent software piracy; and track actual-versus-expected performance for the department’s enterprisewide IT performance measures in its IRM plan.
To improve the department’s IT investment management processes, we recommend that the Secretary of Education take the following five actions: document the alignment and coordination of responsibilities of the department’s various IT investment management boards for decision making related to IT investments; establish a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs; include net risks and risk-adjusted return-on-investment in the department’s project selection criteria; develop a process to use independent verification and validation reviews, when appropriate; and track the resolution of corrective actions for under-performing projects and report the results to the investment management board.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Energy take the following six actions: document how its IT management operations and decisions are integrated with human resources management; include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a goal related to how IT contributes to program productivity; develop performance measures related to how IT contributes to program productivity and the effectiveness of controls to prevent software piracy; develop and link performance measures to the department’s enterprisewide goals in its IRM plan and track actual-versus-expected performance for these measures; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.

To improve the department’s IT investment management processes, we recommend that the Secretary of Energy take the following four actions: include interfaces in its inventory of the agency’s major information systems, implement a standard, documented procedure to maintain this inventory, and develop a mechanism to use the inventory as part of managerial decision making; prioritize the department’s IT proposals; establish a policy requiring modularized IT investments; and document the role, responsibility, and authority of its IT investment management boards, including work processes, alignment, and coordination of decision making among its various boards, and document the processes for controlling and evaluating IT investments, such as those in practices 2.15, 2.16, 2.17, and 2.18.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Administrator of the Environmental Protection Agency take the following six actions: document the agency’s IT strategic management processes and how they are integrated with other major departmental processes, such as the budget and human resources management; include in the agency’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a documented process to develop IT goals in support of agency needs, measure progress against these goals, and assign roles and responsibilities for achieving these goals; develop performance measures related to the effectiveness of controls to prevent software piracy; track actual-versus-expected performance for the agency’s measures associated with the IT goals in its IRM plan; and develop a mechanism for benchmarking the agency’s IT management processes, when appropriate.
To improve the agency’s IT investment management processes, we recommend that the Administrator of the Environmental Protection Agency take the following three actions: include net risks, risk-adjusted return-on-investment, and qualitative criteria in the agency’s project selection criteria; establish a policy requiring modularized IT investments; and fully implement an IT investment management control phase, including the elements contained in practices 2.15, 2.16, and 2.17.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Administrator of the General Services Administration take the following four actions: include in the agency’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop performance measures related to the effectiveness of controls to prevent software piracy; track actual-versus-expected performance for each of the agency’s measures associated with the IT goals in its IRM plan; and develop a mechanism for benchmarking the agency’s IT management processes, when appropriate.

To improve the agency’s IT investment management processes, we recommend that the Administrator of the General Services Administration take the following four actions: develop work processes and decision-making processes for the agency’s investment management boards; establish a policy requiring modularized IT investments; help guide the oversight of IT investments by developing clear decision-making rules for its IT investment management board and by requiring that IT projects report on deviations in system capability; and track the resolution of corrective actions for under-performing projects and report the results to the investment management board.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Health and Human Services take the following six actions: document the department’s IT strategic management processes and how they are integrated with its budget processes; include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA and include a description of major IT acquisitions contained in its capital asset plan that bear significantly on its performance goals; establish a documented process for measuring progress against the department’s IT goals; develop performance measures related to the effectiveness of controls to prevent software piracy; track actual-versus-expected performance for its enterprisewide IT performance measures in its IRM plan; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.

To improve the department’s IT investment management processes, we recommend that the Secretary of Health and Human Services take the following 10 actions: revise the department’s IT investment management policy to include (1) how this process relates to other agency processes, (2) an identification of external and environmental factors, (3) a description of the relationship between the process and the department’s enterprise architecture, and (4) the use of independent verification and validation reviews, when appropriate;
develop procedures for the department’s enterprisewide investment management board to document and review IT investments; document the alignment and coordination of responsibilities of the department’s various IT investment management boards for decision making related to IT investments; implement a standard, documented procedure to maintain the department’s inventory of major information systems and develop a mechanism to use the inventory as part of managerial decision making; establish a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness; implement a structured IT selection process that includes processes and criteria such as those in practices 2.12 and 2.13; develop decision-making rules to help guide the investment management board’s oversight of IT investments during the control phase; require the investment management board to review projects at major milestones; track the resolution of corrective actions for under-performing projects and report the results to the investment management board; and revise the department’s investment management policy to require postimplementation reviews to address validating benefits and costs, and conduct such reviews.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Housing and Urban Development take the following six actions: document the roles and responsibilities of the chief financial officer and program managers in IT strategic planning and how the department’s IT management operations and decisions are integrated with human resources management; include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a documented process to develop IT goals in support of agency needs, measure progress against these goals, and assign roles and responsibilities for achieving these goals; develop performance measures related to how IT contributes to program productivity and the effectiveness of controls to prevent software piracy; track actual-versus-expected performance for the department’s enterprisewide IT performance measures in its IRM plan; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.

To improve the department’s IT investment management processes, we recommend that the Secretary of Housing and Urban Development take the following five actions: establish a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness and that makes maximum use of COTS software; include net risks and risk-adjusted return-on-investment in the department’s project selection criteria; establish a policy requiring modularized IT investments; require IT projects to report on deviations in system capability and monitor IT projects at key milestones; and develop a process to use independent verification and validation reviews, when appropriate.
To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of the Interior take the following six actions: document the department’s IT strategic management processes and how they are integrated with other major departmental processes, including organizational planning, budget, financial management, human resources management, and program decisions; include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA and include a description of major IT acquisitions contained in its capital asset plan that bear significantly on its performance goals; develop a documented process to develop IT goals in support of agency needs, measure progress against these goals, and assign roles and responsibilities for achieving these goals; develop performance measures related to the effectiveness of controls to prevent software piracy; track actual-versus-expected performance for the department’s enterprisewide IT performance measures in its IRM plan; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.

To improve the department’s IT investment management processes, we recommend that the Secretary of the Interior take the following five actions: establish a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness; include cost and schedule in the department’s project selection criteria and prioritize its IT proposals; establish a policy requiring modularized IT investments; require that corrective actions be undertaken, tracked, and reported to the investment management board for under-performing projects; and implement an evaluation process for IT investments that addresses the elements of practice 2.18.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Attorney General take the following six actions: document the department’s IT strategic management processes; document how the department’s IT management operations and decisions are integrated with human resources management processes; include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a documented process to develop IT goals in support of agency needs, measure progress against these goals, and assign roles and responsibilities for achieving these goals; develop performance measures related to the department’s IT goals in its IRM plan, and track actual-versus-expected performance for these IT performance measures; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.
To improve the department’s IT investment management processes, we recommend that the Attorney General take the following five actions: develop work processes and procedures for the department’s investment management boards, including aligning and coordinating IT investment decision making among its various boards; establish a policy requiring that IT investments be in compliance with the agency’s enterprise architecture; include net risks and risk-adjusted return-on-investment in the department’s project selection criteria; implement a scoring model and develop a prioritized list of investments as part of the department’s project selection process; and require that corrective actions be undertaken, tracked, and reported to the investment management board for under-performing projects.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Labor take the following five actions: include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a documented process to develop IT goals in support of agency needs, measure progress against these goals, and assign roles and responsibilities for achieving these goals; develop a goal related to how IT contributes to program productivity; develop performance measures related to how IT contributes to program productivity, efficiency, and the effectiveness of controls to prevent software piracy, and track actual-versus-expected performance; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.

To improve the department’s IT investment management processes, we recommend that the Secretary of Labor take the following five actions: include a description of the relationship between the IT investment management process and the department’s enterprise architecture in the department’s IT capital planning and investment control guide; include net risks and risk-adjusted return-on-investment in its project selection criteria; establish a policy requiring modularized IT investments; develop decision-making rules to help guide the investment management board’s oversight of IT investments during the control phase; and develop an early warning mechanism that enables the investment management board to take corrective action at the first sign of cost, schedule, or performance slippages.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Administrator of the National Aeronautics and Space Administration take the following seven actions: document the agency’s IT strategic management processes; document how the agency’s IT management operations and decisions are integrated with human resources management processes; include in the agency’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a documented process to develop IT goals in support of agency needs, measure progress against these goals, and assign roles and responsibilities for achieving these goals; develop performance measures related to the effectiveness of controls to prevent software piracy; track actual-versus-expected performance for the agency’s enterprisewide IT performance measures in its IRM plan; and develop a mechanism for benchmarking the agency’s IT management processes, when appropriate.
To improve the agency’s IT investment management processes, we recommend that the Administrator of the National Aeronautics and Space Administration take the following four actions: revise the agency’s IT investment management policy and guidance to describe the relationship of this process to the agency’s enterprise architecture; include interfaces in its inventory of the agency’s major information systems, implement a standard, documented procedure to maintain this inventory, and develop a mechanism to use the inventory as part of managerial decision making; within the agency’s IT investment selection process, implement a mechanism to identify possible conflicting, overlapping, strategically unlinked, or redundant proposals, implement a scoring model, and develop a prioritized list of investments; and document the role, responsibility, and authority of its IT investment management boards, including work processes, alignment, and coordination of decision making among its various boards, and document the processes for controlling and evaluating IT investments, such as those in practices 2.15, 2.16, 2.17, and 2.18.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Director of the National Science Foundation take the following five actions: document the agency’s IT strategic management processes; include in the agency’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; implement a process for assigning roles and responsibilities for achieving its IT goals; develop performance measures related to the effectiveness of controls to prevent software piracy; and develop a mechanism for benchmarking the agency’s IT management processes, when appropriate.

To improve the agency’s IT investment management processes, we recommend that the Director of the National Science Foundation take the following four actions: develop an IT investment management guide that includes a description of the relationship between the IT investment management process and the agency’s other organizational plans and processes and its enterprise architecture, and identify in the agency’s IT capital planning and investment control policy the external and environmental factors that influence the process; implement a structured IT selection process that includes the elements of practices 2.12 and 2.13; involve the agency’s IT investment management board in controlling and evaluating IT investments, including the development and documentation of oversight processes such as those in practices 2.15, 2.16, 2.17, and 2.18; and define and document the elements of the agency’s postimplementation reviews.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of the Navy take the following three actions: develop a documented process to measure progress against the department’s enterprisewide IT goals and assign roles and responsibilities for achieving these goals; develop an IT goal related to service delivery to the public; and develop IT performance measures related to the department’s IT goals, including, at a minimum, measures contained in practice 1.9 in our report, and track actual-versus-expected performance.
To improve the department’s IT investment management processes, we recommend that the Secretary of the Navy take the following four actions: include net risks and risk-adjusted return-on-investment in the department’s project selection criteria; implement a structured IT selection process that includes the elements of practices 2.12 and 2.13; involve all elements of the department’s IT investment management board governance process in selecting, controlling, and evaluating IT investments; and document the role, responsibility, and authority of its IT investment management boards, including work processes, alignment, and coordination of decision making among its various boards, and document the processes for controlling and evaluating IT investments, such as those outlined in practices 2.15, 2.16, 2.17, and 2.18.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Commissioner of the Nuclear Regulatory Commission take the following five actions: document the agency’s roles and responsibilities for its IT strategic management processes and how IT planning is integrated with its budget and human resources planning; include in the agency’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a documented process to assign roles and responsibilities for achieving its enterprisewide IT goals; develop performance measures related to the effectiveness of controls to prevent software piracy; and develop performance measures for the agency’s enterprisewide goals in its IRM plan, and track actual-versus-expected performance for these measures.

To improve the agency’s IT investment management processes, we recommend that the Commissioner of the Nuclear Regulatory Commission take the following five actions: include a description of the relationship between the IT investment management process and the agency’s other organizational plans and processes and its enterprise architecture, and identify in the agency’s IT capital planning and investment control policy the external and environmental factors that influence the process; develop work processes and procedures for the agency’s investment management board; implement a standard, documented procedure to maintain its IT asset inventory, and develop a mechanism to use the inventory as part of managerial decision making; develop a structured IT investment management selection process that includes project selection criteria, a scoring model, and prioritization of proposed investments; and document the role, responsibility, and authority of its IT investment management boards, including work processes, and document control and evaluation processes that address the oversight of IT investments, such as those outlined in practices 2.15, 2.16, 2.17, and 2.18.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Director of the Office of Personnel Management take the following four actions: include in the agency’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop performance measures related to the effectiveness of controls to prevent software piracy; track actual-versus-expected performance for the agency’s enterprisewide IT performance measures in its IRM plan; and develop a mechanism for benchmarking the agency’s IT management processes, when appropriate.
To improve the agency’s IT investment management processes, we recommend that the Director of the Office of Personnel Management take the following four actions: develop work processes and procedures for the agency’s investment management board, including establishing criteria for defining major systems and documenting a process for handling cross-functional investments; implement a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness and that makes maximum use of COTS software; establish a policy requiring modularized IT investments; and require that corrective actions be undertaken, tracked, and reported to the investment management board for under-performing projects.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Administrator of the Small Business Administration take the following five actions: document the agency’s IT strategic management processes; include in the agency’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a documented process to develop IT goals in support of agency needs, measure progress against these goals, and assign roles and responsibilities for achieving these goals; develop performance measures related to the agency’s IT goals in its IRM plan, including, at a minimum, measures related to how IT contributes to program productivity, efficiency, effectiveness, the overall performance of its IT programs, and the effectiveness of controls to prevent software piracy, and track actual-versus-expected performance for these IT performance measures; and develop a mechanism for benchmarking the agency’s IT management processes, when appropriate.

To improve the agency’s IT investment management processes, we recommend that the Administrator of the Small Business Administration take the following two actions: document a process by which the investment management board can invoke final decision-making authority over IT investments addressed by lower-level boards; and implement a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Commissioner of the Social Security Administration take the following three actions: include in its annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop performance measures related to the performance of the agency’s IT programs and the effectiveness of controls to prevent software piracy; and develop a mechanism for benchmarking the agency’s IT management processes, when appropriate.

To improve the agency’s IT investment management processes, we recommend that the Commissioner of the Social Security Administration take the following four actions: develop work processes and procedures for the agency’s investment management board; establish a policy requiring modularized IT investments; document the role, responsibility, and authority of its IT investment management board for the oversight of IT investments, such as those processes outlined in practices 2.15, 2.16, and 2.18; and require that corrective actions be tracked and reported to the investment management board for under-performing projects.
To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of State take the following two actions: include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.

To improve the department’s IT investment management processes, we recommend that the Secretary of State take the following five actions: implement a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness and that makes maximum use of COTS software; establish a policy requiring modularized IT investments; include risk-adjusted return-on-investment in the department’s project selection criteria; revise the department’s draft IT investment management policy to include reviewing projects at major milestones; and fully implement an IT investment management control phase, including the elements contained in practices 2.16 and 2.17.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Transportation take the following five actions: document its IT strategic planning process; include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop a goal related to how IT contributes to program productivity; develop performance measures related to the department’s IT goals in its IRM plan, and track actual-versus-expected performance for these IT performance measures; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.

To improve the department’s IT investment management processes, we recommend that the Secretary of Transportation take the following six actions: document the alignment and coordination of responsibilities of the department’s various IT investment management boards for decision making related to IT investments; implement a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness and that makes maximum use of COTS software; prioritize the department’s IT proposals; establish a policy requiring modularized IT investments; develop and document decision-making rules to help guide the investment management board’s oversight of IT investments during the control phase; and as part of the department’s control phase, employ an early warning mechanism, and use independent verification and validation reviews, when appropriate.

To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of the Treasury take the following four actions: include in the department’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; develop performance measures related to the effectiveness of controls to prevent software piracy; develop performance measures related to the department’s IT goals in its IRM plan, and track actual-versus-expected performance for these IT performance measures; and develop a mechanism for benchmarking the department’s IT management processes, when appropriate.
To improve the department’s IT investment management processes, we recommend that the Secretary of the Treasury take the following eight actions: develop a capital planning and investment control guide that includes, for example, the elements of practice 2.1; develop work processes and procedures for the agency’s IT investment management board, and document the alignment and coordination of responsibilities of its various boards for decision making related to investments, including the criteria for determining which investments—including cross-cutting investments—will be reviewed by the enterprisewide board; use the department’s IT asset inventory as part of managerial decision making, including using it to identify the potential for asset duplication; establish a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness and that makes maximum use of COTS software; implement a structured IT selection process that includes the elements of practices 2.12 and 2.13; establish a policy requiring modularized IT investments; implement an IT investment management process that includes a control phase that addresses, for example, the elements of practices 2.15, 2.16, and 2.17; and implement an IT investment management process that includes an evaluation phase that addresses, for example, the elements of practice 2.18.

To improve the agency’s IT strategic planning/performance measurement processes, we recommend that the Administrator of the U.S. Agency for International Development take the following two actions: include in the agency’s annual performance plan the resources and time periods required to implement the information security program plan required by FISMA; and develop a mechanism for benchmarking the agency’s IT management processes, when appropriate.

To improve the agency’s IT investment management processes, we recommend that the Administrator of the U.S. Agency for International Development take the following nine actions: develop work processes and procedures for the agency’s IT investment management board; establish a policy requiring that IT investments be in compliance with the agency’s enterprise architecture; develop a policy requiring that proposed IT investments support work processes that have been simplified or redesigned to reduce costs and improve effectiveness and that makes maximum use of COTS software; include net risks, risk-adjusted return-on-investment, and qualitative criteria in the agency’s project selection criteria; within the agency’s IT investment selection process, implement a mechanism to identify possible conflicting, overlapping, strategically unlinked, or redundant proposals; develop a policy requiring modularized IT investments; develop decision-making rules, review projects at major milestones, and require projects to report on deviations in system capability to help guide the oversight of IT investments by the agency’s investment management board during the control phase; as part of the agency’s control phase, employ an early warning mechanism, and use independent verification and validation reviews, when appropriate; and require that corrective actions be undertaken, tracked, and reported to the investment management board for under-performing projects.
To improve the department’s IT strategic planning/performance measurement processes, we recommend that the Secretary of Veterans Affairs take the following four actions: include in the department’s annual performance plan the resources required to implement the information security program plan required by FISMA; develop a documented process to measure progress against the department’s IT goals, and assign roles and responsibilities for achieving these goals; develop performance measures related to the effectiveness of controls to prevent software piracy; and track actual-versus-expected performance for the department’s enterprisewide IT performance measures in its IRM plan. To improve the department’s IT investment management processes, we recommend that the Secretary of Veterans Affairs take the following two actions: document the alignment and coordination of responsibilities of the department’s various IT investment management boards for decision making related to IT investments, including cross-cutting investments; and within the agency’s IT investment selection process, implement a mechanism to identify possible conflicting, overlapping, strategically unlinked, or redundant proposals, and prioritize its IT investments. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated December 5, 2003. 1. DOD provided its annual report to the President and the Congress, which included its fiscal year 2004 performance plan. Based on a review of this plan, we modified our report. 2. We disagree that the cited objective fully addresses this issue. Specifically, although this objective addresses e-government, the wording of the objective, its description, and the discussion of related initiatives do not explicitly address service delivery to the public. Accordingly, we did not modify our report. 3. Our review of the acquisition management process documentation provided by the Navy did not support that the department’s selection criteria include net risks and risk-adjusted return-on-investment. Accordingly, we did not modify our report. The following are GAO’s comments on the Department of Education’s letter dated December 10, 2003. 1. We agree that Education requires IT investments to have performance measures. However, our practice dealt with enterprise-level measures, such as those found in the department’s IRM plan, not project-specific measures. Education reported that the performance measures in its IRM plan do not measure how IT contributes to program productivity and the efficiency and effectiveness of agency operations. Accordingly, we did not modify our report. 2. We modified our assessment of practice 2.6 in this report and deleted the related recommendation based on our evaluation of additional documentation provided by Education. The following are GAO’s comments on the Environmental Protection Agency’s (EPA) letter dated December 9, 2003. 1. As we reported and EPA acknowledged, its documentation on IT strategic planning and investment management was not complete or finalized. For example, the partial rating we gave EPA for its IT management and strategic planning practices—practices 1.1 and 1.2— matched the agency’s own self-assessment in these areas. 
Specifically, our review of planning documents cited by EPA in its self-assessment found that while the agency had documented agencywide roles and responsibilities for planning and managing IT resources and had documented its process to integrate the IT investment management process with the budget, EPA had not addressed other key elements of the practices. As an example, EPA had not fully documented the method by which it defines program information needs and develops strategies, systems, and capabilities to meet those needs. Since EPA provided no additional documentation, our practice assessment and our related recommendations remain unchanged. 2. As stated in our report, practice 1.7 refers to the documentation of the process used to develop IT goals and measures and the responsibility for achieving them. As EPA states in its comments, it is currently working on documenting this process. Accordingly, we did not modify our report. The following are GAO’s comments on the General Services Administration’s (GSA) letter dated December 9, 2003. 1. We based our evaluation on the agency’s self-assessment and comments made by GSA’s Director, Office of Policy and Plans. However, based on GSA’s representation in commenting on our draft, we changed our evaluation of the referenced practice. 2. The Clinger-Cohen Act requires agencies to include in their information resources management (IRM) plans the identification of any major IT acquisition programs, or any phase or increment of those programs, that significantly deviated from established cost, performance, or schedule goals. As we acknowledge in this report, agencies, which would include GSA, identified other mechanisms that they use to track and report cost, schedule, and performance deviations. Moreover, we rated agencies as “partially” rather than “no” on this practice to take into account that the agency had the required information, although it was not in the prescribed format. Accordingly, we did not modify our report. 3. The Federal Information Security Management Act of 2002 requires agencies to include in the performance plans required by the Government Performance and Results Act the resources and time periods to implement their information security program. As we noted in this report, agencies, which would include GSA, commonly stated that they had this information but that it was in another document. Nevertheless, this does not negate the need for having the agency report to the Congress in the form that the Congress requires. This is particularly important since performance plans are public documents. Accordingly, we did not modify our report. 4. GSA’s new documentation illustrates that it has performance measures for each of the IT goals in its IRM plan. However, GSA did not provide evidence that it was tracking actual versus expected performance for measures associated with one of its goals. We revised our report to reflect GSA’s new documentation and our evaluation. 5. We revised our report on the basis of this new documentation. 6. GSA’s highest-level IT investment management board is its Executive Committee. GSA did not provide a charter or any other evidence of policies and procedures for this committee. We therefore did not modify our report. 7. The additional documentation provided by GSA (1) does not address decision-making rules and (2) illustrates that GSA uses a monthly project control report on cost, schedule, and performance status, but the report does not explicitly address deviations in system capability. 
In addition, according to GSA’s capital planning and investment control order, the format of the report is left to the applicable organization, thereby making it less likely that the investment management boards are obtaining consistent information. We therefore did not modify our report. 8. We agree that GSA’s capital planning and investment control order requires projects that have significant variances to provide “get well” plans and that monthly control reports are used to report on project cost, schedule, and performance status. However, it is not clear that these status reports can be used to systematically track corrective actions. Moreover, according to GSA’s capital planning and investment control order, the format of the monthly control report is left to the applicable organization, thereby making it less likely that the status of corrective actions is being consistently reported. We therefore did not modify our report. 9. See comment 8. 10. We modified our recommendations based on our evaluation of GSA’s documentation. See comment 4 for our assessment. 11. Executive Order 13103 requires agencies to use software piracy performance measures that comply with guidance issued by the federal CIO Council. The Council, in turn, called on the agencies to develop such measures. The additional documentation that GSA provided was an order requiring agency employees to use properly licensed software, but it does not include performance measures that would demonstrate that this requirement is being honored. Measuring how well agencies are combating software piracy is important because it can verify that the controls that they have put in place are working. Accordingly, we did not change this part of the recommendation. 12. We modified our recommendation to reflect that GSA requires projects that have significant variances to develop corrective action plans. However, the other elements of the recommendation pertaining to the tracking and reporting on corrective actions remain outstanding. See comment 8 for additional information. The following are GAO’s comments on the Department of Justice’s letter dated December 2, 2003. 1. GAO has ongoing work looking at OMB’s initiative. However, the Federal Information Security Management Act of 2002 requires agencies to include in the performance plans required by the Government Performance and Results Act the resources and time periods to implement their information security programs. Accordingly, we did not change the recommendation. The following are GAO’s comments on the Department of Labor’s letter dated December 2, 2003. 1. Because Labor did not disagree with our characterization of its IT goal, no changes were made to our report. 2. We agree with Labor’s characterization of its IT strategic goal and order 3-2003. Nevertheless, the recommendation, and related practice 1.7, refers to the documentation of the process used to develop IT goals and measures and the responsibility for achieving them. Labor neither provided documentation of such a process nor took issue with our assessment of practice 1.7, in which we stated that the agency did not have this practice in place. Moreover, Labor’s self-assessment referenced a draft performance measurement guidebook and quarterly review process in support of this practice. However, these mechanisms relate to performance measures associated with IT projects, not Labor’s enterprisewide IT goal. 
Finally, as we noted in our report, unlike other agencies in our review, Labor does not have goals in its IRM plan. Accordingly, we did not change this recommendation. The following are GAO’s comments on the National Aeronautics and Space Administration’s (NASA) letter dated December 8, 2003. 1. Our practice dealt with enterprise-level measures, not project-specific measures. In addition, although we agree that NASA’s IRM plan included performance measures, the agency generally does not track actual-versus-expected performance for these enterprisewide measures. The following are GAO’s comments on the Social Security Administration’s (SSA) letter dated December 3, 2003. 1. We agree that SSA needs to consider the level of detail that is appropriate to include in its performance plans so as not to compromise security. 2. We requested documentation to support SSA’s assertion that it has performance measures associated with the performance of IT programs (e.g., the percentage of IT projects that are meeting cost, schedule, and performance goals), but none were provided. Accordingly, we did not modify our report. 3. We agree that it is not appropriate to include measures related to the effectiveness of controls to prevent software piracy in agency performance plans. Neither our practice nor our recommendation specifies the document or process that should be used to address software piracy. 4. As we noted in this report, SSA performs benchmarking in an ad hoc manner. We believe that taking a more systematic approach is necessary to ensure that benchmarking is performed at suitable times using an appropriate methodology. Without a systematic approach, it is not possible to validate that the agency performs benchmarking “when appropriate.” Accordingly, we did not modify our report. 5. References to OMB’s Circular A-11 in agency policy documentation alone do not ensure that these practices are met. In particular, we believe that agency policies related to modularized IT investments should be explicit and that it is neither prudent nor practical to rely on users of SSA’s documentation of its capital planning and investment control process to review a secondary source. The following are GAO’s comments on the Department of State’s letter dated December 9, 2003. 1. We based our evaluation on the agency’s draft Capital Planning and Investment Control Program Guide that was provided during our review. However, based on State’s newly finalized Capital Planning and Investment Control Program Guide, we changed this evaluation in our report. 2. We based our evaluation on the agency’s draft Capital Planning and Investment Control Program Guide that was provided at the time of our review. Based on the final version of the Capital Planning and Investment Control Program Guide provided by State in its response, we modified the language in our report, as appropriate. 3. See comment 2. 4. See comment 2. The following are GAO’s comments on the U.S. Agency for International Development’s (USAID) letter dated December 9, 2003. 1. References to OMB’s Circular A-11 in agency policy documentation alone do not ensure that these practices are met. In particular, we believe that agency policies related to practices 2.11 and 2.14 should be explicit and that it is neither prudent nor practical to rely on users of USAID’s directives to review a secondary source. 
Regarding USAID’s comments that it uses the criteria in practices 2.11 and 2.14 as part of its evaluation and scoring of investments, we agree that the agency does ask some questions on the use of commercial off-the-shelf (COTS) software and whether the agency uses “successive chunks” within its proposed IT investment scoring model. However, addressing these criteria as part of a scoring model does not address our practice because scoring projects on the basis of the questions asked does not necessarily preclude projects from continuing if they do not fully meet the criteria. Additionally, the questions asked as part of the scoring model do not fully meet the requirements of the practices. Accordingly, we did not modify our report. The following are GAO’s comments on the Department of Veterans Affairs’ (VA) letter dated December 5, 2003. 1. VA’s response indicates that the department will address this recommendation in the future and, therefore, we did not remove this recommendation. 2. See comment 1. 3. See comment 1. 4. VA’s monthly performance reports track project-specific measures, not enterprisewide IT performance measures. VA’s draft IRM plan states that it will establish metrics to measure performance for IT strategic initiatives. However, progress toward doing so was not addressed by VA in its comments. Therefore, we do not believe this recommendation has been fully addressed. 5. See comment 1. 6. Although VA describes a process followed for reviewing investment proposals, it did not provide evidence to support that this practice was actually followed. In addition, VA did not address the element of our recommendation related to prioritizing its IT investments. Therefore, we did not remove this recommendation. 7. On the basis of the additional information provided, we agree that the recommendation has been implemented and modified our report accordingly. Joseph P. Cruz, Lester P. Diamond, Laurence P. Gill, David B. Hinchman, Robert G. Kershaw, David F. Plocher, Susan S. Sato, and Patricia D. Slocum made key contributions to this report.
Over the years, the Congress has promulgated laws and the Office of Management and Budget and GAO have issued policies and guidance, respectively, on (1) information technology (IT) strategic planning/performance measurement (which defines what an organization seeks to accomplish, identifies the strategies it will use to achieve desired results, and then determines how well it is succeeding in reaching results-oriented goals and achieving objectives) and (2) investment management (which involves selecting, controlling, and evaluating investments). To obtain an understanding of the government's implementation of these key IT management policies, congressional requesters asked GAO to determine the extent to which 26 major agencies have in place practices associated with key legislative and other requirements for (1) IT strategic planning/performance measurement and (2) IT investment management. Agencies' use of 12 IT strategic planning/performance measurement practices (identified on the basis of legislation, policy, and guidance) is uneven. For example, agencies generally have IT strategic plans and goals, but these goals are not always linked to specific performance measures that are tracked. Without enterprisewide performance measures that are tracked against actual results, agencies lack critical information about whether their overall IT activities are achieving expected goals. Agencies' use of 18 IT investment management practices that GAO identified is also mixed. For example, the agencies largely have IT investment management boards, but no agency had the practices associated with the control phase fully in place. Executive-level oversight of project-level management activities provides organizations with increased assurance that each investment will achieve the desired cost, benefit, and schedule results. Agencies cited a variety of reasons for not having practices fully in place, such as that the chief information officer position had been vacant, that not including a requirement in guidance was an oversight, and that the process was being revised, although they could not always provide an explanation. Regardless of the reason, these practices are important ingredients for ensuring effective strategic planning, performance measurement, and investment management, which, in turn, make it more likely that the billions of dollars in government IT investments are wisely spent.
Millions of current and future retirees rely on private or public DB pension plans, which promise to pay retirement benefits that are generally based on an employee’s salary and years of service. The financial condition of these plans—and hence their ability to pay promised retirement benefits when such benefits are due—depends on adequate contributions from employers and, in some cases, employees, as well as prudent investments that preserve principal and yield an adequate rate of return over time. The plan sponsor must make required contributions to the plan that are intended to ensure it is adequately funded to pay promised benefits. To maintain and increase plan assets, fiduciaries of public and private sector pension plans invest in assets that are expected to grow in value or yield income. In making investments, DB plan managers consider a plan’s benefit payment requirements and balance the desire to maximize return on investment against the desire to limit the overall risk to the investment portfolio to an acceptable level. In doing so, plan fiduciaries invest in various asset classes, which traditionally have consisted mainly of stocks and bonds. Stocks offer relatively high expected long-term returns at the risk of considerable volatility, that is, the likelihood of significant short-term losses or gains. On the other hand, bonds and other fixed income investments offer a steady income stream and relatively low volatility, but lower expected long-term returns. Different proportions of these two asset classes will, therefore, provide different degrees of risk and expected return on investment. Pension fiduciaries may also invest in other asset classes or trading strategies, such as hedge funds and private equity, which are generally considered to be riskier investments, so long as such investments are prudent. Private sector pension plan investment decisions must comply with the provisions of ERISA, which stipulates fiduciary standards based on a prudent man standard. Under ERISA, plan sponsors and other fiduciaries must (1) act solely in the interest of the plan participants and beneficiaries and in accordance with plan documents; (2) invest with the care, skill, and diligence of a prudent person with knowledge of such matters; and (3) diversify plan investments to minimize the risk of large losses. Under ERISA, the prudence of any individual investment is considered in the context of the total plan portfolio, rather than in isolation. Hence, a relatively risky investment may be considered prudent if it is part of a broader strategy to balance the risk and expected return to the portfolio. In addition to plan sponsors, under the ERISA definition of a fiduciary, any other person that has discretionary authority or control over a plan asset is subject to ERISA’s fiduciary standards. The Employee Benefit Security Administration (EBSA) at Labor is responsible for enforcing these provisions of ERISA, as well as educating and assisting retired workers and plan sponsors. Another federal agency, the Pension Benefit Guaranty Corporation (PBGC), collects premiums from federally insured plans in order to insure the benefits of retirees if a plan terminates without sufficient assets to pay promised benefits. In the public sector, governments have established pension plans at state, county, and municipal levels, as well as for particular categories of employees, such as police officers, fire fighters, and teachers. 
The structure of public pension plan systems can differ considerably from state to state. In some states, most or all public employees are covered by a single consolidated DB retirement plan, while in other states many retirement plans exist for various units of government and employee groups. Public sector DB plans are not subject to funding, vesting, and most other requirements applicable to private sector DB plans under ERISA, but must follow requirements established for them under applicable state law. While states generally have adopted standards essentially identical to the ERISA prudent man standard, specific provisions of law and regulation vary from state to state. Public plans are also not insured by the PBGC, but could call upon state or local taxpayers in the event of a funding shortfall. Although there is no statutory or universally accepted definition of hedge funds, the term is commonly used to describe pooled investment vehicles that are privately organized and administered by professional managers and that often engage in active trading of various types of securities, commodity futures, options contracts, and other investment vehicles. In recent years, hedge funds have grown rapidly. As we reported in January 2008, according to industry estimates, from 1998 to early 2007, the number of funds grew from more than 3,000 to more than 9,000, and assets under management grew from more than $200 billion to more than $2 trillion globally. Hedge funds also have received considerable media attention as a result of the high-profile collapse of several hedge funds, and consequent losses suffered by investors in these funds. Although hedge funds have the reputation of being risky investment vehicles that seek exceptional returns on investment, this was not their original purpose, and is not true of all hedge funds today. Founded in the 1940s, one of the first hedge funds invested in equities and used leverage and short selling to protect or “hedge” the portfolio from its exposure to movements in the stock market. Over time, hedge funds diversified their investment portfolios and engaged in a wider variety of investment strategies. Because hedge funds are typically exempt from registration under the Investment Company Act of 1940, they are generally not subject to the same federal securities regulations as mutual funds. They may invest in a wide variety of financial instruments, including stocks and bonds, currencies, futures contracts, and other assets. Hedge funds tend to be opportunistic in seeking positive returns while avoiding loss of principal, and retaining considerable strategic flexibility. Unlike a mutual fund, which must strictly abide by the detailed investment policy and other limitations specified in its prospectus, most hedge funds specify broad objectives and authorize multiple strategies. As a result, most hedge fund trading strategies are dynamic, often changing rapidly to adjust to market conditions. Hedge funds are typically structured and operated as limited partnerships or limited liability companies exempt from certain registration, disclosure, and other requirements under the Securities Act of 1933, Securities Exchange Act of 1934, Investment Company Act of 1940, and Investment Advisers Act of 1940 that apply to other investment pools, such as mutual funds. 
For example, to allow them to qualify for various exemptions under such laws, hedge funds usually limit the number of investors, refrain from advertising to the general public, and solicit fund participation only from large institutions and wealthy individuals. The presumption is that investors in hedge funds have the sophistication to understand the risks involved in investing in them and the resources to absorb any losses they may suffer. Although many workers may be affected by losses resulting from pension fund investment in hedge funds, a pension plan counts as a single investor and thus does not prevent a hedge fund from qualifying for the various statutory exemptions. Individuals and institutions may also invest in hedge funds through funds of hedge funds, which are investment funds that buy shares of multiple underlying hedge funds. Fund of funds managers invest in other hedge funds rather than trade directly in the financial markets, and thus offer investors broader exposure to different hedge fund managers and strategies. Like hedge funds, funds of funds may be exempt from various aspects of federal securities and investment law and regulation. As with hedge funds, there is no legal or commonly accepted definition of private equity funds, but the term generally includes privately managed pools of capital that invest in companies, many of which are not listed on a stock exchange. Although there are some similarities in the structure of hedge funds and private equity funds, the investment strategies employed are different. Unlike many hedge funds, private equity funds typically make longer-term investments in private companies and seek to obtain financial returns not through particular trading strategies and techniques, but through long-term appreciation based on corporate stewardship, improved operating processes, and financial restructuring of those companies, which may involve a merger or acquisition of companies. Private equity is generally considered to involve a substantially higher degree of risk than traditional investments, such as stocks and bonds, in exchange for a potentially higher return. While strategies of private equity funds vary, most funds target either venture capital or buyout opportunities. Venture capital funds invest in young companies often developing a new product or technology. Private equity fund managers may provide expertise to a fledgling company to help it advance toward a position suitable for an initial public offering. Buyout funds generally invest in larger established companies in order to add value, in part, by increasing efficiencies and, in some cases, consolidating resources by merging complementary businesses or technologies. For both venture capital and buyout strategies, investors hope to profit when the company is eventually sold, either when offered to the public or when sold to another investor or company. Each private equity fund generally focuses on only one type of investment opportunity, usually specializing in either venture capital or buyout and often specializing further in terms of industry or geographical area. Investment in private equity has grown considerably over recent decades. According to a venture capital industry organization, the amount of capital raised by private equity funds grew from just over $2 billion in 1980 to about $207 billion in 2007, while the number of private equity funds grew from 56 to 432 funds over the same time period. 
As with hedge funds, private equity funds operate as privately managed investment pools and have generally not been subject to Securities and Exchange Commission (SEC) examinations. Pension plans typically invest in private equity through limited partnerships in which the general partner develops an investment strategy and limited partners provide the large majority of the capital. After creating a new fund and raising capital from the limited partners, the general partner begins to invest in companies that will make up the fund portfolio (see fig. 1). Limited partners have both limited control over the underlying investments and limited liability for potential debts incurred by the general partners through the fund. Similar to hedge funds, private equity funds may be structured to qualify for exemptions from certain registration and disclosure requirements of federal securities laws; for example, by refraining from advertising to the general public. The majority of investments in private equity funds come from wealthy individuals and institutional investors, such as endowments, banks, corporations, and pension plans. According to several recent surveys, investments in hedge funds and private equity are typically a small portion of total plan assets—about 4 to 5 percent on average—but a considerable and growing number of plans invest in them. While investment in hedge funds is less common than private equity, the number of plans with investments in hedge funds has experienced greater growth in recent years. Furthermore, survey data show that larger plans, measured by total plan assets, are more likely to invest in hedge funds and private equity compared to mid-size plans. Survey data on plans with less than $200 million in assets are unavailable and, thus, the extent to which small plans invest in hedge funds and private equity is unknown. Individual plans’ hedge fund or private equity investments typically comprise a small share of total plan assets. According to a Pensions & Investments survey of large plans (as measured by total plan assets), the average allocation to hedge funds among plans with such investments was about 4 percent in 2007. Similarly, among plans with investments in private equity, the average allocation was about 5 percent. An earlier survey by Pyramis Global Advisors, which included mid- to large-size plans, found an average allocation of 7 percent for hedge funds and 5 percent for private equity in 2006. Although the majority of plans with investments in hedge funds or private equity have small allocations to these assets, a few plans have relatively large allocations, according to the Pensions & Investments survey (see fig. 2). Of the 62 plans that reported investments in hedge funds in 2007, 12 plans had allocations of 10 percent or more and, of those, 3 plans had allocations of 20 percent or more. The highest reported hedge fund allocation was 30 percent of total assets. Large allocations to private equity were even less common. A total of 106 surveyed plans reported investments in private equity in 2007, of which 11 plans had allocations of 10 percent or more and, of those, 1 plan had an allocation of about 20 percent. Two recent surveys of pension plans indicate that a considerable number of plans invest in hedge funds or private equity. As seen in table 1, from about 21 to 27 percent of all plans surveyed, which included mid- to large-size plans, held investments in hedge funds as of 2006, according to data from Greenwich Associates and Pyramis. 
Both surveys reveal that a greater share of private sector plans invested in hedge funds compared to public sector plans. The Greenwich survey also found that hedge fund investment was most common among collectively bargained plans, although the number of these plans surveyed was substantially smaller as there are relatively few of these plans in operation. Nearly half—8 out of 17—of collectively bargained plans surveyed invested in hedge funds. Investment in private equity is much more prevalent than investment in hedge funds among plans surveyed. The Greenwich survey found that about 43 percent of plans held investments in private equity in 2006, while the Pyramis survey found that 41 percent of plans had such investments. Both surveys also show that a larger percentage of public sector plans are invested in private equity compared to private sector plans. As with hedge funds, the Greenwich survey found that investment in private equity was most common among collectively bargained plans. More than two-thirds—12 out of 17—of collectively bargained plans surveyed invested in private equity. While pension plan investment in hedge funds is less prevalent than investment in private equity, hedge fund investment has increased much more in recent years. According to Greenwich Associates, from 2004 to 2006, the percent of plans with investments in hedge funds grew from just under 20 percent to almost 27 percent. Meanwhile, the percent of plans with investments in private equity increased at a lesser rate, from about 39 percent in 2004 to 43 percent in 2006. A survey by Pensions & Investments found that this comparison was more pronounced over a 6-year period (see fig. 3). Among larger plans surveyed by Pensions & Investments, the percent of plans with investments in hedge funds grew from about 11 percent in 2001 to nearly 47 percent in 2007. Over the same time period, investments in private equity remained more prevalent, but grew much more slowly. While pension plan investment in hedge funds has experienced greater growth in recent years, pension plan investment in private equity increased markedly following a 1979 Labor clarification that plans may make some investments in riskier assets, such as venture capital and buyout funds. Prior to 1979, such investments were generally viewed as a potential violation of ERISA. Labor clarified that ERISA’s prudent man standard applies to investment decisions in the context of the entire portfolio rather than in isolation. Following the Labor guidance, pension plan investments in venture capital and buyout funds experienced rapid growth. One study reported that pension plans’ share of venture capital investments grew from 15 percent in 1978 to 50 percent in 1986, during which time overall investment in venture capital increased more than 10-fold from $427 million to $4.4 billion. More recently, the National Venture Capital Association estimates that pension plans held 42 percent of the approximately $20 billion invested in domestic venture capital funds in 2004. Survey data show that larger plans, measured by total plan assets, are more likely to invest in hedge funds and private equity compared to mid-size plans. Greenwich found that only 16 percent of mid-size plans—those with $250 to $500 million in total assets—were invested in hedge funds, compared to about 31 percent of the largest plans—those with $5 billion or more in assets (see fig. 4). 
Similarly, only about 16 percent of mid-size plans held investments in private equity, whereas slightly over 71 percent of the largest plans held such investments. The Pensions & Investments survey of large plans corroborates this pattern—about 47 percent of plans held investments in hedge funds and nearly 80 percent held investments in private equity in 2007 (see fig. 3). Survey data on plans with less than $200 million in assets are unavailable and, in the absence of this information, it is unclear to what extent these plans invest in hedge funds and private equity. Representatives of investment consulting firms and industry experts told us that they suspect few small plans have such investments, but they could not provide data to confirm this. A representative of a large investment consulting firm explained that smaller plans face inherent restrictions on investing in hedge funds and private equity funds because the required minimum investments for these funds are often too high to allow small plans to make such investments while remaining sufficiently diversified. While pension plans seek important benefits through investments in hedge funds, hedge funds also pose challenges that demand greater expertise and effort than investments in more traditional assets. Pension plans told us that they invest in hedge funds to achieve one or more of several goals, including lessening the volatility of returns, obtaining returns greater than those expected in the stock market, and/or diversifying the portfolio by investing in a vehicle that will not be correlated with other asset classes in the portfolio. While all the pension plans we contacted that had invested in hedge funds expressed general satisfaction with these investments, hedge fund investments nonetheless pose significant challenges to pension plan fiduciaries, beyond the inherent challenges of investing in more familiar asset classes such as stocks and bonds. Plan officials and others outlined steps to limit these and other challenges, such as conducting in-depth due diligence reviews or investing through funds of funds, which can mitigate some of the main difficulties of hedge funds. Such steps entail greater expense, effort, or expertise than is required for more traditional investments, and some pension plans may not be equipped to meet these demands. Pension plans’ investments in hedge funds resulted in part from stock market declines and disenchantment with traditional investment management in recent years. Most pension plan officials we contacted cited the steep declines in the public equity market early in this decade as a reason for initiating or expanding hedge fund investments. From August 2000 to February 2003, the stock market, as measured by the Standard and Poor’s 500 index, declined in value by about 45 percent, and according to plan sponsors and others, this massive market decline severely affected pension plans that were deeply invested in the U.S. stock market. For example, representatives of one public pension plan told us that this market decline led to the largest annual loss in the plan’s history and resulted in its first hedge fund investments in 2003. A representative of another large public pension plan told us that the main motive for initially investing in hedge funds was the weak equity markets early in this decade, and the perceived need for greater exposure to alternative assets that relied less on the stock market for returns. 
At the same time, some plan officials also cited disenchantment with traditional “long-only” investment managers and questioned whether such managers delivered returns that justified the fees the managers charge. Officials with most of the plans we contacted indicated that they invested in hedge funds, at least in part, to reduce the volatility of returns. According to a representative of an investment consulting firm, this is a common objective of pension plans that invest in hedge funds. One plan official explained the importance of reducing volatility by noting that even in periods of relatively good stock returns, volatility can eat away at the compounding effect of returns over time, and substantially reduce long-term growth. Another plan official said that in trying to reduce volatility through hedge funds, the plan expected that certain hedge fund returns might lag behind stock market indices during bull (rising) markets, but also expected that it would not suffer nearly the same declines during bear (falling) markets. Officials of several pension plans told us that they sought to obtain returns greater than the returns of the overall stock market through at least some of their hedge fund investments. For example, officials of one pension plan explained that one of the overall goals of its hedge fund portfolio strategy was to obtain an annual return of 2.5 percentage points greater than returns in the stock market, as measured by the S&P 500 stock index. Officials of pension plans that we contacted also stated that hedge funds are used to help diversify their overall portfolio and provide a vehicle that will, to some degree, be uncorrelated with the other investments in their portfolio. This reduced correlation has a number of benefits, including reduction in overall portfolio volatility and risk. For example, officials of one pension plan told us that hedge funds are attractive because they are not solely dependent on equity and fixed income markets for their returns, thus reducing the overall risk of the investment portfolio. At the time of our contacts with pension plans in 2007, the 15 pension plans with hedge fund investments indicated mixed but generally positive results. Among officials of these plans, all said that their hedge fund investments had generally met or exceeded expectations, although some noted mixed experiences. For example, one plan explained that it had dropped some hedge fund investments because they had not performed at or above the S&P 500 benchmark. Also, this plan redeemed its investment from other funds because they began to deviate from their initial trading strategies. Further, officials of several plans noted that their venture into hedge funds was only a few years old, and, at the time of our contact, their investment had not yet been tested by trying economic conditions or financial events, such as a significant stock market decline. Nonetheless, representatives of all of the plans with hedge fund investments indicated that they planned to maintain or increase their portfolio allocation to hedge funds in the foreseeable future. Pension plans face a number of challenges in hedge fund investing beyond those of more traditional investing, including specific investment risks, limited transparency and liquidity, and risks related to the operations of the hedge fund. While any plan investment may fail to deliver expected returns over time, hedge fund investments pose investment challenges beyond those posed by traditional investments. 
These include (1) reliance on the skill of hedge fund managers, who often have broad latitude to engage in complex investment techniques that can involve various financial instruments in various financial markets; (2) use of leverage, which amplifies both potential gains and losses; and (3) higher fees, which require a plan to earn a higher gross return to achieve a higher net return. Hedge funds are among the most actively managed investments, and thus returns are often dependent not on broad market movements, but on smaller moves in the markets they invest in and the skills and abilities of the hedge fund manager. For example, hedge fund managers may seek to profit through complex and simultaneous positions in stocks, bonds, options contracts, futures contracts, currencies, and other vehicles, and can abruptly change their positions and trading tactics in order to achieve desired returns as changing market conditions warrant. Representatives of some pension plans that had not invested in hedge funds cited concerns about the ability of hedge fund managers to accomplish this over the long term. One plan official said the plan had avoided hedge funds in part because of doubt that the managers’ skills could generate an acceptable return over time. Instead, this plan seeks to capture the increase in the overall stock market. Regulatory officials and plan sponsors also said that, given the growth of the hedge fund industry in recent years, the market inefficiencies from which hedge funds profit may diminish. For example, SEC noted in a 2004 regulatory proposal that the capacity of hedge fund advisers to generate large returns is limited because the use of similar financial strategies by other hedge funds narrows spreads and decreases profitability. Hedge fund managers may use leverage—that is, use borrowed money or other techniques—to potentially increase an investment’s value or return without increasing the amount invested. While registered investment companies are subject to leverage limits, hedge funds can make relatively unrestricted use of leverage to magnify the expected returns of an investment. At the same time that leverage can magnify profits, it can also magnify losses to the hedge fund if the market goes against the fund’s expectations. Concerns about leverage were cited by several pension plans either as an important consideration in selecting a hedge fund or as a reason for avoiding them altogether. For example, one public pension plan told us that it has avoided hedge funds because when hedge funds hit “potholes,” the potholes are deep because of the high amounts of leverage used. The challenge of relying on manager skill for a desired rate of return is compounded by the costly fee structure that is typical of the hedge fund industry. Whereas mutual fund managers reportedly charge a fee of about 1 percent of assets under management, hedge fund managers often charge a flat fee of 2 percent of total assets under management, plus a performance fee of about 20 percent of the fund’s annual profits. The impact of such fees can be considerable. As figure 5 illustrates, an annual return of 12 percent falls to about 7.6 percent after fees are deducted; this arithmetic is sketched below. Several pension plans cited the costly fee structure as a major drawback to hedge fund investing. For example, representatives of one plan that had not invested in hedge funds said that they are focused on minimizing transaction costs of their investment program, and the hedge fund fee structure would likely not be worth the expense. 
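The arithmetic behind the figure is simple to reproduce. The following is a minimal sketch, assuming a 2 percent flat fee on assets and a 20 percent performance fee charged on the gross annual return, with no hurdle rate or high-water-mark provision; the function name and fee terms are illustrative assumptions, not terms drawn from any particular fund’s contract.

```python
# Minimal sketch of "2 and 20" fee arithmetic, under the assumptions
# stated above (performance fee charged on the gross return; no hurdle
# or high-water mark). Hypothetical numbers, for illustration only.

def net_return(gross, flat_fee=0.02, perf_rate=0.20):
    """Investor's net return after the flat fee and the performance fee."""
    performance_fee = perf_rate * max(gross, 0.0)  # no performance fee on a loss
    return gross - flat_fee - performance_fee

# A 12 percent gross return nets about 7.6 percent, consistent with figure 5:
# 12% - 2% (flat) - 2.4% (20% of 12%) = 7.6%
print(f"{net_return(0.12):.1%}")  # prints 7.6%
```

Under these assumptions, a hedge fund must out-earn a conventional manager by several percentage points of gross return simply to deliver the same net return, which is the point plan officials raised about fees.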
On the other hand, an official of another plan noted that, as long as hedge funds add value net of fees, the plan found the higher fees acceptable. Because many hedge funds may own thinly traded securities and derivatives whose valuation can be complex, and in some cases subjective, a plan may not be able to obtain timely information on the value of assets owned by a hedge fund. Further, hedge fund managers may decline to disclose information on asset holdings and the net value of individual assets largely because release of such information could compromise their trading advantage. In addition, even if hedge fund managers were to provide detailed positions, plan sponsors might be unable to fully analyze and assess the prospective return and risk of a hedge fund. As a consequence, a plan may not be able to independently ascertain the value of its hedge fund investment or fully assess the degree of investment risk posed by its hedge fund investment. Although we noted in January 2008 that hedge funds have improved disclosure and transparency about their operations due to the demands of institutional investors, several pension plans cited limited transparency as a prime reason they had chosen not to invest in hedge funds. For example, representatives of one plan told us that they had considered investing in hedge funds several years ago, but that most of the hedge funds they contacted would not provide position-level information, and that they were reluctant to make such an investment without this information. Hedge funds offer investors relatively limited liquidity, that is, investors may not be able to redeem a hedge fund investment on demand because of a hedge fund’s redemption policy. Hedge funds often require an initial “lockup” of a year or more, during which an investor cannot cash out of the hedge fund. After the initial lockup period, hedge funds offer only occasional liquidity, sometimes with a pre-notification requirement. While some pension plans told us that liquidity limitations are not a significant concern because the plan has other liquid assets to pay benefits, they nonetheless can pose certain disadvantages. For example, liquidity limitations can inhibit a plan’s ability to minimize a hedge fund investment loss. As one state official noted after a state fund had suffered losses in the wake of the 2006 collapse of Amaranth, even when a plan learns that a hedge fund is losing value, various lockup provisions often make it difficult to promptly withdraw from the investment. Further, an investor’s rights with regard to cashing out may not be entirely clear from the written contract. According to an investigative study by a grand jury of one pension plan’s experience with a failed hedge fund, the contracts can be dense with legal language, which may make basic terms and conditions difficult to understand, especially with regard to withdrawal provisions. Further, the study noted that contracts can delegate immense discretionary authority to the hedge fund manager to change conditions and rules. Pension plans investing in hedge funds are also exposed to operational risk—that is, the risk of investment loss due not to a faulty investment strategy but to inadequate or failed internal processes, people, and systems, or to problems with external service providers. 
Operational problems can arise from a number of sources, including inexperienced operations personnel, inadequate internal controls, lack of compliance standards and enforcement, errors in analyzing, trading, or recording positions, or outright fraud. According to a report by an investment consulting firm, because many hedge funds engage in active, complex, and sometimes heavily leveraged trading, a failure of operational functions such as processing or clearing one or more trades may have grave consequences for the overall position of the hedge fund. Concerns about some operational issues were noted by SEC in a 2003 report on the implications of the growth of hedge funds. For example, the 2003 report noted that SEC had instituted a significant and growing number of enforcement actions involving hedge fund fraud in the preceding 5 years. Further, SEC noted that while some hedge funds had adopted sound internal controls and compliance practices, in many other cases, controls may be very informal, and may not be adequate for the amount of assets under management. Similarly, a recent Bank of New York paper noted that the type and quality of operational environments can vary widely among hedge funds, and investors cannot simply assume that a hedge fund has an operational infrastructure sufficient to protect shareholder assets. Several pension plans we contacted also expressed concerns about operational risk. For example, one plan official noted that the consequences of operational failure are larger in hedge fund investing than in conventional investing. The official explained that a failed long trade in conventional investing has relatively limited consequences, but a failed trade that is leveraged five times is much more consequential. Representatives of another plan noted that back office and operational issues became deal breakers in some cases. For example, they said one fund of funds looked like a very good investment, but concerns were raised during the due diligence process. These officials noted, for example, the importance of a clear separation of the investment functions and the operations and compliance functions of the fund. One official added that some hedge funds and funds of funds are focused on investment ideas at the expense of important operations components of the fund. Pension plans that invest in hedge funds take various steps to mitigate the risks and challenges posed by hedge fund investing, including developing a specific investment purpose and strategy, negotiating important investment terms, conducting due diligence, and investing through funds of funds. Such steps require greater effort, expertise, and expense than is required for more traditional investments. As a result, some plans, especially smaller plans, may not have the resources to take the steps necessary to address these challenges. Discussions with pension plan officials revealed the importance of defining a clear purpose and strategy for their hedge fund investments. As one pension fiduciary noted, plan managers should define exactly why they want to invest in hedge funds. He added that there are many different possible hedge fund strategies, and wanting to invest in hedge funds to obtain the large returns that other investors have reportedly obtained is not a sufficient reason. Most of the 15 pension plans with hedge fund investments that we contacted described one or more strategies for their hedge fund investments. 
For example, an official of one state plan told us that the plan invested only in long-short hedge fund strategies, whereas other plans use multiple strategies. Our contacts with plan officials and others also highlighted the importance of diversification. All of the plans with hedge fund investments that we contacted invested either in multiple individual hedge funds or through funds of funds, which are designed to provide diversification across many underlying funds. Some plans described specific diversification requirements and spread their hedge fund investment across many funds to limit exposure to one or a small number of hedge funds. For example, one plan determined that no more than 15 percent of its hedge fund portfolio would be with a single hedge fund manager and that no more than 40 percent would be invested in a particular hedge fund investment strategy. Our contacts with plan officials and others also highlighted the importance of identifying specific investment terms to guide hedge fund investing and ensuring that the hedge fund investment contract complies with these criteria. These can include fee structure and conditions, degree of transparency, valuation procedures, redemption provisions, and degree of leverage employed. For example, pension plans may want to ensure that they will not pay a performance fee unless the value of the investment passes a previous peak value of the fund shares—known as a high water mark; a simple sketch of how such a provision works appears below. Some plans we contacted also specified leverage limits for their hedge funds. For example, one public plan that we contacted has established specific leverage limits for each of 10 hedge fund strategies employed by its funds of funds—ranging from an upper limit of 2 times invested capital for one strategy to 20 times invested capital for another. Once decided upon, these and other terms of the investment can be used as criteria in the hedge fund search, and if necessary, negotiated with the hedge fund or fund of funds manager. Pension plans take steps to mitigate the challenges of hedge fund investing through an in-depth due diligence and ongoing monitoring process. While plans conduct due diligence reviews of other investments as well, such reviews are especially important when making hedge fund investments, because of hedge funds’ complex investment strategies, the often small size of hedge funds, and their more lightly regulated nature, among other reasons. Due diligence can be a wide-ranging process that includes a review and study of the hedge fund’s investment process, valuation, and risk management. The due diligence process can also include a review of back office operations, including a review of key staff roles and responsibilities, the background of operations staff, the adequacy of computer and telecommunications systems, and a review of compliance policies and procedures. Representatives of several plans told us they mitigate several of the major hedge fund challenges by investing through funds of funds, which are investment funds that buy shares of multiple underlying hedge funds. Funds of hedge funds provide plan investors diversification across multiple hedge funds, thereby having the potential to mitigate investment risk. For example, one plan fiduciary told us the plan reduces investment risk by investing in a fund of funds that diversifies its hedge fund investments into at least 40 underlying hedge funds. 
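To make the high-water-mark provision described above concrete, the following is a minimal sketch, assuming a 20 percent performance fee and hypothetical fund values; neither the numbers nor the fee rate come from any plan’s actual contract.

```python
# Minimal sketch of a high-water-mark provision, using hypothetical
# values: the performance fee accrues only on gains above the fund's
# previous peak value, so an investor does not pay a second time for
# merely recovering earlier losses.

def performance_fee(value, high_water_mark, rate=0.20):
    """Fee for the period: a share of gains above the prior peak."""
    return rate * max(value - high_water_mark, 0.0)

mark = 100.0                     # initial investment value and first peak
for value in (90.0, 105.0):      # year 1: a loss; year 2: a recovery
    fee = performance_fee(value, mark)
    mark = max(mark, value)      # the high-water mark only ratchets upward
    print(f"value={value:.0f}  fee={fee:.2f}  high-water mark={mark:.0f}")
```

In this sketch no fee is owed in the down year, and in the recovery year the fee applies only to the 5 points of gain above the prior peak rather than to the full 15-point rebound.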
Beyond diversification, by investing in a fund of funds, a pension plan relies on the fund of funds' manager to conduct negotiations, due diligence, and monitoring of the underlying hedge funds. According to pension plan officials, funds of funds can be appropriate if a plan does not have the necessary skills to manage its own portfolio of hedge funds. According to a hedge fund industry organization, investing through a fund of funds may provide a plan better access to hedge funds than the plan could obtain directly. Nonetheless, investing through funds of funds has some drawbacks. Funds of funds' managers also charge fees—for example, a 1 percent flat fee and a performance fee of between 5 and 10 percent of profits—on top of the substantial fees that the fund of funds pays to the underlying hedge funds. Funds of funds also pose some of the same challenges as hedge funds, such as limited transparency and liquidity, and the need for a due diligence review of the fund of funds firm. According to plan officials, state and federal regulators, and others, some pension plans, especially smaller plans, may not be equipped to address the various demands of hedge fund investing. For example, an official of a national organization representing state securities regulators told us that medium- and small-size plans are probably not equipped with the expertise to oversee the trading and investment practices of hedge funds. This official said that smaller plans may have a staff of only one or two people or may lack the resources to hire outside consulting expertise. A labor union official made similar comments, noting that smaller pension plans lack the internal capacity to assess hedge fund investments and that such plans may be locked out of top-performing hedge funds. Some plans may also lack the ability to conduct the necessary due diligence and monitoring of hedge fund investments. One hedge fund consultant told us that certain types of plans, such as plans that are not actively overseen by an investment committee and plans that do not have a sufficient in-house dedicated staff, should not invest in hedge funds. Similarly, a representative of a firm specializing in fiduciary education and support noted the special relationship of trust and legal responsibility that plan fiduciaries carry and concluded that the challenges of hedge fund investing are too high for most pension plans. While such plans might often be smaller plans, larger plans may also lack sufficient expertise. A representative of one pension plan with more than $32 billion in total assets noted that before investing in hedge funds, the plan would have to build up its staff in order to conduct the necessary due diligence during the fund selection process. According to plan representatives, investment consultants, and other experts we interviewed, pension plans invest in private equity primarily to attain returns superior to those of the stock market in exchange for greater risk, but such investments pose several distinct challenges. Generally, these plan representatives based their comments on significant experience investing in private equity—in some cases over 20 years—and said they had experienced returns in excess of the stock market. Nonetheless, private equity funds can require longer-term commitments of 10 years or more, and during that time, a plan may not be able to redeem its investments.
In addition, plan representatives described extensive and ongoing management of private equity investments beyond that required for traditional investments, which, as with hedge fund investments, may be difficult for plans with relatively limited resources. Unlike hedge funds, pension plan investment in private equity is not a recent phenomenon. The majority of plans included in our review began investing in private equity more than 5 years before the economic downturn of 2000 to 2001, and some of these plans have been investing in private equity for 20 years or more. According to a pension investment consultant we interviewed, due to the longer history of pension investment in private equity, it is generally regarded as a better established and proven asset class compared with other alternative investments, such as hedge funds. Pension plans invest in private equity primarily to attain returns in excess of returns from the stock market over time in exchange for the greater risk associated with these investments. Officials of each plan we interviewed said these investments had provided the expected returns. Plan representatives and investment consultants said that attaining returns superior to stocks was a primary reason for investing in private equity. Among the plan representatives we interviewed, the most commonly reported benchmark for private equity funds ranged from 3 to 5 percentage points above the S&P 500 stock index, net of fees. At the time of our interviews with plans about private equity investments, between October 2007 and January 2008, plan representatives indicated their private equity investments had met their expectations for relatively high returns, and many said they planned to maintain or increase their allocation in the future. Further, representatives of some plans told us that private equity has been their best performing asset class over time despite some individual investments that resulted in considerable losses. For example, according to documentation provided by one private sector plan, the plan had earned a net return of slightly more than 16 percent on its private equity investments over the 10-year period ending September 30, 2007, its highest return for any asset class over that time period. To a lesser degree, pension plans also invest in private equity to further diversify their portfolios. To the extent that private equity is not closely correlated with the stock market, these investments can reduce the volatility of the overall portfolio. However, some plan representatives cautioned that the diversification benefits are limited because the performance of private equity funds is still strongly, although not perfectly, linked to the stock market. Pension plans investing in private equity face several challenges and risks, which include the concentration of underlying holdings, the use of leverage, and wide variation in performance among funds. In addition, the value of the underlying holdings is difficult to estimate prior to their sale, and private equity investments entail long-term commitments, often of 10 years or more. Pension plans that invest in private equity funds face a number of investment risks beyond the risks of traditional investments. Unlike a traditional fund manager who diversifies by investing in many stocks or bonds, a private equity fund manager's strategy typically involves holding a limited number of underlying companies in its portfolio.
A single private equity fund generally invests in only about 10 to 15 companies, often in the same sector. The risks associated with such concentrated, undiversified funds may be compounded by particular aspects of the buyout and venture capital sectors. Fund managers in the buyout sector generally invest using leverage to seek greater returns, but such investments also increase investment risks. In the venture capital sector, fund managers typically make smaller investments in companies that may have a limited track record and rely on technological development and growth of the company's commercial capacity for success. In light of this, some plan officials noted that some of these companies will fail, but the success of one or more of the portfolio firms is often large enough to more than compensate for the losses of other investments. Like other investments, the returns of private equity funds are susceptible to market conditions when investments are bought or sold. When competition among private equity fund managers is intense, research has shown, a fund manager may pay more for an investment opportunity, which leads to lower net returns. In addition, the returns of a private equity fund are also affected by the condition of the market when the underlying investments are sold. For example, a private equity fund may have lower returns if its underlying holdings are sold through an initial public offering made during a period of low stock values. An official from one plan told us that private equity funds that sold investments around 2000 had lower returns because of the overall decline in the stock market. However, a representative of another plan noted that, while market conditions have some effect on the performance of a private equity fund, the effect may be mitigated by the ability of the fund managers to enact sound business plans and thereby add value to the underlying companies. Further, the challenge of meeting the high performance goals for private equity investments is compounded by the relatively high fees that private equity funds charge. Similar to hedge funds, private equity funds typically charge an annual fee of 2 percent of invested capital and 20 percent of returns, whereas mutual fund managers typically charge a fee of about 1 percent or less of assets under management. If the gross returns from a private equity fund are not sufficiently high, net returns to investors will not meet the commonly cited goal of exceeding the return of the stock market (see the sketch at the end of this passage). Another risk of investing in private equity is the variation of performance among private equity funds. Officials of an investment consulting firm, a state regulatory agency, and several pension plans noted that, compared to other asset classes, private equity has greater variation in performance among funds, and they cited research to support this view. For example, one study found that the difference in returns between the median and top quartile funds is much greater for private equity, particularly among venture capital investments, than it is for domestic stocks. Another study found that returns of private equity funds at the 75th percentile were more than seven times greater than the returns of funds at the 25th percentile.
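The interaction of the fee structure and the return goal described above can be seen with simple arithmetic. The sketch below is ours; the 8 percent assumed stock return, the 3-point premium (the low end of the benchmark range plans reported), and the flat 2-and-20 schedule with no hurdle or high water mark are all simplifying assumptions, since actual private equity fee terms vary by fund.

```python
# Minimal sketch (hypothetical figures): one-period gross-to-net return
# under a "2 and 20" fee schedule, compared against a benchmark goal.

def net_return(gross_return, mgmt_fee=0.02, carry=0.20):
    """Net return after a management fee on capital and a 20 percent
    carried-interest charge on positive gross gains."""
    gain_after_carry = gross_return - carry * max(0.0, gross_return)
    return gain_after_carry - mgmt_fee

benchmark = 0.11  # assumed 8% stock return plus a 3-point premium
for gross in (0.10, 0.15, 0.20):
    net = net_return(gross)
    verdict = "meets" if net >= benchmark else "misses"
    print(f"gross {gross:.0%} -> net {net:.1%} ({verdict} the {benchmark:.0%} goal)")
# gross 10% -> net 6.0% (misses); gross 15% -> net 10.0% (misses);
# gross 20% -> net 14.0% (meets)
```

Under these assumptions, a fund must earn gross returns several points above the benchmark before net returns clear it, which is one reason plan officials emphasized investing only with top-performing funds.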
A further challenge of investing with private equity funds—regardless of how they perform—is that they often require commitments of 10 years or more, during which a plan may not be able to redeem its investment. The longer-term commitment of private equity funds contrasts with stock and bond investments, which can be bought and sold daily, and hedge fund investments, which can be redeemed periodically. Plans must provide committed capital when called upon by the fund manager and may not redeem invested capital, or typically see any return on the investment, for at least several years. However, several plan representatives and other experts we interviewed stated that the nature of private equity funds necessitates long commitments, as returns are generated through longer-term growth strategies rather than short-term gains. A private equity fund cycle typically follows a pattern known as the "J-curve," which reflects an initial period of negative returns during which investors provide the fund with capital, followed by returns over time as investments mature (see fig. 6). Representatives of several plans noted that they expect higher returns from private equity in exchange for the long-term commitment. An additional challenge of private equity investments is their uncertain valuation during the fund cycle. Unlike stocks and bonds, which are traded and priced in public markets, plans have limited information on the value of private equity investments until the underlying holdings are sold. Some plan representatives we interviewed explained that fund managers often value underlying holdings at their initial cost until they are sold through an initial public offering or other type of sale. In some cases, private equity funds estimate the value of the fund by comparing companies in their portfolio to the value of comparable publicly traded assets. However, an investment consultant explained that such periodic valuations have limited utility. Prior to the sale of underlying investments, it is difficult to assess the value a private equity fund manager has generated. While plan officials we interviewed acknowledged the difficulty of valuing private equity investments, they generally accepted it as a trade-off for the potential benefits of the investment. Plan representatives said that they take several key steps to address the challenges of investing in private equity funds. Plan representatives and industry experts emphasized the importance of investing with top-performing funds to mitigate the wide variation in fund performance; however, they noted that access to these top-performing funds is very limited, particularly for new investors. Furthermore, due diligence and ongoing monitoring of private equity investments require substantial effort and expertise, which may be too complex or costly for plans with more limited resources. The majority of plan representatives we interviewed told us that, because of the wide variation in performance among private equity funds, they must invest with top-performing funds in order to achieve long-term returns in excess of the stock market. In addition to identifying the top-performing fund managers, plan officials explained that the selection process involves a thorough assessment of the fund manager's investment strategy. For example, an official from one state plan told us that its assessment includes a review of a fund manager's strategy for improving the operations and efficiency of its proposed investments and that the plan invests with managers that have a persuasive business model.
Plan officials stressed the importance of these steps, and some noted that investing in private equity is only worthwhile if they can invest with funds in the top quartile of performance. For example, one plan official said that if a plan does not invest with a top quartile fund, it may not obtain returns in excess of stock market returns and, thus, will not have earned a premium for assuming the risks and fees inherent in private equity fund investments. While many plans we interviewed noted the importance of investing with top-performing funds, the competition to gain access to these funds may make it difficult or impossible for some plans, especially smaller plans, to do so. Several of the plan representatives we interviewed noted that investment opportunities with top-performing funds are limited, and the demand for such opportunities is high. According to representatives of a venture capital trade association, there is greater demand to invest in venture capital funds than can be absorbed, because the venture capital sector is relatively small in size. Plan officials also noted that access to private equity funds can be limited because fund managers prefer to deal with larger, more sophisticated investors or investors who have invested in the fund manager's previous private equity funds. For example, one state official told us that the largest public plan in the state has the clout to gain access to top-performing funds, but smaller public funds in the state do not. He added that top-performing funds are very selective and generally will not respond to solicitation by smaller public funds. Plan representatives told us they further mitigate the challenges of investing in private equity funds by diversifying their investments. Plan representatives we interviewed said they invest with multiple fund managers to mitigate the risk that some managers may have mediocre or poor performance. For example, a representative of one plan said the plan would be comfortable investing about 5 percent of its private equity allocation with one carefully vetted fund manager, but investing 20 percent with one manager would be overly risky. The director of another plan told us the plan aims to ensure diversification by investing with over 130 different private equity funds, encompassing more than 80 fund managers. Plans also stagger their commitments over several years so that their private equity funds are ready to sell underlying holdings at different times. Staggering investments over time helps mitigate the risk of fund managers selling funds' underlying holdings during a period of poor market conditions, which may reduce the funds' returns to investors. For example, one plan official noted that the plan has investments in funds established in many different years, dating back to 1994. In addition, some plan officials told us they further diversify their private equity investments among funds concentrated in different industries and regions. Plan representatives said that they mitigate the long-term commitments of private equity investments by limiting the size of their allocation. Officials we interviewed at several plans noted that their allocation to private equity is only about 5 percent of the portfolio and that benefit obligations can be paid from more liquid assets. They said it is important to estimate a plan's benefit obligations and determine the need for liquid investments to ensure the plan can pay benefits when they are due (a simple version of such a check is sketched below).
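The liquidity reasoning plan officials described reduces to a back-of-the-envelope test. The sketch below is ours, with invented figures; it ignores contributions and investment income, so it overstates the cushion a real plan would need, and the 1.25 safety factor is an arbitrary illustration.

```python
# Minimal sketch (invented numbers, in millions of dollars): can projected
# benefit payments over an illiquid investment's lock-up be met from the
# plan's liquid assets alone?

def illiquid_allocation_is_safe(total_assets, illiquid_share,
                                annual_benefits, lockup_years,
                                cushion=1.25):
    """True if liquid assets cover projected benefits plus a safety margin."""
    liquid_assets = total_assets * (1.0 - illiquid_share)
    needed = annual_benefits * lockup_years * cushion
    return liquid_assets >= needed

# A $10 billion plan paying $500 million a year in benefits, weighing a
# 5 percent private equity allocation with a 10-year lock-up:
print(illiquid_allocation_is_safe(10_000, 0.05, 500, 10))  # True
# The same plan with 60 percent of assets locked up would fail the check:
print(illiquid_allocation_is_safe(10_000, 0.60, 500, 10))  # False
```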
Plan officials also noted that once liquidity needs are determined, a plan can more safely invest in an illiquid asset that cannot be used to pay benefits in the near term. Plans attempt to negotiate key terms of the investment contract to further manage the risks of investing in private equity, but, as one large public plan noted, their ability to negotiate favorable contract provisions is limited when investing with top-performing funds because investing in these funds is highly competitive. Like hedge fund investments, these contract terms may include the fee structure and valuation procedures of the fund. In addition, many plan representatives we interviewed said they can redeem their investments before the end of the originally agreed investment period if staff who are considered key to the success of the fund leave prematurely. Similar to hedge fund investments, plans take additional steps to mitigate the challenges of investing in private equity through extensive and ongoing management beyond that required for traditional investments. Plan representatives we interviewed said these steps include regularly reviewing reports on the performance of the underlying investments of the private equity fund and having periodic meetings with fund managers. In some cases, plans participate on the advisory board of a private equity fund, which provides a greater opportunity for oversight of the fund's operations and new investments; however, this involves a significant time commitment and may not be feasible for every private equity fund investment. Plan representatives and investment consultants noted that, as with hedge funds, private equity investments entail considerably greater due diligence and ongoing monitoring than traditional investments, and some plan representatives said they needed to hire an external investment consultant because the plan lacked sufficient internal resources. Funds of private equity funds, like funds of hedge funds, enable plans to address several challenges of investing in private equity, for an additional cost. Benefits of investing in funds of funds can include diversification across fund managers, industries, geographic regions, and years of initial investment. Through funds of funds, plans can also gain access to top-performing fund managers that may otherwise be unavailable to them. One plan representative stated that, due to the competition among investors, funds of funds are the plan's best option for accessing top-performing funds. In addition, several plan representatives said that they invest in funds of funds to benefit from the expertise of the fund manager. For example, officials of two large plans said they generally limit their use of funds of funds to private equity investments in emerging markets and small funds because the plans prefer not to devote resources to maintaining expertise in these areas. Nonetheless, fund of funds' managers charge their own fees in addition to the fees the fund of funds pays the underlying private equity fund managers. According to a plan official and an investment consulting firm, a fund of funds manager typically charges a fee of 1 percent of invested capital in addition to the fees it pays to the underlying funds. The federal government does not specifically limit or monitor private sector pension investments in hedge funds or private equity, and state approaches for public plans vary.
ERISA requires that plan fiduciaries meet general standards of prudent investing but does not impose specific limits on investments in hedge funds or private equity. Further, while Labor has conducted enforcement actions that have involved hedge funds or private equity funds, it does not specifically monitor these investments. While states generally impose a prudent man standard, similar to ERISA's, on plan fiduciaries, some states still have policies that restrict or prohibit pension plan investment in hedge funds or private equity. Although ERISA governs the investment practices of private sector pension plans, neither federal law nor regulation specifically limits pension investment in hedge funds or private equity. Instead, ERISA requires that plan fiduciaries apply a prudent man standard, including diversifying assets and minimizing the risk of large losses. The prudent man standard does not explicitly prohibit investment in any specific category of investment. Further, an unsuccessful individual investment is not considered a per se violation of the prudent man standard, as it is the plan fiduciary's overall management of the plan's portfolio that is evaluated under the standard. In addition, the standard focuses on the process for making investment decisions, requiring documentation of the investment decisions, due diligence, and ongoing monitoring of any managers hired to invest plan assets. Although there are no specific federal limitations on pension plan investments in hedge funds, two federal advisory committees have, in recent years, highlighted the importance of developing best practices in hedge fund investing. In November 2006, the ERISA Advisory Council recommended that Labor publish guidance describing the unique features of hedge funds and matters for consideration in their adoption for use by qualified pension plans. To date, Labor has not acted on this recommendation. According to Labor officials, an effort to address these recommendations was postponed while Labor focused on implementing various aspects of the Pension Protection Act of 2006. However, in April 2008, the Investors' Committee established by the President's Working Group on Financial Markets, composed of representatives of public and private pension plans, endowments and foundations, organized labor, non-U.S. institutions, funds of hedge funds, and the consulting community, released draft best practices for investors in hedge funds. These best practices discuss the major challenges of hedge fund investing and provide an in-depth discussion of specific considerations and steps that investors in hedge funds should take. While this guidance should serve as an additional tool for pension plan fiduciaries and investors to use when assessing whether and to what degree hedge funds would be a wise investment, it may not fully address the investing challenges unique to pension plans, leaving some vulnerable to inappropriate investments in hedge funds. Although many private sector plans are insured by the PBGC, which guarantees most benefits when an underfunded plan terminates, public sector plans are not insured and may call upon state or local taxpayers to overcome funding shortfalls. Labor does not specifically monitor pension investment in hedge funds or private equity. Labor annually collects information on private sector pension plan investments via the Form 5500, on which plan sponsors report information such as the plan's operation, funding, assets, and investments.
However, the Form 5500 includes no category for hedge funds or private equity funds, and plan sponsors may record these investments in various categories on the form's Schedule H. In addition, because there is no universal definition of hedge funds or private equity and their strategies vary, their holdings can fall within many asset classes. While EBSA officials analyze Form 5500 data for reporting compliance issues—including looking for assets that are "hard to value"—they have not focused on hedge fund or private equity investments specifically. According to EBSA officials, there have been several investigations and enforcement actions in recent years that involved investments in hedge funds and private equity, but these investments have not raised significant concerns. Our state pension plan contacts indicated that, in recent years, state regulation of public pension plan investments has become generally more flexible. According to a NASRA official, state regulation of public pension plan investments has gradually become less restrictive and more reliant on fiduciary prudence standards. This official noted that, for example, blanket prohibitions on investments such as international stocks or real estate have given way to permission for a wider range of investments. Some of our state contacts described this shift over time from a prescriptive list of authorized investments ("legal lists") and asset allocation limits to a more flexible approach, such as adoption of the prudent man standard. Of the state pension plan officials we contacted in 11 states, officials in 7 states indicated that applicable state law imposes restrictions on the ability of public pension plans to invest in hedge funds and/or private equity, as seen in table 2. Among these seven states, the restriction may take the form of (i) a provision applicable to investments in hedge funds or private equity funds specifically, (ii) an exclusive list of permissible investments that is not likely to capture hedge fund or private equity investments, or (iii) a provision that restricts investments in certain categories of assets and that, because of the typical structure or investment strategy of hedge funds or private equity funds, is likely to apply to investments in such funds. Some of the selected states have, through statute or regulation, established explicit limitations on the amount that pension plans can invest in hedge funds or private equity. For example, under Texas law, the Teacher Retirement System of Texas (TRS)—the largest public pension plan in Texas—is statutorily limited to investing no more than 5 percent of the plan's total assets in hedge funds. According to a Texas Pension Review Board official, the statute codified TRS's ability to invest in hedge funds while at the same time limiting the amount TRS can invest in them. According to a TRS official, this law was a compromise between TRS's desire to invest more broadly in hedge funds and some state legislators who were concerned about the possible risks of hedge funds. Other states we reviewed have comparable limitations for public plans. The Commonwealth of Massachusetts' Public Employee Retirement Administration Commission (PERAC) has established a detailed set of limitations and guidance, with particular limitations on smaller public plans. In Massachusetts, public plans with less than $250 million in assets may not invest in hedge funds directly, but they may invest through a state-managed hedge fund investment pool (see table 3).
According to a PERAC official, this limitation exists because hedge funds are relatively new investments for pension plans and because they require high levels of due diligence and expertise that may be excessive for smaller plans. PERAC also imposes limits on, and offers guidance to, larger public plans, emphasizing diversification to help limit a plan's exposure to potential losses from hedge fund failures. According to a PERAC official, the commission is less strict about private equity investments because private equity is a more familiar asset class among the state's public plans. Public plans with less than $25 million in assets may invest up to 3 percent of assets in private equity, and plans with more than $25 million may invest up to 5 percent of assets in private equity; PERAC requires plans of either size to obtain its permission before investing in private equity above those levels (a sketch of this rule appears at the end of this passage). Some of the selected states have instituted "legal lists" of authorized investments for pension plans that do not specifically include investments in hedge funds or private equity funds as authorized assets. According to a NASRA official, this was the dominant regulatory approach to state pension investment 40 years ago, and while some states have moved away from this approach, others have continued to maintain legal lists. Illinois has established a legal list of assets in which certain smaller plans that cover police officers and fire fighters are authorized to invest; the list does not include interests in hedge funds or private equity funds. Large statewide plans, such as those managed by the Illinois State Board of Investment, are governed by a prudent man standard, which does not explicitly restrict investment of pension assets in any particular investment. In some instances, states allow a certain percentage of plan assets to be invested in assets that do not qualify under one of the authorized categories on the legal list. For example, the New York State Common Retirement Fund is governed by a legal list, but the state allows the plan to invest up to 25 percent of its assets in investments not otherwise permitted by the legal list. Finally, public pension plan investments in hedge funds are prohibited or limited in some states by laws restricting pension plan investment in certain investment vehicles or trading strategies. For example, the North Carolina Retirement System cannot invest more than 10 percent of plan assets in limited partnerships or limited liability corporations. Similarly, before new legislation broadening investment authority went into effect in April 2008, the Wisconsin Retirement System could not invest assets in vehicles that trade options or engage in short selling, two techniques commonly used by hedge funds. However, with the new statutory authority, the Wisconsin Retirement System may use any investment strategy that meets its prudent investor standard. States we contacted take a variety of approaches to overseeing and monitoring public pension plan investment. In Massachusetts, before conducting a hedge fund manager search, public plans must first obtain PERAC approval and provide the agency with a summary of the plan's objectives, strategies, and goals in hedge fund investing. PERAC requires pension plans to document the major due diligence steps taken in the hedge fund manager selection process.
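The Massachusetts asset-based private equity limits described above reduce to a simple threshold rule. The sketch below is our paraphrase, not PERAC's own formulation; it assigns the boundary case of exactly $25 million to the higher band, which the thresholds as described leave ambiguous, and it omits the separate approval process for amounts above the caps.

```python
# Minimal sketch (our paraphrase of the thresholds described above):
# plans under $25 million in assets may hold up to 3 percent in private
# equity; larger plans up to 5 percent; more requires PERAC approval.

def private_equity_cap(total_assets_millions):
    """Maximum private equity holding, in millions, before approval."""
    cap_rate = 0.03 if total_assets_millions < 25 else 0.05
    return cap_rate * total_assets_millions

for assets in (20, 30, 100):
    print(f"${assets}M plan: up to ${private_equity_cap(assets):.2f}M "
          "in private equity without further PERAC approval")
# $20M plan: up to $0.60M; $30M plan: up to $1.50M; $100M plan: up to $5.00M
```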
As part of this oversight, prospective hedge fund managers must submit detailed information to PERAC regarding their key personnel, assets under management, investment strategy and process, risk controls, past performance, and organizational structure. Finally, hedge fund managers must also submit quarterly performance and strategy review reports directly to PERAC. Officials in other states we contacted may review hedge fund and private equity investments as part of a broader oversight approach. For example, the Ohio Retirement Study Council reviews the five large statewide public retirement funds semiannually to evaluate a plan's investment policies and objectives, asset allocation decisions, and risk and return assumptions. In California, individual pension boards have sole and exclusive authority over investment decisions; however, they ensure that information on investment decisions and fund performance, including detailed reports on alternative investments, is publicly available. Available data indicate that pension plans have increasingly invested in hedge funds and have continued to invest in private equity to complement their traditional investments in stocks and bonds. Further, these data indicate that individual plans' hedge fund or private equity investments typically comprise a small share of total plan assets. However, data are generally not available on the extent to which smaller pension plans have made such investments. Because such investments require a degree of fiduciary effort well beyond that required by more traditional investments, they can pose a difficult challenge for plans, especially smaller plans. Smaller plans may not have the expertise or financial resources to be fully aware of these challenges, or the ability to address them through negotiations, due diligence, and monitoring. In light of this, such investments may not be appropriate for some pension plans. Although plans are responsible for making prudent choices when investing in any asset, EBSA also has a role in helping to ensure that pension plan sponsors fulfill their fiduciary duties in managing pension plans that are subject to ERISA. This can include educating employers and service providers about their fiduciary responsibilities under ERISA. Many private sector plans are insured by the PBGC, which guarantees most benefits when an underfunded plan terminates; however, public sector plans are not insured and may call upon state or local taxpayers to overcome funding shortfalls. The importance of educating investors about the special challenges presented by hedge funds has been recognized by a number of organizations. For example, in 2006, the ERISA Advisory Council recommended that Labor publish guidance about the unique features of hedge funds and matters for consideration in their use by qualified plans. To date, EBSA has not acted on this recommendation. More recently, in April 2008, the Investors' Committee formed by the President's Working Group on Financial Markets published draft best practices for investors in hedge funds. This guidance will be applicable to a broad range of investors, such as public and private pension plans, endowments, foundations, and wealthy individuals. EBSA can further enhance the usefulness of this document by ensuring that the guidance is interpreted in light of the fiduciary responsibilities that ERISA places on private sector plans.
For example, EBSA could outline the implications of a hedge fund's or fund of funds' limited transparency for the fiduciary duty of prudent oversight. EBSA can also reflect on the implications of these best practices for some plans—especially smaller plans—that might not have the resources to take actions consistent with the best practices and thus would be at risk of making imprudent investments in hedge funds. While EBSA is not tasked with offering guidance to public sector plans, such plans may nonetheless benefit from such guidance. To ensure that all plan fiduciaries can better assess their ability to invest in hedge funds and private equity, and to ensure that those that choose to make such investments are better prepared to meet these challenges, we recommend that the Secretary of Labor provide guidance specifically designed for qualified plans under ERISA. This guidance should include such things as (1) an outline of the unique challenges of investing in hedge funds and private equity; (2) a description of steps that plans should take to address these challenges and help meet ERISA requirements; and (3) an explanation of the implications of these challenges and steps for smaller plans. In doing so, the Secretary may be able to draw extensively from existing sources, such as the finalized best practices document that will be published in 2008 by the Investors' Committee formed by the President's Working Group on Financial Markets. We provided a draft copy of this report to the Department of Labor, PBGC, the Department of the Treasury, the SEC, and the Federal Reserve Bank for their review and comment. Labor generally agreed with our findings and recommendation. With regard to our recommendation, Labor stated that providing more specific guidance on investments in hedge funds and private equity may present challenges. Specifically, Labor noted that given the lack of uniformity among hedge funds, private equity funds, and their underlying investments, it may prove difficult to develop comprehensive and useful guidance for plan fiduciaries. Nonetheless, Labor agreed to consider the feasibility of developing such guidance. Labor's formal comments are reproduced in appendix III. We agree that the lack of uniformity among hedge funds or private equity funds may pose challenges to Labor. However, we do not believe it will be an insurmountable obstacle to developing guidance for plan sponsors. Indeed, the lack of uniformity among hedge funds and private equity funds is itself an important issue to convey to fiduciaries and highlights the need for an extensive due diligence process preceding any investment. Additionally, as we state in the recommendation, Labor's efforts can be facilitated through use of existing best practices documents, such as the best practices for investors in hedge funds document that will be published in the summer of 2008 by the Investors' Committee formed by the President's Working Group on Financial Markets. The PBGC also provided formal comments, which are reproduced in appendix IV. PBGC generally concurred with our findings. Labor, PBGC, the Department of the Treasury, and the Federal Reserve Bank also provided technical comments and corrections, which we have incorporated where appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter.
At that time, we will send copies of this report to interested congressional committees and members, federal agencies, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215 or Orice Williams at (202) 512-8678. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Key contributors are listed in appendix V. Our objectives were to address the following questions:
1. To what extent do public and private sector pension plans invest in hedge funds and private equity funds?
2. What are the potential benefits, risks, and challenges pension plans face in making hedge fund investments, and how do plans address the risks and challenges?
3. What are the potential benefits, risks, and challenges pension plans face in making private equity fund investments, and how do plans address the risks and challenges?
4. What mechanisms regulate and monitor pension plan investments in hedge funds and private equity funds?
To answer the first question, we obtained and analyzed survey data from private and public sector defined benefit (DB) plans on the extent of plan investments in hedge funds and private equity from three private organizations: Greenwich Associates, Pensions & Investments, and Pyramis Global Advisors. We identified the three surveys through our literature review and interviews with plan representatives and industry experts. As seen in table 4, the surveys varied in the number and size of plans surveyed. Using the available survey data, we determined the percentage of plans surveyed that reported investments in hedge funds or private equity. Using data from Greenwich Associates, we also determined the percentage of plans surveyed that invested in hedge funds or private equity by category of plan size, measured by total plan assets. We further examined data from each survey on the size of allocations to hedge funds or private equity as a share of total plan assets. Using the Pensions & Investments data, we analyzed allocations to these investments for individual plans and calculated the average allocation for hedge funds and private equity, separately, among all plans surveyed that reported these investments. The Greenwich Associates and Pyramis data reported the size of allocations to hedge funds or private equity as an average for all plans surveyed. Through our research and interviews, we were not able to identify any relevant surveys that included plans with less than $200 million in total assets. While the information collected by each of the surveys is limited in some ways, we conducted a data reliability assessment of each survey and determined that the data were sufficiently reliable for purposes of this study. These surveys did not specifically define the terms hedge fund and private equity; rather, respondents reported allocations based on their own classifications. Pensions & Investments reported private equity in three mutually exclusive categories—buyout, venture capital, and an "other" private equity category, which includes investments such as mezzanine financing and private equity investments traded on the secondary market. Data from all three surveys are reflective only of the plans surveyed and cannot be generalized to all plans.
To answer the second and third questions, we conducted in-depth interviews with representatives of 26 private and public sector DB plans, listed in table 5, from June 2007 to January 2008 and, where possible, obtained and reviewed supporting documentation. Interviews related to hedge fund investments were conducted from June 2007 to December 2007. Interviews related to private equity investments were conducted from October 2007 to January 2008. The interviews with plan representatives were conducted using a semi-structured interview format, which included open-ended questions on the following topics, asked separately about hedge funds or private equity: the plan's history of investment in hedge funds or private equity; the plan's experiences with these investments to date; the plan's expected benefits from these investments; challenges the plan has faced with these investments; and steps the plan has taken to mitigate these challenges, including due diligence and ongoing monitoring. We interviewed five plans that did not invest in hedge funds to discuss the reasons the plans decided not to have such investments. We also interviewed officials of government agencies, relevant industry organizations, investment consulting firms, and other national experts listed in appendix II. In addition, we interviewed officials from the Arizona State Retirement System and Missouri Local Government Employees' Retirement System to discuss the recent decision of these plans to invest in private equity. The plans we interviewed were selected based on several criteria. We attempted to select plans that varied in the size of allocations to hedge funds and private equity as a share of total plan assets. We also attempted to select plans with a range of total plan assets, as outlined in table 6. We identified these plans using data from the Pensions & Investments 2006 survey and through our interviews with industry experts. To identify and analyze the regulation of public DB pension investments by states, we consulted officials at the Department of Labor and representatives of relevant agencies in selected states, and we reviewed relevant policy documents. The states we selected included the 10 states with the largest public pension assets according to our review of the National Association of State Retirement Administrators (NASRA) Public Funds Survey data listed in table 7. We also included Massachusetts because our previous contact with that state produced valuable information for this objective. The states chosen based on the size of plan assets were California, New York, Texas, Ohio, Florida, Illinois, Pennsylvania, New Jersey, Wisconsin, and North Carolina. In 9 of the 10 states, we spoke with the offices of the State Auditor, the State Treasurer, and the State Comptroller or equivalent offices. North Carolina's Chief Investment Officer of the State Treasurer's Office affirmed our analysis through e-mail. Finally, we informed each of these states of our analysis and gave them the opportunity to comment on our description of regulations in their state. We conducted this performance audit from June 2007 to July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Department of Treasury
Department of Labor, Employee Benefit Security Administration
Board of Governors of the Federal Reserve System
Pension Benefit Guaranty Corporation
Securities and Exchange Commission
Hedge fund and private equity industry organizations:
Managed Funds Association
National Venture Capital Association (NVCA)
Private Equity Council (PEC)
Cambridge Associates
Cliffwater, LLC
Fiduciary Counselors
McCarter & English, LLP
Mercer Associates
Offices of Wilkie, Farr, and Gallagher, LLP
Pension Governance, LLC
American Benefits Council (ABC)
American Federation of Labor and Congress of Industrial Organizations (AFL-CIO)
American Federation of State, County, and Municipal Employees (AFSCME)
Committee on the Investment of Employee Benefit Assets (CIEBA)
Financial Policy Forum
National Association of State Retirement Administrators (NASRA)
North American Securities Administrators Association (NASAA)
National Conference of State Legislatures (NCSL) roundtable:
National Association of Police Organizations (NAPO)
National Conference on Public Employee Retirement Systems (NCPERS)
National Association of State Treasurers
National Association of Counties (NACo)
Grand Lodge Fraternal Order of Police
National Association of State Auditors, Comptrollers, and Treasurers (NASACT)
National Education Association (NEA)
National Council on Teacher Retirement (NCTR) and California Public
National Conference of State Legislatures (NCSL)
National Association of State Retirement Administrators (NASRA)
David Lehrer, Assistant Director, and Michael Hartnett managed this report. Sharon Hermes, Angela Jacobs, and Ryan Siegel made important contributions throughout this assignment. Joseph A. Applebaum, Joe Hunter, Ashley McCall, Jay Smale Jr., Jena Sinkfield, Frank S. Synowiec, Karen Tremba, Rich Tsuhara, Charlie Willson, and Craig Winslow also provided key support.
Millions of retired Americans rely on defined benefit pension plans for their financial well-being. Recent reports have noted that some plans are investing in "alternative" investments such as hedge funds and private equity funds. This has raised concerns, given that these two types of investments have qualified for exemptions from federal regulations and could present more risk to retirement assets than traditional investments. To better understand this trend and its implications, GAO was asked to examine (1) the extent to which plans invest in hedge funds and private equity; (2) the potential benefits and challenges of hedge fund investments; (3) the potential benefits and challenges of private equity investments; and (4) what mechanisms regulate and monitor pension plan investments in hedge funds and private equity. To answer these questions, GAO interviewed relevant federal agencies, public and private pension plans, industry groups, and investment professionals, and analyzed available survey data. According to several recent surveys of private and public sector plans, investments in hedge funds and private equity generally comprise a small share of total plan assets, but a considerable and growing number of plans have such investments. Available survey data on mid- to large-size plans indicate that between 21 and 27 percent invest in hedge funds, while over 40 percent invest in private equity; such investments are more prevalent among larger plans. The extent of investment in hedge funds and private equity by plans with less than $200 million in total assets is unknown. Pension plans invest in hedge funds to obtain a number of potential benefits, such as returns greater than the stock market and stable returns on investment. However, hedge funds also pose challenges and risks beyond those posed by traditional investments. For example, some investors may have little information on funds' underlying assets and their values, which limits the opportunity for oversight. Plan representatives said they take steps to mitigate these and other challenges, but doing so requires resources beyond the means of some plans. Pension plans primarily invest in private equity funds to attain returns superior to the stock market. Pension plan officials GAO spoke with generally had a long history of investing in private equity and said such investments have met expectations for returns. However, these investments present several challenges, such as wide variation in performance among funds, and the resources required to mitigate these challenges may be too substantial for some plans. The federal government does not specifically limit or monitor private sector plan investment in hedge funds or private equity, and state approaches to public plans vary. Under federal law, fiduciaries must comply with a standard of prudence, but no explicit restrictions on hedge funds or private equity exist. Although a federal advisory council recommended that the Department of Labor (Labor) develop guidance for plans to use in investing in hedge funds, Labor has not yet done so. While most states also rely on a standard of investor prudence, some also have legislation that restricts or prohibits plan investment in hedge funds or private equity. For example, one state prohibits plans below a certain size from investing directly in hedge funds.
Various entities and groups develop and distribute video content. Content producers, such as Sony Pictures Entertainment and CBS Television Studios, sell the right to use their content to a variety of users, such as broadcast networks, cable networks, and local television stations. The financial compensation received by content producers for the use of their copyright-protected content is a licensing fee or royalty. Broadcast and cable networks produce and aggregate programming from other content producers for distribution to the public. Broadcast networks consist mainly of four major networks (ABC, CBS, FOX, and NBC) and several smaller networks, such as the CW Television Network, MyNetworkTV, and ION Television. Content is produced by the major networks' affiliated production companies, which can include movie and television studios, and by independent producers. Cable networks aggregate programming from content producers, and some also produce programming, which can include niche programming—that is, programming that targets specific demographics. For instance, Lifetime Television offers programming that specifically targets women, while MTV offers programming that targets the 18-to-34 age group. Video content is distributed to households by local television stations, cable and satellite companies, and, most recently, online video distributors (OVD). Each of the four major broadcast networks owns and operates some local television stations; other stations may be independently owned but affiliated with one of the major networks or, as is the case with noncommercial educational television, unaffiliated with any major network. FCC licenses local television stations, which have the right to transmit a video broadcast signal on a specific radio frequency in a particular area and at a particular strength. Local television stations that are affiliated with a broadcast network negotiate licensing agreements with their network for the right to air network-furnished content, including prime time shows, afternoon soap operas, and national news programs. MVPDs obtain a variety of programming from both local stations and cable networks. Time Warner Cable, DISH Network, and Verizon are examples of cable, satellite, and telephone MVPDs, respectively, that license and distribute content to subscribers. Figure 1 illustrates how television programming is distributed through broadcast and traditional subscription video service. Consumers can watch movies and television programs through computers, set-top boxes, game consoles, and, of course, televisions. Some may also have the option of using tablets, smartphones, and other mobile devices to view content via the Internet, either through a MVPD service or an OVD, such as Netflix. Typically, the general public views television programming through broadcast or subscription video service. Local television stations provide free over-the-air programming to the public. In contrast, consumers pay fees to providers of subscription video services, including cable companies, satellite providers, or telephone companies. According to the National Cable and Telecommunications Association (NCTA), the trade association for cable companies, in 2012, over 85 percent of U.S. households had a MVPD subscription, with the remainder accessing television through an antenna. Industry participants receive revenue from a variety of sources. Companies that create programming receive the majority of their revenue from license fees. Broadcast networks receive the majority of their revenue from advertising.
Cable networks receive revenue from both monthly subscriber fees, paid by MVPDs, and advertising. MVPDs—which own, operate, and maintain their cable and satellite networks—receive the majority of their revenues from monthly subscription fees paid by consumers, supplemented with advertising. Many MVPDs also provide broadband Internet and telephone services over their networks or in partnership with other companies. Congress passed the 1992 Act in response to increasing rates. Among other things, the 1992 Act provided for ensuring reasonable rates for both the basic cable service and the cable programming service tier (CPST), commonly referred to as expanded basic, for cable systems not subject to effective competition as defined by the Act. In addition, the 1992 Act required cable companies to carry all local television stations that requested carriage—known as must carry—or negotiate with television stations seeking compensation—known as retransmission consent. Cable companies that also produced content were required to provide their content to unaffiliated MVPDs at nondiscriminatory rates—known as program access. The Telecommunications Act of 1996 phased out regulation of rates for the CPST and included provisions that allowed for the growth of telephone companies in the video distribution marketplace. For example, the 1996 Act eliminated the restriction on telephone companies providing video service directly to subscribers in areas where they provided telephone service. Several large media and entertainment companies continue to produce much of the content watched by consumers. According to a 2012 report cited by FCC, seven companies' broadcast and cable networks accounted for about 95 percent of all television viewing hours in the United States. These seven companies hold some combination of television and movie production studios, broadcast networks, and cable networks. The seven companies and some of their holdings are:
CBS: CBS (broadcast network), CBS Television Studios, Showtime;
Discovery Communications: Discovery Channel, TLC, A&E, Animal Planet;
Disney: ABC (broadcast network), ESPN, Disney Channel, Walt Disney Studios;
NBCUniversal: NBC (broadcast network), Universal Pictures, USA Network, Telemundo Television Studios, The Weather Channel;
News Corporation: FOX (broadcast network), FOX News Channel, 20th Century Fox, 20th Century Fox Television;
Time Warner: The CW Network (broadcast network), CNN, HBO, TBS, Warner Brothers Studios; and
Viacom: MTV, Comedy Central, Nickelodeon, Paramount Pictures.
We previously reported that the major broadcast networks (ABC, CBS, FOX, and NBC) and their affiliated studios produced from 76 to 86 percent of prime-time programming hours in 2002, 2005, 2008, and 2009, with the remaining hours coming from independent producers. Similar reports note that the production studios of major media and entertainment companies, which also hold broadcast and cable networks, often create and license television programs and movies. This pattern does not hold for all companies. For example, Discovery Communications does not own a major television or movie studio, and Sony Corporation, another large media and entertainment company, operates a television and movie studio but does not operate a broadcast or cable network. The concentration of content production among a handful of large media and entertainment companies has changed little in recent years.
We compared the ownership of major broadcast and cable networks from 2005 through 2012 and found little change in the pattern of ownership and concentration of production. For example, of the top 20 cable networks by subscribership in 2005, more than half experienced no change in ownership from 2005 through 2012. However, some ownership change did occur during this period. In 2005, the former Viacom split into two companies—CBS and Viacom. In 2009, the Time Warner Cable distribution business was spun off from the Time Warner Inc. content business. Lastly, Comcast, the largest distribution company in the United States, merged with NBCUniversal; the merger added NBCUniversal's content and networks to Comcast's existing, more limited media holdings, which include the Golf Channel and E! Entertainment.

Since 2005, the introduction of telephone-based video service has brought additional MVPD competition to some areas. Traditional telephone companies AT&T and Verizon—through their respective U-verse and FiOS products—have led this change, introducing video services in various areas across the country and competing with cable and satellite companies. Verizon first introduced its FiOS TV service in 2005 and, as of year-end 2012, reported having 4.7 million subscribers with service available to 17.6 million households. As of year-end 2012, AT&T's U-verse service had 4.5 million subscribers with service available to more than 24.5 million households. In addition to AT&T and Verizon, other competition has emerged in a limited number of areas. For example, Google introduced Google Fiber in the Kansas City metropolitan area. Like cable companies, AT&T, and Verizon, Google Fiber includes broadband Internet and television service. Google Fiber is a pilot project, and it is unknown to what extent Google will expand its deployment to other cities, although in April 2013 Google announced that it would introduce Google Fiber in Austin, Texas, and Provo, Utah. In addition, we have previously reported that the Universal Service Fund managed by FCC, which provides subsidies to telephone companies that serve rural and other remote areas with high costs, enables some companies to upgrade their telephone networks, including upgrading to fiber optic cable and extending it closer to their customers. The upgraded networks enable these companies to provide video and broadband service in some rural and remote areas. With this new entry in some areas, roughly 1 in 3 households had access to at least 4 MVPDs at year-end 2010.

As a result, nationwide market shares have shifted among MVPDs since 2005 (see fig. 2). In particular, cable companies have seen their nationwide market share drop, continuing a longer-term decline. For example, NCTA estimated that cable companies' share of MVPD subscribers dropped from 98 percent in 1992 to 57 percent in 2012. Satellite services have continued to grow, although more slowly in recent years. Financial analysts and other experts report that satellite companies could face increasing competitive challenges from cable and telephone companies going forward.
In particular, as consumers increasingly purchase a bundle of video, broadband Internet, and telephone services, satellite's slower Internet service could dissuade consumers from purchasing satellite service. For example, DirecTV reported that various telephone and broadband companies also sell its service as part of a bundle with their voice and data services, and these companies could focus less effort and resources on selling DirecTV's service—or decline to sell it at all—as they deploy networks with the capability of providing video, voice, and data services.

Although more households have access to at least 4 MVPDs than in the past, roughly 2 in 3 households still had access to 3 or fewer MVPDs at year-end 2010, 2 of which were the satellite providers. While the entry of telephone companies into the video marketplace offers some households more options, representatives from AT&T and Verizon were uncertain about the scope of future expansion. AT&T announced that the company will expand its U-verse service to be available to 33 million households—an increase from 24.5 million—but the company may also discontinue service to other areas simultaneously. Verizon officials reported that the company has no current plans to expand FiOS beyond its goal of making service available to 18 million households. In addition, according to FCC, at year-end 2010, about 1.5 percent of households had access to just 2 MVPDs—the two major satellite companies. In our analysis of MVPD service in 20 zip codes, one zip code—encompassing Limon, Colorado—was not served by a cable company and relied solely on satellite service.

Technological advances increasingly enable distribution of video online. Internet speeds have increased as companies deploy new, high-speed technologies, such as fiber optic cable, to the neighborhood or the residence. These new technologies enable many U.S. households to stream video—that is, access and view video content via online sources. Watching video online generally requires an Internet connection with a speed of 0.7 to 4 megabits per second, depending on the quality of the video; for example, high-definition video requires higher Internet speeds than standard-definition video. In August 2012, FCC reported that over 40 percent of U.S. households had adopted broadband speeds of at least 3 megabits per second.

A variety of business models supporting online video have emerged; some online video is available free, while other content requires payment. Online sites such as YouTube aggregate user-created and other content and make this content available free to viewers with an Internet connection. Increasingly, professional content is appearing on YouTube. For example, ABC News has segments from ABC World News and Good Morning America available on YouTube. Other services, such as Netflix and Amazon Prime Instant Video, entail one-time rental or monthly subscription fees to access content, including television programs and movies. Still other models exist in which content owners sell their content directly to consumers. In particular, Hulu—a joint venture that includes News Corporation, NBCUniversal, and Disney—offers a free advertiser-supported service and a monthly subscription service with fewer commercials and access on a wide variety of devices. While the Internet has emerged as a new source for viewing video, online viewing and revenues represent a small portion of overall media activity, particularly as compared to traditional television.
In September 2012, Nielsen reported that 162 million Americans watched online video, consuming on average nearly 7 hours of content over that month. In contrast, Americans watched over 34 hours of live television per week. Additionally, several financial analysts and experts whom we interviewed described Internet advertising as still in its infancy, with viewership and advertising still developing and companies exploring successful business models. For example, FCC, citing data from Investor's Business Daily, reported that in 2009 advertisers spent $908 million on U.S. online video advertising, compared to $68.9 billion spent on U.S. television advertising during that same period.

In general, MVPDs provide video content by packaging together a large number of channels in different programming tiers—often the basic, expanded basic, and premium tiers. In our analysis of MVPD services in 20 zip codes in 2013, all MVPDs reported requiring consumers to purchase tiered packages of channels. We found that the basic tier of these MVPDs consisted of a minimum of 13 channels, with local broadcast and informational channels sometimes dominating this tier; the price of the basic tier ranged from $9.95 to $40 per month. The expanded basic tier usually included the channels in the basic tier plus additional cable networks, such as ESPN, Nickelodeon, USA Network, MTV, and A&E. Higher-end, premium tiers usually included more than 100 channels, and the monthly subscription price for this tier ranged from $53 to $200.49. In all 20 zip codes we analyzed, the MVPDs included HBO, Showtime, and Cinemax in their premium tiers.

Because subscribers must receive all of the channels offered on a tier that they choose to purchase, they have little choice regarding the individual channels that they receive. À la carte service—where consumers purchase content on a channel-by-channel basis—is generally not provided by MVPDs. None of the MVPDs we interviewed, or any MVPDs included in our analysis of 20 zip codes, provided à la carte service; the only exceptions were premium channels and pay-per-view services, which were often available on a stand-alone basis. For example, HBO was available for an additional $6.00 to $26.95 per month in the 20 zip codes we analyzed.

Contractual and economic factors lead MVPDs to package channels into tiers rather than providing à la carte service. Contractually, content companies generally seek to have their networks carried on the largest tier, typically the expanded basic tier. These companies have an economic incentive to pursue this strategy; content providers typically receive both a monthly fee for each customer that subscribes to the tier on which their network appears and advertising revenues, which are based in part on the number of potential viewers (i.e., subscribers) to that tier. Content companies and others reported that they might need to charge more for certain content under an à la carte system because of potential revenue losses and that the price of a single channel could be significantly higher under an à la carte system than under the current tiered system. Consumer groups expressed concerns that à la carte service could diminish the diversity and local aspects of existing programming if lower demand networks cease operation because of a lack of subscribers.
Some experts with whom we spoke also questioned whether consumers would necessarily be better off with à la carte pricing of channels, given the potential for reduced quantity and quality of programming and higher prices for individual channels.

Some MVPDs are providing increased service options to consumers. For example, some MVPDs are making content available through TV Everywhere services. These services allow MVPD subscribers to view some content on mobile devices, typically smartphones or tablet computers, and from various locations within—and, depending on the service, outside—the residence. Some MVPDs charge for these additional features as a stand-alone service or include them as part of a digital package, and some require that the customer subscribe to both the company's MVPD and broadband services. In addition, some MVPDs are bundling telephone, Internet, and video services. A bundle costs more than any single service, but the consumer's total cost can be less than if the consumer purchased these services separately. For example, one MVPD included in our zip code analysis provided a tier of channels for $39.95 per month, Internet service for $39.95 per month, and telephone service for $34.95 per month—a total of $114.85 if purchased separately. Alternatively, a consumer could receive these three services in a bundle for a monthly price of $84.95, a savings of $29.90. (The Internet prices we observed were for a basic connection; some MVPDs provide higher-speed options at a higher price. These prices also do not include satellite companies, which do not provide a stand-alone Internet service.) Such bundling appears to be widespread; consumers in all 20 of the zip codes in our analysis had access to one or more bundles. FCC reported that as the number of video subscribers has fallen, cable companies have prospered by increasing sales of other services, such as phone and Internet access, to their remaining customers.

OVDs and related companies provide consumers with increased flexibility in selecting content. Services like Hulu and Netflix allow consumers to select content on a program, or even an episode, basis. Some experts and consumer groups with whom we spoke said that these new online options constitute a programmatic à la carte, rendering debates over whether consumers should be able to purchase specific channels less relevant. OVDs' libraries are limited, however, and OVDs do not have the rights to display certain television programs and movies.

To maximize the return on investment from producing video content, where costs can be quite high, content owners generally distribute content through a series of outlets over time, through a process known as windowing. Content is distributed in the most lucrative outlets first, and depending on the type of content, the windowing process can take months or years to fully play out. Distribution of content through an OVD is often last in the series of outlets, as content providers first distribute television programs through broadcast or cable networks and feature films through movie theatres. Because of this, OVDs typically are not able to obtain first-run television programs and movies; for example, a television program may have to finish its entire season before a single episode becomes available online. Thus, although the same content is available in earlier windows through other outlets, windowing can limit the value of OVDs' services for some consumers. As such, most of the industry representatives and experts with whom we spoke stated that, at this time, OVD services are generally seen as a complement to MVPD services rather than a substitute. Nonetheless, some content companies are providing content directly to consumers online. Sports leagues are one such example.
Major League Baseball provides its MLB.TV service, where subscribers can watch baseball games live online or at a later time, as well as recaps and other baseball news. The National Basketball Association provides a League Pass service where subscribers can watch some games live, have access to live game statistics, and review an archive of content for the season. In addition, other content aggregators, like cable networks, have made content available online. For example, AMC has an online service where viewers can watch shows such as Mad Men. In another example, HBO has an online version of its network, HBO GO, where viewers can watch all of the movies and other content on this premium network. Much of this and other online content can also be accessed through portable electronic devices. Indeed, some of the aforementioned sports content services provide mobile options for subscribers.

Online video services follow several pricing models. Some OVD services provide a rental option where a consumer can pay a set amount (usually $1.99 to $4.99) to view a movie in a 30-day window. Services like Netflix are subscription-based, with recurring monthly fees for continued access to their content library. Content viewed directly from content companies also generally requires a monthly or yearly fee. For example, the price for Major League Baseball's MLB.TV service is $94.99 for a year or $19.99 per month.

Despite entrants providing new choices, the price of MVPD video service continues to increase. In its most recent annual report on cable rates, FCC reported that over the course of 2010, cable rates rose 5.4 percent. From 2005 through 2011, cable rates rose more than 33.5 percent for both basic and expanded service tiers (see table 1). This increase outpaced inflation as captured in the Consumer Price Index, which rose 15.5 percent over the same period. Besides cable service, other MVPDs' prices have increased faster than inflation. For example, in January 2013, Dish Network raised the price for its services by 7 to 20 percent; this included a price increase of $5 for its basic service package—from $24.99 to $29.99 per month. DirecTV also increased its prices in February 2013, raising its average price by 4.5 percent.

Representatives from MVPDs and content companies, consumer groups, and other stakeholders identified a variety of causes for continued rate increases. MVPD and content companies cited the cost of content production as one factor. Acquiring "must have" content—content that is very much in demand by consumers, such as live sports—has become increasingly expensive. Sports leagues, such as the National Football League and Major League Baseball, are seeking higher fees from broadcast and cable networks to carry their sporting events. For example, in its latest contract with ESPN, Major League Baseball will receive approximately $5.6 billion for the years 2014 through 2021; this represents a 100 percent increase over the previous agreement. Broadcast and cable networks may in turn pass along these higher costs to MVPDs, which ultimately contributes to higher consumer prices for MVPD service. Infrastructure investment costs, such as cable companies continuing to roll out broadband Internet service to new communities and locations, may also play a role.
For example, NCTA reported that the cable industry's capital expenditures for 2011 were $12.9 billion.

Advances in digital technologies and increased Internet capacity could help lower the cost to develop and distribute some video content. Some high-quality digital video cameras and editing equipment can be purchased for less than a thousand dollars, enabling individuals and small startups to create content at relatively low cost. As a result, individuals and startups can produce web series of low-budget programs and develop dedicated online channels to carry content. In addition, crowdfunding—the practice of funding a project or venture by raising small amounts of money from a large number of people, often online—provides a mechanism for startups to acquire the financial resources to develop content. Furthermore, technological advances can lower the costs to distribute content through online platforms like YouTube, which provides free content posting. OVDs can distribute content online at relatively lower cost than traditional MVPDs, which own physical networks and tend to distribute more costly, professionally produced movies, sports, and television programs. For example, whereas OVDs have limited distribution costs—consumers pay subscription fees to access the Internet and online video—traditional MVPDs build, operate, and maintain the networks through which Internet bandwidth and online video are provided.

Increased spectrum for wireless broadband could facilitate greater distribution and viewing of video content wirelessly. According to industry stakeholders and experts with whom we spoke, today's terrestrial wireless networks are unable to support widespread, large-scale viewing of video content; these networks' capacities are much less than those of the existing wired and satellite networks deployed by cable, telephone, and satellite companies. However, increased spectrum for wireless broadband, combined with compression technologies that allow more efficient use of spectrum, could allow for additional viewing of video content wirelessly. According to FCC, the Commission's 2008 auction of spectrum licenses resulted in some mobile wireless service providers beginning to offer mobile broadband services for laptop computers, tablets, smartphones, and other mobile devices. FCC is planning another spectrum auction for wireless services in 2014. According to industry stakeholders and experts with whom we spoke, more spectrum for wireless services could spur additional competition, with more companies entering the marketplace to provide online video services accessible via smartphones and tablets. However, as we have previously reported, most spectrum has already been allocated and assigned to other users, including federal agencies such as the Department of Defense, and reallocating spectrum from other uses can be time-consuming, costly, and contentious.

The high costs to license professionally produced content could hinder the competitiveness of entrants and small distributors. Some professionally produced and time-sensitive programming, like sports and popular prime time shows, is highly valued by many viewers. Because the supply of those involved in producing such popular programming—well-regarded athletes, writers, actors, and directors—is limited, their talents command premium compensation, often in the millions of dollars.
While large MVPDs that have subscription and advertising revenues can pay the license fees for this content, smaller MVPDs and new OVDs that are not as established in the marketplace may not be able or willing to do so. In addition, OVDs told us that competition could be hindered by the fact that content providers will license their content to them, but only at prices and on contractual terms similar to those offered to traditional MVPDs. In particular, OVDs told us that licensing contracts have "most-favored nation" (MFN) clauses, which guarantee a customer will receive prices and terms that are at least as favorable as those provided to other customers of the same seller. Because of MFN clauses, OVDs assert that content providers will not enter into agreements with OVDs that differ from the agreements they have with MVPDs. As a result, OVDs said that MFN clauses inhibit their ability to compete because they cannot offer consumers different programming choices and prices than MVPDs, making it difficult to attract customers. DOJ and the Federal Trade Commission (FTC), which investigate certain proposed mergers and potential antitrust violations, jointly sponsored a workshop in September 2012 on MFN clauses and their benefits and risks to competition. According to DOJ's press release announcing the workshop, MFN clauses, though at times employed for benign purposes, can under certain circumstances present competitive concerns. DOJ noted that MFN clauses might, especially when used by a dominant buyer of intermediate goods, raise other buyers' costs or foreclose would-be competitors from accessing the market.

While high prices and contracting terms could hinder entry, some larger OVDs with subscription services have taken steps to overcome these challenges. For example, Netflix signed a multiyear, billion-dollar agreement with Disney to license its content beginning in 2016. In addition, some OVDs have begun producing their own original content. For example, Netflix created its first original series, House of Cards, which debuted on February 1, 2013, and Amazon currently has 12 pilots in development that will be available on its Prime Instant Video service.

Based on our discussions with an array of industry stakeholders and experts, the prospect of any new wire-based providers entering the video market appears unlikely. As previously discussed, the two telephone providers that expanded their service offerings to compete with incumbent cable companies—AT&T and Verizon—appear to be curtailing further expansion of their video and broadband Internet services. Both companies made very large investments to upgrade their networks—an expected $23 billion in the case of Verizon and billions of dollars, according to AT&T—to provide new video and broadband services, despite the fact that both were established telephone companies with existing telecommunications infrastructure in place. The high costs to provide these services create a substantial barrier to entry. In particular, not only are the overall costs of entry into the video distribution market high, but many of these costs are fixed, meaning that much of the infrastructure needs to be in place before the provider can initiate service. High fixed costs can render entry difficult because an established company with a large customer base will generally enjoy a significant cost advantage over a new entrant, as the illustrative sketch below shows.
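The cost advantage created by high fixed costs can be made concrete with a simple average-cost calculation. The short sketch below is illustrative only: the fixed-cost and per-subscriber figures are hypothetical assumptions chosen for arithmetic clarity, not estimates drawn from our review or from any provider's actual costs.

```python
# Illustrative sketch of why high fixed costs favor an established provider.
# All figures are hypothetical assumptions; they are not estimates from
# GAO's analysis or any provider's actual costs.

def avg_cost_per_subscriber(fixed_cost, variable_cost_per_sub, subscribers):
    """Average cost per subscriber when much of the network cost is fixed."""
    return (fixed_cost + variable_cost_per_sub * subscribers) / subscribers

FIXED_COST = 1_000_000_000  # assumed fixed network build-out cost ($1 billion)
VARIABLE_COST = 300         # assumed incremental cost per subscriber served ($)

incumbent = avg_cost_per_subscriber(FIXED_COST, VARIABLE_COST, 5_000_000)
entrant = avg_cost_per_subscriber(FIXED_COST, VARIABLE_COST, 250_000)

print(f"Incumbent (5,000,000 subscribers): ${incumbent:,.2f} per subscriber")  # $500.00
print(f"Entrant     (250,000 subscribers): ${entrant:,.2f} per subscriber")    # $4,300.00
```

Under these assumed figures, the entrant bears a per-subscriber cost nearly nine times the incumbent's even though both face an identical cost structure—consistent with stakeholders' observation that much of the infrastructure must be in place before any service can be sold.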
The costs involved in entering the video distribution market fall into several categories, including (1) physical infrastructure, (2) regulatory authorizations, (3) securing access to broadcast and cable programming, and (4) marketing.

Physical infrastructure for providing video. Providing wire-based video service requires an extensive physical network. The provider needs to be able to capture video signals from various sources—broadcast transmitters, fiber optic cable, satellites—each source requiring specific communications infrastructure. Once captured at the provider's facilities, video signals are transported to households. Transmission requires a wired network running from the provider's facility across highways and byways, into neighborhoods, and ultimately linked to every subscribing household. Installation of such a network is expensive: the provider generally needs to dig trenches along roads and into neighborhoods in order to install equipment and wire. This fairly extensive and expensive network of communications reception and transmission equipment needs to be in place before any service can be provided, so much of the video capture infrastructure, trench digging, and wire installation costs are fixed.

Regulatory authorizations and coordination with private-party facilities owners. A new provider must work with local jurisdictions to obtain authorizations to undertake various activities. Cable providers are required to obtain a franchise authorization for each jurisdiction they serve. Franchise areas vary in size, but they typically cover a town or relatively small jurisdiction, so gaining franchise authorization to enter any significant geographic area can be time-consuming. In addition, to deploy cable for the network, a provider requires access to public rights-of-way. The governmental entities that grant franchises and access to rights-of-way vary by state, which again may mean that an entrant needs to obtain grants to access rights-of-way from multiple jurisdictions. Once a grant is received, a video provider must work with either the local telephone or utility companies to undertake the necessary installation work, because these companies generally own the poles or conduits over or through which wires are deployed along the rights-of-way. As with the physical installation, obtaining regulatory authorizations and coordinating installations generally needs to take place before the provider can begin to serve customers.

Acquisition of programming. A new provider in the video market needs to secure access to a large portfolio of broadcast and cable networks to compete for customers. The cost of the programming itself is paid monthly based on the number of subscribers the provider serves, and according to providers with whom we spoke, prices for programming are high and continue to increase. Several of the providers and experts with whom we spoke also told us that networks generally offer significant discounts based on the number of subscribers a provider has. Thus, an entrant will likely have higher programming costs than a large provider—a substantial disadvantage that makes entry challenging.

Marketing. A new entrant needs to make the public aware of the new services it is offering and attempt to convince potential customers to buy those services. This can pose many challenges for an entrant.
As mentioned earlier, over 85 percent of households already subscribe to a video service, so most potential customers are already buying a service and need to be persuaded to switch providers. Additionally, many households buy a bundle of services from their provider—video, broadband Internet, and sometimes telephone—and the inconvenience of switching several services to a new provider is greater than it would be for a stand-alone service. The challenges an entrant faces in gaining subscribers can thus also act as a barrier to entry.

While OVDs present a new and exciting venue through which consumers can enjoy video services, we found that OVDs do not yet offer a package of programming substantial enough to induce households to drop their subscription to a traditional video service in favor of an OVD's services. OVD providers have a variety of business models, but fundamentally, they are dependent on two established industries—developers of video content and providers of broadband Internet access—and this dependence could hinder any significant maturation of the OVD business model.

Content. OVDs purchase content from the same content providers as do traditional MVPDs, and content providers and MVPDs have long-standing and lucrative business relationships. As discussed above, high-valued content, such as professionally produced movies and television programs, is costly to produce. Therefore, even if some OVDs are successful in developing some independent content, they will remain largely dependent on traditional sources of content. Content providers enjoy a stable and secure business model distributing programs on MVPD systems. In particular, content providers benefit from MVPDs' packaging of many channels because it ensures that most households purchase a large set of channels. Similarly, MVPDs benefit from their purchase of high-quality content, which most households value enough to induce them to purchase a large tier of MVPD programming. Thus, there is a symbiotic benefit in the business relationship between content producers and MVPDs.

At the same time, content providers are also interested in selling their programs to OVDs. In particular, representatives of several such companies, as well as experts, said that through these new outlets, content providers are able to monetize their products in new ways. For example, OVDs can distribute content separate from the bundles of content offered by broadcast and cable networks, which may have a unique commercial appeal and attract new consumers to content providers' programs. However, content providers are also wary about the extent to which they contract for OVD distribution of their programs. If OVD offerings become attractive enough that households begin to drop MVPD subscriptions and rely solely on online viewing, revenues earned through traditional subscription service will decline, affecting both content providers and MVPDs. The issue is the extent to which this happens: a small number of households dropping service may not concern content providers, but if a substantial number of households choose to "cut the cord," revenues of both the content providers and the MVPDs could be reduced enough to be worrisome for these companies. Thus, while content providers are interested in providing some content to OVDs, their incentive to do so is somewhat constrained by the potential effect on subscriptions to traditional MVPDs.
Some stakeholders with whom we spoke stated that the critical challenge for the OVD business model is access to quality content and that as long as content providers do not see OVDs as a viable outlet for the highest quality content, the growth of the OVD business model will be limited.

Broadband. Most households with broadband purchase that service from either a cable or a telephone company. Thus, the companies that provide a large portion of the broadband access in the country are the same companies that OVDs are attempting to compete against in the video marketplace. Users who view video provided by OVDs are often heavy consumers of broadband bandwidth, and heavy use may place stress on the broadband infrastructure. Some MVPDs have created pricing structures for bandwidth that, in one manner or another, extract higher fees from heavy users. OVDs and other experts have expressed concern that, because MVPD providers are also competitors of OVDs in the video market, MVPDs may have an incentive to charge for bandwidth in such a way as to raise the costs to consumers of using OVD services. Some of the industry groups and experts with whom we spoke stated that some form of usage-based pricing was probably inevitable and reasonable based on the costs of maintaining the infrastructure, but that they would be concerned if such pricing were used in any way that could stall the growth of the nascent OVD market. For example, they would be concerned if there were any differential treatment of broadband use for accessing the content of the MVPD versus that of OVD providers.

The 1992 Act was written over 20 years ago, and for a variety of reasons, the majority of stakeholders with whom we spoke stated that some provisions of the laws and associated regulations do not reflect the current marketplace. Stakeholders told us that there have been significant changes in competition in the video marketplace since 1992. For example, cable companies were often the only choice of video distributor for most consumers in 1992. Since then, satellite and telephone companies have entered the marketplace, and consumers have more choices in selecting a video distributor. In addition, the 1992 Act was written before the commercialization of the Internet and other technological advances, such as tablets and smartphones, that allow for online video and wireless distribution. As previously discussed, online video viewing is a small portion of overall viewership. However, experts with whom we spoke said that the trend is for growth of online video with the expansion of Wi-Fi and 4G infrastructure and the greater use of tablets and smartphones. Furthermore, since 1992, MVPDs have digitized their systems and the number of channels carried on these systems has risen dramatically, an increase that has led to more content being developed and more content options for consumers.

Due to these marketplace changes, the majority of stakeholders we interviewed stated that some provisions in the 1992 Act should be revisited; however, other stakeholders disagreed, stating that given the quickly developing, dynamic, and technology-oriented nature of the industry, it is difficult for laws to keep up with changes. These stakeholders noted that, as was the case in 1992, it is hard to predict what the marketplace will be like in the future and therefore difficult to envision appropriate laws and regulations.
Stakeholders who told us that some provisions of the 1992 Act should be revisited had varying opinions, often reflecting their position in the marketplace. In general, stakeholders identified three issues related to the 1992 Act that they believe should be addressed: (1) retransmission consent, (2) program access, and (3) the definition of MVPD and OVD. In addition, stakeholders also had varying opinions on FCC's Open Internet regulation.

Retransmission Consent. One of the concerns expressed by stakeholders and experts with whom we spoke was the manner in which retransmission consent is functioning in the market today. Policies to support localism have long been a focus of communications laws related to television broadcasting. In 1992, Congress took action to help ensure that the local benefits of over-the-air broadcast television stations were protected as more households began to migrate to pay video services over cable systems. The 1992 Act thus set forth a paradigm under which commercial, local television stations can choose to be carried by cable companies under a must carry status—meaning that cable companies in their market are obliged to carry the station's signal—or can elect to negotiate for carriage under retransmission consent. The purpose behind the dual policies of must carry and retransmission consent was, in part, to support the development of local news, emergency weather information, and other local public interest content. While a must carry station does not receive any compensation for the carriage of its signal, a station electing retransmission consent can negotiate with cable companies for compensation in return for the cable company's right to carry the station's signal. Thus, must carry provisions were designed to ensure that all local commercial stations would be carried by cable companies, which might not have occurred for some stations without significant commercial appeal. At the same time, retransmission consent was designed to ensure that stations choosing this status had the ability to bargain for compensation for the value of their local television signal.

During roughly the first decade after the 1992 Act was passed, negotiations for retransmission consent usually did not result in cash payments from cable companies to local television stations but rather in other forms of negotiated compensation, such as carriage of fledgling cable networks owned by broadcast networks. Over the last several years, however, negotiated cash fees for retransmission have increased significantly, as we reported in 2011. Stakeholders with whom we spoke—specifically cable and direct broadcast satellite (DBS) companies, as well as industry experts—told us that the rapid rise in retransmission fees is of concern because these fees put upward pressure on subscriber rates, and negotiations over fees have become increasingly contentious, leading to more "blackouts," during which local television signals are pulled from a particular MVPD's channel lineup. Moreover, some of those concerned about increasing retransmission fees noted that while the concept underlying retransmission consent was to support local television stations, they believe that a portion of the financial compensation paid through retransmission fees is, in fact, not going to local television stations.
Instead, critics told us that a good portion of retransmission fees flows to the broadcast networks that own or have affiliation agreements with the local stations and that, ultimately, a portion of these fees flows to the copyright holders of high-valued content purchased by broadcasters, such as sports leagues and the studios producing popular dramatic TV series. Others with whom we spoke—from broadcast networks and an associated trade association—told us that retransmission consent remains an important foundation for developing programming. They noted that without this form of compensation, broadcasters would not be able to continue to provide high quality programming and emergency local information. These stakeholders also said that broadcast stations have the right to control their television signal and that any attempt to alter this long-standing policy would be harmful to the television broadcasting system that was developed decades ago.

Program Access Rules. The 1992 Act gave FCC the authority to establish program access rules that require cable companies that produce content to make that content available to other, unaffiliated MVPDs. Cable companies say these rules are no longer needed because cable companies no longer have monopoly power and other MVPDs have access to content. Cable companies also note that it is their First Amendment right to determine to whom they license their content. However, OVDs, consumer groups, and experts state that the program access rules need to be continued and extended to include OVDs, which they say have difficulty accessing content. As noted earlier, limited access to high-valued programming is one of the factors that stakeholders told us could hinder competition. Stakeholders told us that the program access rules allowed satellite and telephone companies to compete and grow and that new entrants, such as OVDs, should have the same protections. In October 2012, FCC declined to extend the exclusive contract prohibition, originally enacted as part of the program access statutory provisions, beyond its scheduled sunset date. FCC stated that a preemptive prohibition on exclusive contracts was no longer necessary because a case-by-case process under the program access rules would remain in place after the prohibition expired to assess the impact of individual exclusive contracts. FCC has received program access complaints, including one from an OVD—Sky Angel—in March 2010. Sky Angel also filed lawsuits against Discovery Communications and the National Cable Satellite Corporation, the owner of cable channel C-SPAN, to access programming from these two companies.

Definition of MVPD. As defined in the Communications Act, an MVPD includes, among others, a cable operator or satellite provider that makes available for purchase multiple channels of video programming. Experts with whom we spoke said that OVDs generally have not been regarded as MVPDs because OVDs are not facilities-based; that is, they do not own the Internet bandwidth through which their content is distributed. FCC staff has determined that Sky Angel—an OVD that offers channels but is not facilities-based—failed to demonstrate that it is an MVPD entitled to seek relief under the program access rules. However, FCC has not conclusively decided the issue. Because OVDs may or may not be MVPDs, it is unclear whether OVDs have the same program access rights and obligations, such as must carry.
While some OVDs, like Sky Angel, want to be classified as MVPDs so that they can have program access rights, other OVDs do not, saying that their business model differs from that of an MVPD and that they therefore should not be treated as one. Experts noted that complicating the distinction between OVDs and MVPDs is the fact that some MVPDs also provide content online and on demand. Experts with whom we spoke said that defining OVDs as MVPDs could have negative implications for competition in that it could discourage companies that do not wish to be subject to MVPD regulations from entering the market. In response to Sky Angel's program access complaint, FCC initiated a proceeding in March 2012 requesting comments from industry stakeholders on the definitions of MVPD and channel.

Open Internet. The growing use of the Internet since the 1992 Act has raised concerns among some industry stakeholders about the management of broadband networks. In particular, the literature we reviewed, as well as OVDs and consumer groups with whom we spoke, reported concerns that some companies providing broadband service that are affiliated with MVPDs could favor their own content. According to the literature and stakeholders, MVPD-affiliated broadband providers might have an incentive to limit access to their programming or to block or slow consumers' access to their competitors' websites, thereby giving a competitive advantage to their own content and restraining the growth of rivals. As we previously mentioned, limited access to content is a factor that could hinder competition in the video marketplace. In December 2010, FCC issued its Open Internet Order, which provides that (1) fixed and mobile broadband providers must disclose the network management practices, performance characteristics, and terms and conditions of their broadband services; (2) fixed broadband providers may not block lawful content, applications, services, or non-harmful devices, and mobile broadband providers may not block lawful websites or block applications that compete with their voice or video telephony services; and (3) fixed broadband providers may not unreasonably discriminate in transmitting lawful network traffic. MVPD-affiliated broadband providers told us that they should be able to manage their networks because they build, operate, and maintain those networks and that Open Internet rules could make it more difficult for them to recoup their investments. Verizon, which provides both broadband and video services, appealed FCC's Open Internet Order in the U.S. Court of Appeals for the District of Columbia Circuit. Several parties, including the National Association of State Utility Consumer Advocates, intervened in that appeal and argued, among other things, that the Commission has authority to adopt the Open Internet rules to protect against cable operators and their affiliates discriminating against their video programming competitors. Panelists at a January 2013 forum on cable and broadband law noted that any consideration of potential legislative and regulatory changes affecting the video marketplace should wait until the District of Columbia Circuit rules on FCC's order, since the outcome could dictate how the online video marketplace evolves.

FCC is required by statute to report annually to Congress on both cable industry prices and competition in the video marketplace but has not met this requirement every year.
The 1992 Act established requirements for the purpose of increasing competition and diversity in MVPD distribution and required FCC to report annually on the average rates that cable companies charge for cable service and equipment and on the status of competition in the video marketplace to measure progress toward these goals. Since the 1992 Act, FCC has published the annual cable industry price report 13 times, but it did not publish the report in 2004, 2006, 2007, and 2010. In the 2009 report, FCC included data from 2006 and 2007 in addition to the 2008 data that it would normally have reported. FCC has submitted 14 video competition reports to Congress but did not release the report in four years—2007, 2008, 2010, and 2011. The most recent report, published in July 2012, covered 4 years of information.

FCC officials cited several factors that contributed to the missed reports, and legislation introduced in the 112th Congress would have reduced the frequency of the reports. FCC officials told us that the reports were generally prepared on time but that delays in their release were due to a variety of administrative factors. In 2010, FCC initiated a comprehensive review of the way in which it uses data, including data used for its video competition report; ultimately, FCC altered the analytic framework of the video competition report to be consistent with its other competition reports. According to FCC officials, this review and change contributed to the Commission missing the 2010 and 2011 video competition reports. In addition, FCC officials told us that the reports are time-consuming to prepare because of the amount of industry data the Commission reviews. While data and comments used for the video competition report are submitted by industry participants on a voluntary basis, the cable industry price reports impose a burden on some industry participants. In particular, FCC estimated that the public reporting burden for the information collection required for the cable industry price report was 6 hours per response, including the time for reviewing instructions, searching existing data sources, gathering and entering the data needed, and completing and reviewing the questionnaire.

Some stakeholders told us that FCC's reports are valuable, although the majority of stakeholders we interviewed had no opinion on them. In our review of the video competition reports, we saw little change in the reported findings from year to year; therefore, less frequent reporting could allow for continued measurement of industry performance while reducing the burden on FCC and industry participants. In the past, both Congress and the executive branch have expressed concern about reporting requirements; the basic concern has been that some requirements result in reports that may be unnecessarily burdensome to produce or, in some instances, not very useful. A bill that the House of Representatives passed during the 112th Congress would have required FCC to consolidate eight currently separate congressionally mandated reports, including the video competition report, into a single biennial report; the legislation would also have eliminated the cable industry price report. FCC officials expressed no preference between an annual and a biennial reporting requirement and said that the Commission prepares the reports as directed by Congress; the Commission has not communicated an opinion on this issue to Congress.
The video marketplace consists of a complex set of interrelated and competing industries operating under a variety of related laws and regulations. In particular, communications and copyright law dictate how content providers, aggregators, and distributors operate in this marketplace. Competition has expanded in some segments of the video marketplace, most notably with the emergence of telephone companies providing video distribution services. In addition, technology in this arena is changing and has facilitated the formation of entirely new businesses and products, such as online video distribution, that have the potential to alter existing business models. It is too soon to tell what the outcomes of these technological and market changes will be, or whether anticompetitive behavior would necessitate any federal action. A lack of consensus among industry officials, consumer groups, and experts—influenced by vested economic interests—reinforces that while federal laws and regulations may in some ways be outdated, it is not yet clear how they should be updated to reflect 21st century technologies and market conditions.

FCC's cable industry price and video competition reports provide useful information. However, these reports may not be needed on an annual basis, especially given the demands on FCC staff's time for other monitoring and regulatory duties. FCC's 2009 cable industry price report and 2011 video competition report each covered several years of data and could serve as models for issuing such reports on a less frequent basis. Since these annual reports are statutorily required, Congress, with input from FCC, would determine any new reporting frequency.

To ensure that the Commission's cable industry price and video competition reports provide timely and useful information, while minimizing the reporting burden and meeting statutory deadlines, we recommend that the Chairman of the Federal Communications Commission study the advantages and disadvantages of different reporting frequencies, including annual and biennial reporting, and transmit the results of this analysis to Congress.

We provided a draft of this report to the Federal Communications Commission and the Department of Justice for review and comment. FCC provided written comments, which are reprinted in appendix II of this report. In its letter, FCC said that the Commission strives to use its resources efficiently to meet the agency's mission and its congressional requirements and that the Commission is reviewing our recommendation. DOJ provided technical comments that we incorporated as appropriate.

We are sending copies of this report to the Attorney General and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

The objectives of this report were to examine (1) how competition has changed since 2005; (2) the increased choices that consumers have in acquiring video programming and content; (3) the factors that can spur or hinder competition; and (4) stakeholders' views on how the federal government's regulations, reports, and other activities have kept pace with changes in the industry.
To assess how competition has changed since 2005, we first conducted a literature search and reviewed media articles, academic studies, industry reports, and prior GAO reports on the structure, economics, and technological factors affecting the development and distribution of video content. We also reviewed relevant reports prepared by the Federal Communications Commission (FCC), the Department of Justice (DOJ), and the Federal Trade Commission (FTC). We verified information from the literature review through interviews with FCC and DOJ officials, industry participants, trade associations, consumer groups, industry analysts, and other experts. We selected industry participants to include producers, aggregators, and distributors of content, such as broadcast and cable networks, multichannel video programming distributors (MVPD)—cable, satellite, and telephone companies—and online video distributors (OVDs).

To assess what increased choices consumers have in acquiring video programming and content, we conducted an analysis of MVPD services in 20 randomly sampled zip codes across the United States. To ensure a representative sample, we used the Rural/Urban Commuting Area zip code file, which includes all U.S. zip codes. We sorted the zip code file into four census regions—Northeast, South, Midwest, and West—and, within these regions, into five urban and rural classifications—urban, suburban, large town, small town, or isolated rural. This process resulted in 20 different segments—5 classifications for each of the 4 census regions. We then randomly sorted the zip codes within each of the 20 segments and selected one from each, resulting in the identification of 20 zip codes (a simplified sketch of this selection procedure appears below). We identified the community names associated with the 20 zip codes using the United States Postal Service online zip code locator; we used FCC's cable franchise information to identify the MVPDs that served the communities. We contacted these MVPDs and asked them for publicly available information on their services and prices. The information requested and collected included channel lineups by package name, the monthly rate for each package, broadband packages (whether provided stand-alone or combined with video services), available broadband speeds, available online video offerings, available out-of-household viewing options, and available DVR options. We had a 100 percent response rate, as MVPDs serving all 20 communities provided information. We then analyzed the data to compare packages offered, prices, and the level of competition, among other factors. Our results reflect the competition, packages, and pricing in the 20 zip codes and are not generalizable to all zip codes.

To identify the prices for traditional MVPD services, we gathered data from FCC's reports on cable industry prices for the years 2005 through 2012, which represent the most recent available data. These cable rates were for the basic and expanded basic tiers. We reviewed FCC documentation and information provided by FCC staff to assess the reliability of the cable price data and determined that FCC's data were sufficiently reliable for the purposes of our report. We conducted interviews with content producers, MVPDs, consumer groups, and experts to collect information on reasons for the rise in cable rates.

To assess the factors that can spur or hinder competition in the video marketplace, we conducted a literature search and reviewed relevant articles and prior GAO reports, as discussed previously.
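As a concrete illustration of the zip-code selection described above, the following is a minimal sketch of a stratified random draw. It is a sketch only, under stated assumptions: the file name and column names are hypothetical, and Python's standard library stands in for whatever tools were actually used.

```python
# Minimal sketch of the stratified zip-code selection described above.
# The file and column names are hypothetical; the actual Rural/Urban
# Commuting Area file and the tools GAO used may differ.
import csv
import random

REGIONS = ["Northeast", "South", "Midwest", "West"]
CLASSES = ["urban", "suburban", "large town", "small town", "isolated rural"]

# Build 4 regions x 5 classifications = 20 segments.
segments = {(r, c): [] for r in REGIONS for c in CLASSES}

with open("ruca_zip_codes.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        key = (row["census_region"], row["ruca_class"])  # hypothetical columns
        if key in segments:
            segments[key].append(row["zip_code"])

# Randomly sort each segment and take its first zip code, yielding 20 in all.
sample = []
for key, zips in segments.items():
    random.shuffle(zips)
    if zips:  # guard against an empty segment
        sample.append((key, zips[0]))

for (region, cls), zip_code in sorted(sample):
    print(f"{region:9s}  {cls:14s}  {zip_code}")
```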
Through a review of DOJ’s website, we examined DOJ’s activities in the video marketplace, including any investigations of potential relevant antitrust violations and the agency’s review of the Comcast/NBCU proposed merger; we verified our research with DOJ. We also conducted interviews with industry participants, trade associations, consumer groups, industry analysts, and other experts for their views on factors that have increased or hindered competition. Our ability to understand some specific aspects of the industry was limited because certain information and data are generally not made publicly available. In particular, details of the contracts between content providers and MVPDs—such as retransmission fees, per subscriber fees for cable networks, and other requirements (such as tier placement) surrounding the carriage of broadcast and cable channels— are generally covered under nondisclosure agreements. Similarly, information on the negotiations for the purchase of programming by OVDs is generally not publicly available. Other areas with limited publicly available data and information include the extent to which retransmission fees are retained by local broadcast stations or flow to broadcast networks and copyright holders, the extent to which the access of content through OVD providers congests broadband providers’ networks, and the cost of producing high-valued content, such as sports and popular TV dramas. As such, for some issues discussed in this report, our information is largely based on the statements and opinions of industry participants that we cannot independently corroborate. To assess stakeholders’ views on how the federal government’s regulations, reports, and other activities have kept pace with changes in the industry, we analyzed the FCC Media Bureau’s activities since 2005 to determine competitive issues that the Commission has or is addressing. To do this, we reviewed the Media Bureau’s website, which lists its activities. We also analyzed DOJ’s investigative activities in the video marketplace through its website. We verified information collected from our reviews with FCC and DOJ. As part of this review, we reviewed the relevant laws, regulations, and FCC proceedings including Notices of Inquiry, Notices of Proposed Rulemakings, and Reports and Orders. We also conducted interviews with industry participants, trade associations, consumer groups, industry analysts, and other experts for their views on how federal regulations, reports, and activities are keeping pace with the industry. We prepared a summary analysis of all interviews that we conducted to determine the four major issues that interviewees said that Congress or the federal government should address. To determine FCC’s consistency in publishing its cable industry price and video competition reports, we analyzed all reports since they were first published. In this analysis, we looked at when the reports were completed and submitted for Commission approval and when the Commission approved and published the reports. We also interviewed FCC on why the reports were not published annually. We conducted this performance audit from July 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Mark Goldstein, (202) 512-2834 or [email protected]. Other key contributors to this report were Mike Clements (Assistant Director), Amy Abramowitz, Matt Cail, Dave Hooper, Delwen Jones, Maureen Luna-Long, Josh Ormond, and Andrew Stavisky.
Video provided through subscription video services, such as cable and satellite television, is a central source of news and entertainment for the majority of U.S. households. Technological advances have ushered in a wave of new products and services, bringing online distribution of video to consumers. Federal laws and regulations have sought to foster competition in the video programming and distribution marketplace, but many such laws were adopted prior to the emergence of these advances. Among other things, GAO examined (1) how competition has changed since 2005; (2) the increased choices that consumers have in acquiring video programming and content; and (3) stakeholders' views on how the government's regulations, reports, and other activities have kept pace with changes in the industry. GAO reviewed relevant literature and reports; interviewed agency officials, industry stakeholders, and experts; and analyzed prices and service offerings in 20 randomly sampled zip codes (the prices and service offerings reflect conditions in the 20 zip codes and are not generalizable to all zip codes). Since GAO reported on competition in 2005, competition among video content producers has changed little, while competition among distributors has increased. According to data cited by the Federal Communications Commission (FCC), seven companies' broadcast and cable networks accounted for about 95 percent of all television viewing hours in the United States. Further, ownership of broadcast and cable networks changed little from 2005 through 2012. In contrast, the introduction of video service provided by telephone companies, such as Verizon's FiOS service, has brought additional competition to video distribution. At year-end 2010, roughly 1 in 3 households could choose among 4 or more subscription video distributors: typically a cable company, 2 satellite companies, and a telephone company. With technological advances, companies are increasingly distributing video online. Online video distributors (OVD) are developing a variety of business models, including free and subscription-based services. However, online viewing and revenues represent a small portion of overall media viewing hours and revenue. Consumers continue to acquire programming and content through packages, but OVDs are delivering new choices. All the video distributors that GAO analyzed required consumers to purchase a package of channels, often through the basic, expanded basic, and premium tiers. According to FCC data, in 2011, the average price for expanded basic service was $57.46, an increase of over 33 percent since 2005 that exceeded the 15 percent increase in the Consumer Price Index. OVDs and other companies allow consumers to select content on a program or episode basis. However, these services typically do not include the most recent television programs and movies, thereby limiting their value for some consumers.
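As a quick check on the price comparison above, the sketch below works through the arithmetic. The percentages are the cumulative changes cited in the report; the implied 2005 price is our own back-calculation, not an FCC figure.

```python
# Back-of-the-envelope arithmetic for the cable price figures cited above.
# The implied 2005 price is our back-calculation, not an FCC statistic.
price_2011 = 57.46       # average expanded basic price in 2011 (dollars)
cable_growth = 0.33      # cited cumulative price increase, 2005-2011
cpi_growth = 0.15        # cited cumulative CPI increase, same period

implied_2005_price = price_2011 / (1 + cable_growth)
print(f"Implied 2005 expanded basic price: ${implied_2005_price:.2f}")  # ~$43.20
print(f"Cable growth exceeded CPI by "
      f"{100 * (cable_growth - cpi_growth):.0f} percentage points")
```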
Stakeholders generally noted that laws and regulations have not kept pace with changes in the video industry, and FCC has not consistently reported on competition. Some legislation governing the media industry was adopted over 20 years ago, before telephone companies entered the marketplace and the commercialization of the Internet facilitated new OVD services. A majority of stakeholders with whom GAO spoke stated that some provisions should be revisited. FCC is required to report annually to Congress on cable industry prices and competition in the video marketplace. However, since 1992, FCC has not published the cable industry price report 4 times--in 2004, 2006, 2007, and 2010--and has not published the video competition report 4 times--in 2007, 2008, 2010, and 2011. According to FCC officials, a variety of administrative factors contributed to the missed reports, and the reports are time consuming to prepare. The reports also impose burdens on some industry participants. Less frequent reporting on cable industry prices and competition in the video marketplace could allow for continued measurement of industry performance while reducing the burden on FCC and industry participants. GAO found little change in the reported findings from year to year in FCC's video competition report. FCC's 2009 cable industry price report and 2012 video competition report followed missed reports and included data covering multiple years; they could serve as a model for issuing such reports less frequently. Since these reports are statutorily required, Congress, with input from FCC, would need to determine any new reporting frequency. FCC should study the advantages and disadvantages of different reporting frequencies for its cable industry price and video competition reports and transmit the results of its analysis to Congress. FCC said that the Commission strives to use its resources efficiently to meet the agency's mission and its congressional requirements, and the Commission is reviewing GAO's recommendation.
Climate change is having a variety of impacts on natural resources in the United States, ranging from more severe drought to increased flooding, and is altering assumptions that have been central to water resource planning and management. Congress and the Executive Office of the President have directed federal agencies to address the potential impacts of climate change. The two key federal water resource management agencies included in this review—the Corps and Reclamation—have similar yet distinct roles in managing water for a wide variety of purposes. In addition to the water management challenges posed by climate change, both agencies are dealing with aging water management infrastructure with limited funding for maintenance and construction. According to the U.S. Global Change Research Program, changes in the climate in the United States and its coastal waters have altered—and will continue to alter—the water cycle, affecting where, when, and how much water is available for all uses. Changes in the climate—including warmer temperatures, changes in precipitation patterns, rising sea levels, and more frequent and intense storms—affect water resources in a number of ways, such as increased flooding in some areas and drought in others, and inundation and erosion in coastal areas. Precisely how and to what extent changes in the climate will affect particular water resources in the future is uncertain, but climate-related changes are expected to continue and increase in intensity in some areas of the nation. Climate change has the potential to affect many aspects of the environment and society in which water resource management plays an active role. A 2011 federal interagency review of the potential impacts of climate change on water resources identified four interrelated areas of concern for water resource managers as follows: assuring an adequate water supply for multiple needs, such as drinking water, agriculture, energy production, industrial uses, navigation, and recreation; protecting life, health, and property in the face of risks posed by floods and droughts; protecting the quality of freshwater resources, including the quality of surface water and groundwater, and the health of fisheries and aquatic habitat; and protecting coastal and ocean resources as rising sea levels and changes in storm frequency, intensity, and duration impact coastal infrastructure. Adaptation—defined by the National Research Council as adjustments in natural or human systems to a new or changing environment that exploit beneficial opportunities or moderate negative effects—is an element of the proposed responses to climate change that is gaining more attention. More specifically, policymakers are increasingly viewing adaptation as a risk-management strategy to protect vulnerable sectors and communities that might be affected. As we reported in our May 2013 report on land resource management agency adaptation efforts, climate change adaptation planning frameworks generally consist of four key elements that are reviewed and revised as needed as new information emerges. These four elements are the following: Establish a mandate to address climate change with clearly articulated adaptation goals, objectives, and measures of success toward meeting goals.
Assess and understand the risks, vulnerabilities, and opportunities posed by climate change by determining (1) what aspects of the climate are changing and over what periods, (2) which resources will be most at risk, (3) why these resources are likely to be vulnerable, and (4) what uncertainties are associated with the predicted climate change impacts and how this may impact adaptation efforts. Develop and prioritize management adaptation actions; that is, determine how to respond to the identified risks by considering a wide array of possible adaptation measures and identifying the highest priority adaptation measures. Implement management options, evaluate the results to determine the actions' effectiveness, and make adjustments as necessary. As climate continues to change, adaptation actions need to be regularly monitored for effectiveness, and plans need to incorporate new information about risks, lessons learned, and modified priorities. According to a 2009 study collaboratively produced by the Corps, NOAA, Reclamation, and USGS, a variety of water-management options might be considered to facilitate adaptation to climate change, including operational changes, demand management, and infrastructure changes. The study concluded that the options for responding to the effects of climate change will vary by location, and that evaluating options will likely require a partnership between federal, state, and local interests to attain consensus among water managers and users. In 2009, Congress passed the SECURE Water Act, requiring Reclamation to establish a climate change adaptation program to (1) assess the effect of and risk resulting from global climate change on the quantity of water resources and (2) develop strategies to address potential water shortages and other impacts. To assess the effect of climate change, the law requires Reclamation to analyze the extent to which changes in water supply will impact several areas, including Reclamation's ability to deliver water to its customers, hydroelectric power generation, fish and wildlife habitat, and recreation. The law requires Reclamation to assess specific risks to the water supply of each of its major river basins, including risks related to, among other things, changes in snow cover and the timing and quantity of runoff. To develop strategies to address impacts, the law requires Reclamation, in consultation with nonfederal stakeholders such as appropriate state water resource agencies, to consider and develop appropriate strategies to mitigate the impacts of water supply changes, such as strategies for modifying reservoir storage or operating guidelines and for water conservation. Also in 2009, the President signed Executive Order 13514, Federal Leadership in Environmental, Energy, and Economic Performance, which, among other things, directs agencies to participate in the Interagency Climate Change Adaptation Task Force (Task Force), which was already developing a strategy for adaptation to climate change, and to develop approaches through which the policies and practices of the agencies can be made compatible with the Task Force's strategy. In October 2010, the Task Force delivered a progress report to the President through the Council on Environmental Quality containing overarching policy goals to advance climate adaptation and recommending the development of a national action plan to strengthen climate change adaptation for freshwater resources.
Based on the work of the Task Force, the Council on Environmental Quality subsequently issued detailed adaptation planning implementation instructions in March 2011. The instructions directed the agencies to issue an agency-wide climate change adaptation policy statement, complete a high-level analysis of agency vulnerability to climate change, and submit to the Council on Environmental Quality and the Office of Management and Budget their climate adaptation plans by June 4, 2012, for implementation in fiscal year 2013. For over 230 years, the Corps has led the development and stewardship of much of the nation's public water resources. The Corps' Civil Works Program plans and manages water for transportation, recreation, energy, wildlife habitat, aquatic ecosystems, and water supply, while reducing the impacts of flood damages and other natural disasters. Specifically, the Corps has constructed—and continues to operate, maintain, and rehabilitate—a large inventory and wide variety of water management infrastructure, including reservoirs, hydropower facilities, commercial inland waterways, harbors, and levee systems. In June 2011, in response to the implementing instructions for Executive Order 13514, the Corps established its adaptation policy statement for addressing the effects of climate change, which called for the integration of climate change adaptation in all Corps activities. Two Corps programmatic efforts—the Interagency Performance Evaluation Task Force/Hurricane Protection Decision Chronology Lessons Learned Implementation Team (also known as Actions for Change) and the Responses to Climate Change Program—support the Corps' ongoing adaptation activities. Since 1902, Reclamation has carried out its mission to manage, develop, and protect water and related resources in 17 western states. The agency has led or provided assistance in constructing most of the large dams and water diversion structures in the West for the purpose of developing water supplies for irrigation—that is, "reclaiming" these lands for human use. In 2009, Reclamation directed its Office of Research and Development and its Office of Policy and Administration to take the lead in implementing the actions required by the SECURE Water Act. Reclamation's climate adaptation efforts fall within the larger effort by Interior to, among other things, implement Executive Order 13514. In 2013, Interior adopted a policy directing the agency to integrate climate change adaptation strategies into its operations, policies, programs, and planning. See appendix I for more information on the Corps' and Reclamation's organization and infrastructure. In addition to each agency's individual efforts, the Corps and Reclamation established a partnership in 2007—the Climate Change and Water Working Group (CCAWWG)—to address their mutual concerns about the potential effects of climate change on their agencies' missions. In 2009, the CCAWWG partners jointly produced a preliminary assessment of how climate change could impact federal water resources management, which explored strategies to improve water management by tracking, anticipating, and responding to climate change, and identified adaptation challenges. According to a 2012 National Research Council report on Corps infrastructure, large portions of the Corps' water resources infrastructure were built over 50 years ago and are experiencing various stages of decay and disrepair, making project maintenance and rehabilitation a high priority.
The report also found that federal funding over the past 20 years has consistently been inadequate to maintain the Corps' infrastructure at acceptable levels of performance and efficiency. Similarly, most of Reclamation's water infrastructure facilities are more than 50 years old, and, according to a 2011 Congressional Research Service report, with limited budgetary resources and aging infrastructure, Reclamation's maintenance needs are likely to increase, as is competition for limited funding. Despite the ongoing challenges of operating and maintaining aging infrastructure under budgetary constraints, it is important that the Corps and Reclamation address the challenge of managing climate change risk in order to limit the fiscal exposure of the federal government. As noted in our 2013 high-risk update, climate change poses a significant financial risk to the federal government, including, but not limited to, its role as the owner or operator of extensive infrastructure vulnerable to climate impacts. State and local authorities are responsible for planning and implementing many types of infrastructure projects, and decisions at these levels of government can affect the federal government's fiscal exposure. While implementing adaptive strategies to protect infrastructure may be costly, there is a growing recognition that the cost of inaction could be greater and, given the government's precarious fiscal position, increasingly difficult to manage under expected budget pressures, which will constrain not just future ad hoc responses but other federal programs as well. As stated in a 2010 National Research Council report, increasing the nation's ability to respond to a changing climate can be viewed as an insurance policy against climate change risks. Since 2009, as directed by executive order or required by law, the Corps and Reclamation have taken steps to assess water resource and infrastructure vulnerabilities and develop guidance and strategies for adapting to climate change, as shown in table 2. Officials from both agencies told us that as they develop the necessary guidance, they plan to implement specific adaptation strategies and share costs with state and local partners. Since 2009, the Corps has broadly assessed how climate change could affect its missions. Specifically, a phased assessment of the vulnerability of coastal projects is under way, more refined watershed-level vulnerability assessments are being developed, and pilot studies are being conducted to develop adaptation guidance and strategies. In March 2012, responding to guidance for implementing Executive Order 13514, the Corps provided the President's Council on Environmental Quality with a high-level analysis of the vulnerability of the Corps' missions and operations to climate change. The Corps' analysis included an assessment of whether the potential effects of climate change on the Corps' business areas would likely be negative, positive, or a mix of both. The analysis found, for example, that increasing air temperatures may have an effect on glaciers that could negatively impact Corps business areas such as navigation, flood and coastal storm damage reduction, ecosystem restoration, and emergency management, but the effect of increasing air temperatures on river ice could have both positive and negative impacts on those same four business areas. As part of the high-level analysis, the Corps also identified a number of adaptation priorities.
To address its priority of developing more refined vulnerability assessments, the Corps is currently undertaking coastal project vulnerability assessments and is developing and testing a methodology for nationwide, watershed-level vulnerability assessments of its inland missions, operations, programs, and projects. The nationwide watershed-level vulnerability assessments will, according to Corps officials, help them make initial screening-level determinations of where adaptation strategies are or are not needed, and then prioritize accordingly. Agency officials expect to complete the first phase of screening assessments in 2013, with more refined assessments to come in future years. Ultimately, the Corps plans to combine the coastal and inland assessments into a unified methodology. In addition, Corps Civil Works officials told us that they are developing syntheses of literature to provide up-to-date information about regional climate impacts in order to support project-specific planning and help implement adaptation strategies. As part of its efforts to address the adaptation priorities of developing a risk-informed decision-making framework for climate change adaptation and a portfolio of adaptation approaches, the Corps is conducting 15 pilot studies nationwide to test different methods and frameworks for adapting to climate change. The Corps has completed 5 of these studies, and 10 others are ongoing. According to Corps Civil Works officials, the pilot studies were proposed by Corps district staff and selected by senior Corps staff in a competitive, internal process. The 15 pilots are led by 13 different Corps districts and address project planning, engineering, operations, and maintenance for 6 different Corps business areas, involving a variety of infrastructure types, such as flood risk reduction projects, reservoirs, and canals. Some of the pilots are being conducted collaboratively with federal, state, and university partners. Corps officials expect the pilot projects, taken together, to provide a body of knowledge and tested methods that will serve as the foundation for the agency's guidance and future adaptation efforts. See appendix II for additional information on the locations and results of the Corps' pilot studies. Reclamation has broadly assessed how climate change may affect water resources in the western United States as part of the Basin Study Program it established to meet the requirements of the SECURE Water Act. Specifically, responding to the Act's requirement to assess specific risks to water supplies, Reclamation reported to Congress in 2011 its assessments of the potential effects of climate change on water supplies in the major river basins in the West. The studies, known as West-Wide Climate Risk Assessments (WWCRA), are high-level, baseline assessments of the potential impacts of climate change on future water supplies—including impacts on Reclamation's ability to deliver water and hydropower—for each of the major river basins where Reclamation owns and operates water management infrastructure. According to agency officials, Reclamation is now conducting WWCRAs that focus on future water demand and will combine this information with its water supply assessments to form a more complete picture of the potential impacts of climate change on its water infrastructure. Reclamation policy officials told us that these combined assessments will be included in Reclamation's next SECURE Water Act report, which is due in 2016.
These assessments will also be updated based on the latest climate science as water uses and conditions change, and they will be included in future reports that are due to Congress every 5 years. In addition to its WWCRAs, Reclamation is partnering with nonfederal entities to conduct more focused assessments, known as Basin Studies, to identify specific water resources vulnerabilities and to implement the SECURE Water Act's requirement that Reclamation consider and develop strategies to mitigate climate change impacts on water supplies. Reclamation selects the Basin Studies through a competitive process and shares their costs with nonfederal partners. Through the Basin Studies, Reclamation intends to identify basin-wide water supply vulnerabilities, project climate change's impacts on the performance of water infrastructure, and develop adaptation strategies—such as operational or physical changes to existing water infrastructure or development of new facilities—to address these impacts. According to Reclamation guidance, the Basin Studies are to develop long-term projections of water supply and demand that take into account specific climate change risks identified in the SECURE Water Act. The studies will analyze how well existing water and power infrastructure are meeting current demands and then forecast their performance in light of projected water supply and demand. To address projected imbalances in supply and demand, the studies are to identify adaptation strategies that include nonstructural (i.e., management and operations) and structural (i.e., capital expenditures) changes. As of September 2013, 3 Basin Studies have been completed, and an additional 14 studies have been funded and are under way. Reclamation and its partners, including state water management agencies and local irrigation districts, completed the Yakima River Basin Study in 2011 and both the St. Mary River and Milk River Basins Study and the Colorado River Basin Study in 2012. Some studies entirely cover the major river basins specified in the SECURE Water Act (such as the Colorado River Basin Study), while other studies cover subbasins or tributaries within the boundaries of the major river basins (such as the Yakima River Basin Study, which covers a tributary of the Columbia River). Reclamation officials told us that they next intend to initiate feasibility studies for adaptation strategies identified in completed Basin Studies by making funds available to nonfederal partners, beginning with an initial feasibility study in 2013. See appendix II for additional information on the locations and results of Reclamation's Basin Studies.
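At its core, the supply-and-demand analysis described above is a comparison of two long-term projections. The following minimal sketch illustrates that comparison; the function, units, and numbers are our own illustrative assumptions, not values from any Basin Study.

```python
# Illustrative supply/demand imbalance comparison; all values are made up.
def projected_imbalance(supply_projection, demand_projection):
    """Return year -> gap (negative values mean demand exceeds supply)."""
    return {year: supply_projection[year] - demand_projection[year]
            for year in supply_projection}

# Hypothetical projections in million acre-feet (MAF):
supply = {2030: 14.2, 2050: 13.1}   # projected basin water supply
demand = {2030: 14.8, 2050: 15.6}   # projected basin water demand

for year, gap in projected_imbalance(supply, demand).items():
    status = "shortfall" if gap < 0 else "surplus"
    print(f"{year}: {abs(gap):.1f} MAF {status}")
```

A projected shortfall in such a comparison is what would motivate the nonstructural and structural adaptation strategies the studies are to identify.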
As recommended by the Task Force in 2010, the Corps and Reclamation are taking a phased approach to climate adaptation, including developing agency-wide guidance for adaptation. Specifically, through broad climate vulnerability assessments, agency officials told us they have expanded their knowledge of climate change and its impacts, allowing them to assess, at a high level, how these impacts may affect agency missions, programs, and operations. These initial vulnerability assessments have informed the agencies in developing and conducting more detailed vulnerability assessments, while also identifying specific strategies for climate change adaptation through the pilots and Basin Studies. Both agencies have also begun integrating what they have learned into their policies and program guidance. For example, beginning in 2009 and updated in 2011, the Corps issued guidance requiring that potential sea-level changes be considered in all of the agency's coastal planning, engineering, operations, and maintenance activities. The Corps is currently developing guidance for implementing coastal and inland adaptation strategies. Similarly, Reclamation officials told us that in 2013 the agency began to incorporate climate change adaptation considerations into its policies and guidance for project feasibility studies and environmental impact studies, among other things, using information and lessons learned from its WWCRA and Basin Study vulnerability assessments. According to agency officials, as the Corps and Reclamation integrate climate adaptation considerations into their policies and program guidance, they will begin to take steps toward implementing the potential adaptation strategies that they have identified. According to the Corps' 2012 adaptation plan, the agency's goal is to create a policy and guidance framework that will support the implementation of practical, nationally consistent, legally justifiable, and cost-effective adaptation strategies. Accordingly, Corps Civil Works officials told us that the pilots conducted to date have been largely focused on informing the development of policy and guidance, and that no structural or operational adaptation strategies have been implemented. Similarly, Reclamation officials told us that no structural or operational adaptation strategies have been implemented in response to the agency's vulnerability assessments, and that its efforts to update guidance will be informed by the adaptation strategy feasibility studies as they are completed. Corps and Reclamation officials told us that because they are early in the adaptation process, the extent to which limited budgets and existing infrastructure maintenance backlogs will affect the implementation of adaptation strategies remains to be determined. However, the implementation of adaptation strategies by both agencies will likely rely on collaborative sharing of costs and resources with federal, state, local, and nongovernmental stakeholders. Both agencies have already initiated cost-sharing and resource-leveraging measures. For example, according to agency officials, the Corps is leveraging resources for adaptation pilot studies with state, local, and nongovernmental entities, and Reclamation is splitting the cost of Basin Studies with state and local partners. The agencies plan to continue such collaborative approaches going forward. For example, as required under the SECURE Water Act, Reclamation officials told us they intend to share the cost of the feasibility studies for adaptation strategies equally with nonfederal partners. In 2009, the Corps and Reclamation—with their CCAWWG partners NOAA and USGS—published a study (referred to in this report as the CCAWWG study). This study identified several challenges that climate change poses for water resource managers, including (1) identifying the data and tools needed by water managers to address climate change, (2) ensuring the sustained collection of climate data, (3) incorporating climate science into water management tools, and (4) educating water managers to use climate data and tools. We found that the Corps and Reclamation are addressing these challenges, making collaboration a key element of their efforts, and doing so in a manner generally consistent with best practices for sustained collaboration.
The CCAWWG study identified a number of challenges faced by the Corps and Reclamation in adapting to climate change, and the agencies have taken a variety of actions to address these challenges. Identifying the data and tools needed by water managers to address climate change: The CCAWWG agencies are collaborating to produce a series of four documents identifying their common data and tool needs and a strategy for meeting them, with the objective, among other things, of guiding and fostering federal and nonfederal research and technology investments toward meeting these needs. The first document, published in 2011, described the water management community's needs for climate change information and tools to support long-term planning for time scales of 5 years and more. The second document, published in 2013, described the data and tools needed to support short-term planning of less than 5 years. For both documents, the Corps and Reclamation asked their water resource managers to identify the information and tools most relevant to their programs, and they also consulted with other federal, state, and local agencies and stakeholders with a role in water resource management about their needs. The documents summarized the information and tools needed into categories and identified the users' most pressing needs within each category. According to these documents, the CCAWWG agencies plan next to prepare two companion documents to identify a scientific strategy for meeting the research needs identified in the two initial documents. The two completed documents note that USGS and NOAA will jointly prepare the companion documents, incorporating perspectives from other federal and nonfederal representatives of the scientific community. Ensuring the sustained collection of climate data: The Corps and Reclamation are coordinating with the data-collecting agencies and sharing some costs associated with their efforts. According to the 2009 CCAWWG study, at the same time as the need for observational data to support climate adaptation is increasing, the observational networks crucial to increasing understanding are shrinking. For example, in recent decades, maintenance of long-term monitoring networks has declined because of a lack of funding—USGS alone has deactivated or discontinued almost 1,700 surface-water stream gauges. Corps Civil Works officials told us that stream gauge data are extremely important, not only to the Corps' ongoing operations, but also because science agencies use the data to produce the climate change information upon which the Corps bases its adaptation planning. To ensure that the needed data are collected, the Corps has a formal agreement to provide funding to USGS—about $18 million in fiscal year 2013, according to Corps officials—to operate stream gauges that provide data for the Corps' water planning and management activities. These data are also available for use by all federal and state agencies, as well as others interested in water information. Reclamation policy officials told us that, in response to a SECURE Water Act requirement to consult with federal and applicable state agencies to develop a monitoring plan for acquiring and maintaining water resources information, agency staff are currently identifying information needs and plan to work collaboratively with data-collecting agencies, including NOAA, USGS, and the U.S. Department of Agriculture, to develop the plan.
Reclamation policy officials stated that the Basin Study Program activities, including the WWCRAs and Basin Studies, are also contributing to the planning effort by providing valuable information about how the agency's monitoring needs are changing as a result of climate change. Reclamation officials told us that they intend to work with the U.S. Department of Agriculture and USGS to initiate the required monitoring plan in 2014. These collaborative efforts can help ensure that the long-term water resource monitoring networks critical for detecting and quantifying climate change and its impacts—as well as measuring the effectiveness of future adaptation strategies—will be properly configured and continue to operate. Incorporating climate science into water management tools: The Corps, Reclamation, and others are collaborating in a number of efforts to incorporate climate science into water management tools. For example, Reclamation research and development officials, as well as policy officials, told us that as part of the agency's effort to enhance the capabilities of water resource managers to use climate data, Reclamation is coleading two of Interior's Landscape Conservation Cooperatives in the Colorado River Basin area. Reclamation officials told us that the Landscape Conservation Cooperatives, which are partnerships of governmental and nongovernmental stakeholders, will focus on developing and communicating science to inform climate adaptation strategies for ecological regions, or "landscapes." In collaboration with academia, other federal agencies, local and state partners, and the public, the Landscape Conservation Cooperatives will provide products and services, such as climate change computer models and vulnerability assessments; coordinate with Interior's regional Climate Science Centers to synthesize existing climate change impact data and management strategies; and help resource managers put them into action on the ground. The Landscape Conservation Cooperatives will also coordinate with NOAA's Regional Integrated Sciences and Assessment program. Similarly, the Corps is engaged in collaborative efforts with external partners to integrate climate science into planning tools for water resource managers. For example, the Corps partnered with NOAA's National Ocean Service to create an online sea level change calculator. According to Corps officials, this collaboration allowed the rapid integration of climate science into engineering guidance for coastal projects. Corps Civil Works officials also told us that collaboration with the Federal Emergency Management Agency and NOAA's Urban Northeast Regional Integrated Sciences and Assessment program contributed to the development of a post-Superstorm Sandy sea level rise tool to help affected communities, residents, and other stakeholders consider risks from future sea level rise in planning for reconstruction.
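To give a sense of what such a calculator computes: Corps guidance from this period describes projecting eustatic (global mean) sea level change with modified National Research Council curves of the form E(t) = 0.0017t + bt^2, where t is years after 1992 and E is in meters. The sketch below applies that formula; the b coefficients are our assumed values for the intermediate and high scenarios and should be checked against current guidance rather than taken as authoritative.

```python
# Hedged sketch of the calculation behind a sea level change calculator,
# using the modified NRC curve form E(t) = 0.0017*t + b*t**2 (t = years
# after 1992, E in meters). The b values below are our assumptions for
# the intermediate and high scenarios, not verified against guidance.
B_INTERMEDIATE = 2.71e-5
B_HIGH = 1.13e-4

def eustatic_rise(year: int, b: float) -> float:
    """Projected global mean sea level rise (meters) relative to 1992."""
    t = year - 1992
    return 0.0017 * t + b * t ** 2

for label, b in [("intermediate", B_INTERMEDIATE), ("high", B_HIGH)]:
    print(f"Projected rise by 2100 ({label}): {eustatic_rise(2100, b):.2f} m")
# Prints roughly 0.50 m (intermediate) and 1.50 m (high) for 2100.
```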
In addition, the Corps and NOAA are coleading actions to implement the Task Force recommendation to develop a federal Internet portal to provide current, relevant, and high-quality information on water resources and climate change data applications and tools for assessing the vulnerability of water programs and facilities to climate change. As a result, the Corps-hosted Federal Support Toolbox is now publicly available online. According to Corps Civil Works officials, the website is a "one stop shop" for technical resources to support water management. The website states that the Toolbox is an evolving and comprehensive water resources data portal with direct links to valuable databases, innovative programs and initiatives, and state-of-the-art models and tools. Educating water managers to use climate data and tools: The Corps and Reclamation are collaborating as coleaders in developing and implementing a training program as recommended by the Task Force's National Action Plan: Priorities for Managing Freshwater Resources in a Changing Environment. These agencies—joined by NOAA, USGS, and the Environmental Protection Agency—are collaborating with the University Corporation for Atmospheric Research's COMET Program and the Western Water Assessment to produce training courses for federal and nonfederal resource management professionals who need to assess the impacts of climate change on water and related resources. Specifically, in 2012, the agencies implemented an online course on incorporating climate change into water resource planning, which was a prerequisite for participating in the first two pilot residence courses—addressing climate impacts on surface hydrology and on water demand for irrigated crops—offered in early 2013. According to the program's website, the online course was designed to provide students with water resource planning knowledge, while the residence courses offer opportunities for gaining hands-on experience in applying that knowledge. According to the website, these courses and the numerous planned future classes are collectively designed to provide a professional development and training series that will help managers assess climate change impacts across the spectrum of natural resources. The Corps and Reclamation have collaborated with each other and other agencies in a manner that is generally consistent with practices that we have previously identified as important to helping enhance and sustain collaboration among federal agencies. In 2005, we reported that collaboration—broadly defined as any joint activity that is intended to produce more public value than could be produced when organizations act alone—can be enhanced and sustained by engaging in eight key practices: (1) defining and articulating a common outcome; (2) establishing mutually reinforcing or joint strategies; (3) identifying and addressing needs by leveraging resources; (4) agreeing on roles and responsibilities; (5) establishing compatible policies, procedures, and other means to operate across agency boundaries; (6) developing mechanisms to monitor, evaluate, and report on results; (7) reinforcing agency accountability for collaborative efforts through agency plans and reports; and (8) reinforcing individual accountability for collaborative efforts through performance management systems. Running throughout these practices are a number of factors, such as leadership, trust, and organizational culture, which are necessary elements for a collaborative working relationship. The Corps and Reclamation have made collaboration a key element of their adaptation policies and plans and have reinforced accountability for collaboration through agency performance management systems. For example, the Corps' climate adaptation policy states that collaborations are the most effective way to develop strategies to identify and reduce vulnerabilities to potential future climate change, and it calls for continued collaborative adaptation efforts.
As stated in the Corps' 2012 Climate Adaptation Plan and Report, it is the objective of the agency to facilitate and promote closer and more fruitful interagency cooperation and to promote sharing of impact and adaptation data and information between federal, state, and local partners. Finally, to reinforce accountability, the draft performance metrics for climate adaptation in the Corps' 2013 Army Campaign Plan include a target for the number of products developed in collaboration with other water resource agencies for adaptation planning and action. Reclamation has similarly included collaboration as a key element of its adaptation policy, plans, and performance metrics. Under Interior's 2013 Climate Change Adaptation Policy, Reclamation is to integrate climate change adaptation strategies into its policies and practices by, among other actions, collaborating with stakeholders through Landscape Conservation Cooperatives, Climate Science Centers, and other partnerships to increase understanding of climate change. Furthermore, Reclamation's strategic plan for its adaptation and conservation programs states that collaborative partnerships must be developed to identify the adaptive strategies needed to address climate change. Finally, through its climate adaptation efforts, Reclamation is contributing to Interior's goal of identifying resources that are particularly vulnerable to climate change and implementing coordinated adaptation responses for half of the nation by September 30, 2013. We also found that the Corps and Reclamation have collaborated with each other and with others in accordance with best practices for collaboration among agencies. For example, in their key collaborative effort—CCAWWG—each agency's role is well defined; the Corps and Reclamation provide water engineering and management expertise, and their partner agencies provide climate science expertise. CCAWWG has clearly defined common objectives, including the development of working-level relationships between federal water management and federal science agencies, and it leverages resources across agencies to meet common needs. The CCAWWG agencies also have mutually reinforcing strategies. For example, the operating needs of the Corps and Reclamation drive the direction of science inquiries by the science agencies, resulting in improved operations, while at the same time the data collected and compiled by the water management agencies for a specific purpose can be used by the science agencies for alternative objectives. Outside of CCAWWG, as mentioned elsewhere in this report, the agencies have also followed key collaborative practices, such as the Corps leveraging resources to fund maintenance of the USGS's stream-flow monitoring networks, and Reclamation establishing joint strategies with state agencies and others to conduct Basin Studies. We provided a draft of this product for review and comment to the Departments of Defense and the Interior. The Department of Defense provided technical comments that were incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies to the Secretaries of Defense and the Interior; the appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The U.S. Army Corps of Engineers (Corps)—an agency within the Department of Defense and a Major Command within the Army—is composed of four program areas: Civil Works, Military Construction, Real Estate, and Research and Development. To carry out its Civil Works missions nationwide, the program is organized into eight geographic divisions composed of 38 districts, as shown in figure 1. Division and district geographic boundaries are generally aligned with watershed boundaries. The Corps' Civil Works Program is implemented through nine business areas that represent the diversity of the nation's water resource management needs, as follows: Navigation—provides safe and reliable commercial waterways; Flood and Coastal Storm Damage Reduction—reduces risks to people, homes, and communities from flooding and coastal storms; Environment—restores and protects wetlands and other aquatic ecosystems; Hydropower—generates hydroelectric power for distribution to homes, businesses, and communities; Regulatory—regulates work in navigable rivers and the discharge of dredged and fill materials in U.S. waters; Recreation—provides recreational and educational opportunities; Emergency Management—prepares for natural disasters and acts in response to them; Water Supply—provides water storage for multiple purposes; and Executive Direction and Management—provides leadership, strategic planning, and performance measurement. To implement its responsibilities in the business areas, the Corps has constructed—and continues to operate, maintain, and rehabilitate—a large inventory and wide variety of water management infrastructure. For example, according to Corps records, as of September 2012, its inventory includes 702 reservoirs that, among other things, provide water supply storage and help reduce flood risk, and 75 hydropower facilities with 353 generating units that produce hydroelectric power for homes, businesses, and communities. In the area of navigation, the Corps' inventory includes 12,000 miles of commercial inland waterways, 193 lock sites with 239 chambers, and 926 harbors. For flood reduction, the Corps' inventory includes 14,501 miles of levee systems. For recreation, the Corps' inventory includes 54,879 miles of lakeshore and recreation areas that support 370 million annual visitors, 270,000 jobs, and $16 billion of economic activity. In 2012, the National Research Council reported that the Corps' infrastructure had an estimated value of approximately $164 billion. The Department of the Interior's Bureau of Reclamation (Reclamation) is, according to agency records, the nation's largest wholesale water supplier, providing water to over 31 million people and irrigation water to one out of five western farmers on 10 million acres of farmland, which produce 60 percent of the nation's vegetables and one-quarter of its fresh fruit and nut crops. Reclamation is also the second largest producer of hydropower in the United States. To carry out its mission and operations, Reclamation is organized into five regions, as shown in figure 1. According to agency records as of 2011, Reclamation's asset inventory includes 476 dams and dikes, creating 337 reservoirs with a total storage capacity of 245 million acre-feet of water.
Reclamation's inventory also includes 53 hydroelectric power plants that it owns and operates. These plants provide an average of more than 40 million megawatt hours of energy per year. In 2011, Reclamation estimated that the replacement value of its assets was approximately $90 billion. According to the Congressional Research Service, about two-thirds of the assets in Reclamation's inventory are "transferred works"—facilities that it owns, but for which it has transferred operations and maintenance to nonfederal entities. The remaining one-third of these facilities are "reserved works"—facilities that are owned, operated, and maintained by Reclamation, where the nonfederal beneficiaries, such as irrigation districts, are responsible for repaying construction and maintenance costs. The Corps is conducting pilot studies nationwide to support the development of adaptation guidance and a portfolio of adaptation approaches. Reclamation is conducting Basin Studies—focused assessments within basins—to develop strategies to mitigate climate change impacts on water supplies. As part of its efforts to address the priorities of developing a risk-informed decision-making framework and a portfolio of adaptation approaches, the Corps is conducting 15 pilot studies nationwide to test different methods and frameworks for adapting to climate change (see fig. 3). As of September 2013, the Corps has completed 5 of these studies, and 10 others are ongoing. The Corps plans to initiate 1 additional pilot in 2013. According to the Corps' September 2012 report on its climate change adaptation pilots, each study addresses a central question designed to help the Corps test new ideas, develop and utilize information at the project-level scale, and collect information needed to develop policy and guidance for incorporating adaptation into all agency activities. For example, because the reliability of a planned flood reduction project could depend on how climate change affects future flooding, the Corps launched a pilot study with the goals of determining (1) whether tools and data are available for the Corps to provide reliable estimates of future flooding using climate projections and (2) how changes in precipitation patterns will affect flood events. This pilot study's results support the Corps' effort to design an approximately $1.8 billion flood risk reduction project on the Red River of the North, where flooding has increased in magnitude and frequency since 1942. According to Corps officials, the pilot study successfully adapted existing tools and data to project future flooding, while also testing current guidance and contributing to the development of new guidance for future adaptation efforts. Specifically, Corps officials told us that climate data indicated a trend of flooding consistent with climate change projections. In accordance with existing Corps risk analysis guidance, the pilot study leader convened an expert panel to review the climate information. Based on the expert panel's findings, the pilot team is integrating climate data into the Corps' flood model to project future river conditions. In addition, a member of the pilot study team who is also responsible for developing the Corps' guidance for inland adaptation activities told us that he is integrating his observations and experiences from this study—as well as from his participation in other Corps pilots—into the guidance.
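For readers unfamiliar with trend detection in flood records, the sketch below shows one simple way to quantify an increasing-flood signal in annual peak flows. It is an illustration only, with made-up data, and is not the method the pilot study actually used.

```python
# Illustrative trend estimate for annual peak flows; not the Corps' method.
def linear_trend(years, peaks):
    """Ordinary least-squares slope of annual peak flow versus year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(peaks) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, peaks))
    var = sum((x - mean_x) ** 2 for x in years)
    return cov / var  # positive slope suggests increasing flood magnitude

# Hypothetical annual peak flows in cubic feet per second (cfs):
years = list(range(2000, 2010))
peaks = [21000, 25000, 23500, 27000, 26000,
         30000, 29500, 33000, 31000, 35000]
print(f"Estimated trend: {linear_trend(years, peaks):+.0f} cfs per year")
```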
Corps officials told us that the information developed by the Corps' climate change adaptation pilots is also contributing to subsequent Corps studies. For example, the climate change and modeling data from a pilot study of sediment impacts to Cochiti Dam and Lake in New Mexico are contributing to ongoing studies in the Corps' Albuquerque District. One is the Santa Clara Pueblo Watershed Assessment, which is studying observed climate trends and projected climate changes to address likely future changes to watershed hydrology on the Pueblo's lands, with particular attention to flood risk and water resources development. Reclamation is partnering with nonfederal entities to conduct focused assessments, known as Basin Studies, to identify specific water resources vulnerabilities and to implement the SECURE Water Act's requirement that Reclamation consider and develop strategies to mitigate climate change impacts on water supplies. As of September 2013, 3 Basin Studies have been completed, and an additional 14 studies have been funded and are under way in western states (see fig. 4). All three of Reclamation's completed Basin Studies projected climate change to result in warming and changes in precipitation that will alter the snow cover and runoff that supply water to the river basins, although the changes may be difficult to predict with certainty for any particular location and time. For example, the Colorado River Basin Study projected that snow cover will decrease in the basin as more precipitation falls as rain, rather than snow, and warmer temperatures cause earlier melting. As a result, the runoff in the basin is generally projected to decrease—except in the northern Rockies. The study concluded that without additional water management actions, a wide range of future imbalances in supply and demand are plausible, primarily because of the uncertainty in future water supplies. The completed Basin Studies also found that climate change is expected to contribute to long-term imbalances in water supply and demand. In addition, the completed studies identified, with varying degrees of specificity, possible adaptation strategies to address the projected effects of climate change on water resources. For example, the Colorado River Basin Study identified various combinations of potential adaptation strategies to address projected water supply and demand imbalances. The study noted that the purpose of identifying these strategies was not to select a single, best strategy, but rather to recognize that there are various ways to address projected water imbalances in the basin, and that each approach has positive and negative implications that should be considered in future planning efforts. Agency officials and study participants told us that Reclamation has formed workgroups to further refine the strategies and identify options that should then be evaluated through feasibility studies. In contrast, the Yakima River Basin Study identified a more specific set of adaptation strategies for which feasibility studies are planned. Nonfederal stakeholders, including those representing agricultural and environmental interests, told us they are collaboratively pursuing state funding to initiate the feasibility studies.
Reclamation policy officials and several other basin stakeholders—including state water managers, environmental advocates, and local water providers—told us that a key outcome of the completed Basin Studies is the establishment of a shared view of how climate change will impact the basins and their water management infrastructure. Reclamation policy officials, as well as basin stakeholders representing environmental and tribal interests, told us that the Basin Studies represent Reclamation's first concerted efforts at holistic planning in the river basins, taking into account not only the needs and concerns of irrigation users, but also the interests of tribes, recreational users, environmental advocates, and others. A state water manager told us that a prior river basin planning effort had largely been unsuccessful due to the lack of involvement of, or opposition by, environmental and tribal interests, and several stakeholders commended Reclamation for working to ensure that the views of a wide range of stakeholders, including environmental and tribal interests, were considered in the Basin Studies. In addition to the individual named above, Elizabeth Erdmann, Assistant Director; Brad Dobbins; Richard Johnson; Mick Ray; Jeanette Soares; Lorelei St. James; and Sarah E. Veale made key contributions to this report. Nirmal Chaudhary, John Delicath, Cindy Gilbert, and Armetha Liles provided additional technical assistance.
The effects of climate change on water resources have already been observed and are expected to continue. The Corps and Reclamation own and operate key water resource management infrastructure, such as dams and reservoirs. Adaptation--adjustments in natural or human systems to a new or changing environment that exploit beneficial opportunities or moderate negative effects--can be used to help manage the risks to vulnerable resources. In 2009, a law--commonly referred to as the SECURE Water Act--and a presidential executive order directed federal agencies to address the potential impacts of climate change. GAO was asked to review agency actions to address climate change impacts on water infrastructure. This report examines (1) actions taken by the Corps and Reclamation since 2009 to assess and respond to the potential effects of climate change on water infrastructure and (2) challenges, if any, faced by the Corps and Reclamation in assessing and responding to the potential effects of climate change on water infrastructure, and the steps the agencies are taking to address them. GAO analyzed the agencies' climate change adaptation guidance and planning documents and interviewed agency officials and other key stakeholders, including water users, environmental groups, and researchers. The Department of Defense's U.S. Army Corps of Engineers (Corps) and the Department of the Interior's Bureau of Reclamation (Reclamation) have assessed water resource and infrastructure vulnerabilities and taken steps to develop guidance and strategies to adapt to the effects of climate change. Specifically, since 2009, the Corps has completed a high-level assessment of the vulnerabilities of various agency missions to climate change. The assessment found, for example, that the effects of increasing air temperatures on glaciers could negatively impact mission areas including navigation and flood damage reduction. The Corps has also conducted pilot studies to help identify adaptation guidance and strategies; it has completed 5 of the 15 pilot studies initiated and plans to start another study in 2013. Similarly, Reclamation has completed baseline assessments of the potential impacts of climate change on future water supplies for the major river basins where it owns and operates water management infrastructure. Reclamation, in collaboration with nonfederal entities, is now conducting more focused assessments, known as Basin Studies, through which Reclamation seeks to identify water supply vulnerabilities and project future climate change impacts on the performance of water infrastructure. According to agency officials, these studies will also help Reclamation develop adaptation strategies to address these impacts, such as operational or physical changes to existing water infrastructure or development of new facilities. Three Basin Studies have been completed, an additional 14 are under way, and 2 more are planned. Reclamation next plans to initiate feasibility studies for adaptation strategies identified in completed Basin Studies. Both agencies are incorporating what they have learned from their efforts into agency policies, planning, and guidance, according to agency officials. In 2009, the Corps, Reclamation, the National Oceanic and Atmospheric Administration, and the U.S. Geological Survey (USGS) jointly published a study that identified several challenges that climate change poses for water resource managers, and the Corps and Reclamation are collaboratively addressing these challenges.
Specifically, these agencies are
identifying the data and tools needed by water managers to address climate change, which will help guide federal research efforts;
obtaining needed climate data by collaborating with other agencies to help ensure that the data are collected, such as by sharing some costs associated with maintaining USGS's stream flow measurement activities, which are valuable to Corps water planning and management;
integrating climate science into water resource management decision making through activities such as developing and communicating science to inform climate adaptation strategies; and
collaborating in the development of a climate change science training program for federal and nonfederal water resources managers.
The Corps and Reclamation have collaborated with each other and with others in a manner that is generally consistent with practices that GAO has identified as important to enhancing and sustaining collaboration among agencies. The Corps and Reclamation have made collaboration a key element of their policy and plans for adapting to the effects of climate change and have reinforced accountability for collaboration through agency performance management systems. GAO is not making any recommendations.
FDA’s approval is required before brand-name drugs and generic drugs can be marketed for sale in the United States. To obtain FDA’s approval to market a brand-name drug, sponsors must submit a new drug application (NDA) containing data on the safety and effectiveness of the drug as determined through clinical trials and other research. To obtain FDA’s approval to market a generic drug, sponsors must submit an abbreviated new drug application (ANDA). The ANDA must contain data showing, among other things, that the generic drug is bioequivalent to, or performs in the same manner as, a drug approved through the NDA process. If a sponsor wants to change any part of its original NDA or ANDA after its approval—such as changes to manufacturing location or process, the type or source of active ingredients, or the labeling—it must generally submit an application supplement to notify FDA of the change. If the change has a substantial potential to adversely affect factors such as the identity, strength, quality, purity, or potency of the drug, the sponsor must obtain FDA approval. As part of the application and application supplement review process, FDA may conduct an inspection of the establishment where the drug will be manufactured to verify the accuracy and authenticity of the data contained in the application, to determine that the establishment is following commitments made in the application, and to verify that the establishment is prepared to make the drug named in the application or supplement. After approving brand-name and generic drugs for marketing in the United States, FDA’s oversight responsibilities continue, as it is charged with monitoring their safety, effectiveness, quality, and promotion. FDA periodically inspects drug manufacturing establishments, including those manufacturing brand-name, generic, and over-the-counter drugs, to assess their ongoing compliance with current good manufacturing practice regulations. In addition to these surveillance inspections, FDA may also conduct for-cause inspections when the agency receives information indicating problems in the manufacture of marketed drugs, among other reasons. FDA may conduct an inspection that includes multiple components (e.g., both preapproval and surveillance) during a single visit to an establishment. Based on the agency’s findings during an inspection, FDA classifies the inspection as (1) no action indicated, when insignificant or no deficiencies were identified; (2) voluntary action indicated, when deficiencies were identified and must be corrected, but the agency is not prepared to take regulatory action; or (3) official action indicated, when serious deficiencies were found that warrant regulatory action. Specifically, if FDA identifies a violation of law or regulations during an inspection and therefore finds the establishment to be out of compliance with manufacturing standards, the agency may issue a warning letter. FDA issues warning letters when the agency has identified violations that may lead to enforcement action if not promptly and adequately corrected. Recommendations to issue a warning letter are made either by staff in FDA’s district offices or by staff in FDA’s Center for Drug Evaluation and Research. Multiple levels of the Center’s staff review all warning letter recommendations. It is FDA policy to consider many factors in determining whether to issue a warning letter.
For example, the agency is to consider the compliance history of the establishment, the nature of the violation (e.g., whether the establishment was aware of the violation, but failed to correct it), and the risk associated with the product and the impact of the violation on such risk. FDA is also to consider corrective actions taken or promised by the establishment since the inspection, and it may decide not to issue a letter if an establishment’s corrective actions are adequate and the violations that would have supported the letter have been corrected. To determine whether actions planned or taken by an establishment to correct violations are adequate, FDA may, among other activities, review documentation describing proposed or completed corrective actions or hold meetings with representatives of the establishment to discuss these actions. FDA is also required by law to consider whether issuing the warning letter could reasonably cause or exacerbate a shortage of a life-saving drug. If it determines a shortage could occur or an existing shortage could worsen, the agency must evaluate the risks associated with the impact of such a shortage upon patients and the risks associated with the violation before taking action, unless there is an imminent risk of serious health consequences or death from not taking action. Once issued, warning letters are publicly posted on FDA’s website. FDA’s Drug Shortage Staff (DSS) coordinates the agency’s response to drug shortages. FDA is notified of actual and potential drug shortages by manufacturers, health professionals, and the public. Once DSS becomes aware of a potential or actual shortage, DSS attempts to determine whether the total supply of the drug and any pharmaceutical equivalents is inadequate to meet demand. To verify that a shortage is in effect or a potential shortage is pending, DSS contacts all manufacturers of the drug to collect up-to-date information on inventory, demand, and manufacturing schedules for the drug. DSS also analyzes market research data from IMS Health to compare current supply of the drug with historical demand. DSS coordinates as needed with several other FDA offices, including the Office of Generic Drugs and the Office of Compliance, to address drug shortages. Once DSS verifies a shortage or potential shortage of a drug, it may seek assistance from these offices to address that shortage, including the following:
identifying the extent of the shortage and determining whether other manufacturers are willing and able to increase production of the shortage drug;
prioritizing reviews of drug applications, supplements, and inspections for manufacturers attempting to restore, increase, or begin production of the shortage drug; and
applying regulatory discretion, such as refraining from taking enforcement action to stop the distribution of a drug that is in shortage despite a labeling or quality issue.
For example, DSS provides the Office of Generic Drugs with drug shortage information so that the office can identify ANDAs or ANDA supplements whose review it can prioritize to address a shortage. While there are a number of steps FDA can take to address a shortage, FDA cannot require manufacturers to start producing or continue to produce a drug. It also cannot require manufacturers to maintain or introduce manufacturing redundancies in their establishments to provide them with increased flexibility to respond to shortages. Finally, FDA cannot control the prices of marketed drugs.
In our February 2014 report, we identified unique characteristics of the sterile injectable drug industry that may make these drugs susceptible to shortages. These characteristics include limited inventory, need for regulatory approval, production complexity, and constrained manufacturing capacity.

Limited inventory. The widespread use of “just-in-time” inventory practices can increase the vulnerability of the supply chain to shortages. For example, according to one manufacturer representative, manufacturers typically have about 2 to 3 months of inventory on hand, wholesale distributors usually have about 1 month, and providers have only a few weeks of inventory. Consequently, when a manufacturer stops production, a shortage can result quickly.

Regulatory approval. New manufacturers may not be able to quickly enter the market to produce a drug in shortage because FDA’s approval of an ANDA—which can take more than a year—is required. Further, even existing manufacturers of the drug need FDA approval of changes to manufacturing conditions or processes that have a substantial potential to adversely affect factors such as the identity, strength, quality, purity, or potency of the drug before the drug manufactured under the new conditions or processes can be marketed. For example, FDA approval of an application supplement may be required for changes in location of a manufacturing site or the source of the raw materials or components for manufacturing a drug.

Production complexity. Costly, specialized equipment is required to manufacture prescription drugs, and production processes are complex, particularly for sterile injectables. Maintaining sterility throughout the production process is challenging, yet it is particularly important for these drugs as serious injury can occur if contaminated drugs are injected into patients. Some generic sterile injectable drugs need to be manufactured on lines or in facilities dedicated solely to those drugs, thus creating challenges for new manufacturers to enter the market. We previously found that sterile injectable anti-infective and oncology drugs require lines, and sometimes whole facilities, that are limited to the production of such drugs. For example, some anti-infective drugs, such as penicillin, can trigger serious allergic reactions at very low levels and, as a result, may be limited to specific manufacturing lines.

Constrained manufacturing capacity. The generic sterile injectable drug industry is highly concentrated, and this limited manufacturing capacity has been challenged in recent years as the industry has expanded the number of generic products it manufactures. The pressures to produce a large number of drugs on only a few manufacturing lines leave the manufacturers that do participate in the generic sterile injectable market with little flexibility when one manufacturer ceases production of a particular drug. For example, manufacturer representatives told us that manufacturing establishments schedule the production of each drug in their product line for specific time periods, often months in advance. An establishment that produces a particular drug may not be able to produce additional quantities in response to a shortage until the next time the particular product is scheduled for production—which could be months after a shortage begins. If a manufacturing establishment has available production capacity, the manufacturer also faces risks when deciding to ramp up production to address a shortage.
In particular, one manufacturer representative said that manufacturers do not know how long their competitors will be out of the market. If the manufacturer that left the market quickly restarts production of the drug, the manufacturer that made the investment to ramp up production to address the shortage may face a financial liability if it is unable to sell the additional product it manufactured. Another capacity-related issue is that the company whose name is on the drug label, which we term the supplier, may or may not be the same as the company that actually manufactures the drug. Rather than produce the drug themselves, some suppliers enter into a relationship with a contract manufacturer to produce the drug on their behalf. Therefore, the number of suppliers of a particular drug may not be the same as the number of manufacturers of that drug. The number of suppliers could also be different than the number of manufacturers if the name on the drug’s label is that of a repackager, a distributor, or the parent company of the manufacturer. New drug shortages continue to be reported, although the number of new shortages each year has generally decreased since 2011. New shortages peaked in 2011 with 257 reported, while 136 new shortages were reported in 2015, a decrease of 47 percent from 2011. Meanwhile, since 2012, the number of ongoing shortages (shortages that began in prior years) has remained high, with over 250 ongoing shortages each year from 2012 through 2015. (See fig. 1.) As a result, the majority of drug shortages each year since 2012 have been ongoing shortages rather than newly reported shortages. For example, in 2015, 68 percent of the shortages (291 out of 427) were ongoing shortages that began in a prior year. Since 2013, the majority of the ongoing shortages in a given year were first reported at least 2 years earlier. (See fig. 2.) For example, in 2015, 171 of the 291 ongoing shortages (59 percent) were first reported during 2013 or an earlier year, while the remaining 120 ongoing shortages were first reported during 2014. The duration of all shortages reported from January 2010 through December 2015 varied, ranging from 1 day to almost 6 years. Of these shortages, 65 percent lasted 1 year or less, while 12 percent lasted more than 3 years. The average duration of all shortages reported during this time period was 418 days. The fact that some shortages have lasted 3 or more years suggests that manufacturers and FDA have had difficulty addressing the issues behind these persistent shortages. For example, FDA stated that some drugs have been in shortage for multiple years because manufacturers have been unable to address the issues that led to the shortage or have chosen not to continue producing the drugs. The experiences of providers dealing with shortages every day generally support the trend seen in the UUDIS data that shortages persist. In following up with representatives from the 10 national associations representing health care providers (including hospitals, physicians, and pharmacists) that we contacted for our 2014 report, we learned that shortages continue to affect providers’ ability to safely and effectively care for patients. Reflecting on their experiences with shortages in the last 2 years, representatives of 6 of the 10 associations reported that shortages had remained constant or increased, 2 reported a decrease, and 2 others did not identify a trend in recent shortages, but noted that they were still a concern.
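Illustrative only: the new and ongoing counts and durations above follow from classifying each shortage by its first-report and resolution dates. The minimal sketch below shows one way such a tally could work; the records and field layout are hypothetical, not UUDIS's actual data format.

```python
from datetime import date

# Hypothetical shortage records: (first_reported, resolved);
# resolved is None for shortages still active at the end of the period.
shortages = [
    (date(2013, 2, 1), date(2014, 5, 15)),
    (date(2014, 8, 1), None),
    (date(2015, 3, 10), date(2015, 3, 20)),
]

def tally(year, records, as_of=date(2015, 12, 31)):
    """Count shortages active during a year, split into new vs. ongoing."""
    start, end = date(year, 1, 1), date(year, 12, 31)
    new = ongoing = 0
    for first, resolved in records:
        if first <= end and (resolved or as_of) >= start:  # active that year
            if first >= start:
                new += 1          # first reported during the year
            else:
                ongoing += 1      # began in a prior year
    return new, ongoing

for year in (2013, 2014, 2015):
    print(year, tally(year, shortages))

# Duration in days for a resolved shortage, as in the averages cited above
print((date(2014, 5, 15) - date(2013, 2, 1)).days)
```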
FDA prioritized its review of 383 drug applications and supplements to address shortages from January 2010 through July 2014, 240 of which were for generic sterile injectable drugs. Our analysis of a subset of those submissions indicates that some were approved before the shortage was resolved. Although the timing of FDA’s approvals of submissions does not establish a causal link, it could indicate that prioritizing reviews may be a useful strategy in addressing some drug shortages. From January 2010 through July 2014, FDA prioritized its review of 383 submissions—applications and supplements to change approved drug applications—to address drug shortages. These submissions represent 3 percent of all submissions that FDA received during this time period. Almost all of the submissions that FDA prioritized during this time period were ANDAs or ANDA supplements for generic drugs; the remaining few were NDA supplements for brand-name drugs. (See fig. 3.) Further, the majority of the submissions for which FDA prioritized its reviews to address drug shortages were for generic sterile injectable drugs. Specifically, 63 percent (240) of the 383 submissions granted a prioritized review were for generic sterile injectable drugs, and an additional 4 percent (17) were for brand-name sterile injectable drugs. Twenty-four percent (92) of the prioritized submissions were for drugs in capsule or tablet form, while the remaining 9 percent (34) were for drugs in other dosage forms, such as ointments or patches. Overall, FDA had completed at least one review cycle for approximately 80 percent of the 383 prioritized submissions as of October 30, 2014. FDA’s review of a submission may span several review cycles before the agency makes a decision regarding its approval, and once the review of a submission is prioritized, any subsequent reviews of it are also prioritized. An additional review cycle may occur if, for example, FDA asks a sponsor to supply additional data, analyses, or other information to address concerns identified in its review and to ensure the safety and efficacy of the product. According to FDA, it has historically taken, on average, about four review cycles to approve an ANDA. As of October 30, 2014, 43 percent (164) of the 383 prioritized submissions had been approved, and FDA had completed at least one review for another 37 percent (140). The majority of submissions for which FDA had not completed a review cycle as of October 2014 were received in 2013 and 2014. See table 1 for the status of the 383 prioritized submissions FDA received from January 2010 through July 2014. For the submissions in our review that FDA approved as of October 30, 2014, the time from when they were prioritized to approval varied by submission type. These review times ranged from 3 days to more than 3 years for ANDA supplements and from 3 days to 6 months for NDA supplements. For ANDAs, review times ranged from 40 days to more than 3 years. See table 2 for the median time to approval for submissions that FDA received and prioritized its review of from January 2010 through July 2014. If FDA does not approve a submission after the first review, it will provide sponsors with complete response letters seeking additional information that addresses deficiencies that FDA identified, making the time to approval longer. For 47 of the 71 approved ANDAs, FDA issued at least one complete response letter and therefore these ANDAs had more than one review cycle, with a range of two to five cycles.
The remaining 24 approved ANDAs were approved at the end of the first review cycle in which they were prioritized. Of the 70 approved ANDA supplements, 52 were approved at the end of the first review cycle in which their review was prioritized, with the number of review cycles ranging from one to three. Of the 12 approved NDA supplements, 11 were approved at the end of the first review cycle in which their review was prioritized. Lastly, FDA prioritized its review of submissions to address drug shortages for many different sponsors and sometimes for more than one submission per drug during this time period. The 383 prioritized submissions came from 107 different sponsors. The number of prioritized submissions for any given sponsor ranged from 1 to 24. The majority of these sponsors (69 percent) had 1 to 2 submissions prioritized, and 9 percent of the sponsors had more than 10 submissions prioritized. The 383 prioritized submissions were associated with 160 drugs. The number of submissions for each drug ranged from 1 to 16. Multiple submissions for a single drug were typically from multiple sponsors seeking approval to market the drug. Seventy percent of these drugs were associated with 1 to 2 prioritized submissions, while 6 percent were associated with 7 or more prioritized submissions. Our analysis of a subset of the 383 submissions, consisting of 153 submissions that were associated with 38 drugs, suggests that FDA’s prioritization of submissions may be helpful in addressing some drug shortages. To examine this strategy, we reviewed the following:

Relationship between submissions and shortage prevention or resolution. When we examined the subset of prioritized submissions that were associated with 38 drugs, we found that 15 of the drugs were associated with at least one prioritized submission that was approved before the shortage was resolved or a potential shortage was prevented. The timing of FDA’s approvals of these submissions suggests that this strategy may have contributed to addressing shortages of these 15 drugs, although it does not establish a causal link. Specifically, the approved submissions for these 15 drugs may have helped resolve 12 shortages, prevent 2 shortages, and mitigate 1 shortage. These approved submissions were for drugs in several therapeutic classes (including anti-infective, oncology, and central nervous system drugs) and used to treat a variety of conditions (including bacterial infections, breast cancer, and attention deficit hyperactivity disorder). Conversely, another 13 of the 38 drugs did not have any prioritized submissions approved prior to the shortage resolution or prevention date, so submissions for those drugs could not have contributed to addressing a shortage. However, for 9 of these 13 drugs at least one prioritized submission was approved after the shortage was resolved or prevented, which FDA determined may have helped to reduce supply vulnerabilities and prevent future shortages. In addition, submissions for 2 of the 38 drugs were not approved as of October 2014 and the shortages the submissions were prioritized to address remained active. Finally, the submissions for 8 of the 38 drugs were not associated with a specific shortage at the time of prioritization, although the majority of these drugs had previously been in shortage or were otherwise vulnerable to shortage. (See table 3.)

Time to approval for approved submissions.
The median time to approval from the date prioritized differed for the 26 submissions that may have contributed to the prevention or resolution of a shortage, compared to the 24 submissions approved after the associated shortage was prevented or resolved. This difference was more pronounced for ANDAs than for supplements. Specifically, the median time to approval for ANDAs that may have helped to prevent or resolve a shortage was almost 4 months faster than it was for ANDAs that were not approved until after the prevention or resolution of the associated shortage. For the 26 submissions that may have contributed to the prevention or resolution of a shortage, the median time to approval was 494 days for ANDAs and 87 days for supplements. For the 24 submissions that were not approved until after the associated shortage was prevented or resolved, the median time to approval was 613 days for ANDAs and 80 days for supplements. Given that the median time to approval for prioritized ANDAs is over a year, prioritizing reviews of ANDAs to address drug shortages is generally not a strategy for addressing shortages in the short term. However, this strategy may be useful to address drug shortages that have persisted across multiple years or recurred multiple times in a few years. This may also be a helpful approach if FDA is notified as early as possible about potential shortages. FDA’s drug shortages strategic plan states that early notification of potential supply disruptions is critical because it puts the agency in a better position to use all of its available strategies to address drug shortages, including prioritizing its reviews of ANDAs from sponsors who want to enter the market for a drug that is vulnerable to shortage or already in shortage. The success of this strategy, however, also depends on whether sponsors are willing or able to submit ANDAs for drugs that are vulnerable to shortage or already in shortage, which is beyond FDA’s control. The number of warning letters FDA issued annually to sterile injectable drug manufacturing establishments found to be out of compliance with manufacturing standards generally increased from fiscal year 2007 through fiscal year 2013. The number of letters issued ranged from 1 letter resulting from an inspection conducted in fiscal year 2007, to 11 letters resulting from fiscal year 2010 inspections and another 11 letters resulting from fiscal year 2011 inspections. In addition, FDA issued a growing number of such letters to non-injectable drug establishments, ranging from 16 letters resulting from fiscal year 2007 inspections to 45 letters resulting from fiscal year 2010 inspections and another 45 letters resulting from fiscal year 2011 inspections. Although the number of warning letters issued increased, the percentage of inspections that resulted in warning letters in a given year remained relatively small. One percent of FDA’s fiscal year 2007 inspections of sterile injectable drug establishments resulted in the issuance of warning letters, compared with 5 percent of such inspections in fiscal years 2010 and 2011. (See fig. 4.) The percentage of inspections of non-injectable drug establishments that resulted in warning letters was similar, ranging from 1 percent of fiscal year 2007 inspections to 4 percent of fiscal year 2013 inspections. 
As the number of warning letters issued to sterile injectable drug establishments for noncompliance with manufacturing standards generally increased from fiscal year 2007 through fiscal year 2013, so did shortages of these drugs. (See table 4.) Both the number of warning letters and the number of shortages were particularly high in fiscal years 2010 and 2011. While a corresponding rise in warning letters and shortages in certain years could reflect an increase in FDA inspection rigor, as was suggested by some sources in the literature review conducted for our prior report, it could also indicate growing manufacturing problems. Such problems could lead to shortages as establishments recalled defective products or shut down or slowed production to correct manufacturing problems. What is not known is whether establishments experiencing such manufacturing problems would have shut down or slowed production in the absence of an FDA warning letter. FDA officials disputed the notion that the agency’s issuance of warning letters to establishments found to be out of compliance with manufacturing standards caused shortages. First, FDA officials noted that some shortages are unrelated to manufacturing problems and therefore could not have been caused by FDA’s issuance of warning letters for manufacturing violations. This notion is consistent with our prior analysis of FDA shortage data, which found that from January 2011 through June 2013, 30 percent of shortages were reportedly caused by issues unrelated to manufacturing, such as increased demand or unavailability of raw materials or components. Second, although the agency does have other enforcement powers to stop distribution of a product, FDA officials stated that warning letters issued for noncompliance with manufacturing standards do not order a stop in production or distribution. Finally, FDA officials stated that it is important to put warning letter data in perspective by considering the reason that FDA conducted the inspections that resulted in warning letters. According to FDA officials, if the inspections that resulted in warning letters were inspections with a for-cause component and thus were conducted to investigate potential manufacturing problems, then any underlying manufacturing problems that led to the warning letter could also have caused shortages. For example, officials told us that during this time frame, inspections of sterile injectable drug manufacturers were often conducted because of reports of problems with particulates, such as a number of voluntary recalls conducted in response to glass fragments in sterile injectable drugs in 2010 and 2011. Our analysis of 7 years of FDA data on inspection type does not reveal a clear trend in terms of the relationship between shortages, warning letters, and one indication of potential manufacturing problems—the frequency of inspections with a for-cause component. Our analysis shows that between fiscal years 2007 and 2013 the percentage of inspections with a for-cause component was consistently higher for sterile injectable drug manufacturing establishments than it was for non-injectable establishments. (See fig. 5.) FDA officials told us that they evaluate the health hazards of all reports of potential manufacturing problems that they receive.
However, because of the potentially serious health consequences of using a sterile product that has been contaminated, the agency may be more likely to conduct a for-cause inspection in response to reports of potential manufacturing problems at a sterile injectable drug establishment than at one that manufactures non-injectable drugs. Across this time period, the percentage of sterile injectable drug establishment inspections with a for-cause component varied. After declining from its fiscal year 2007 peak, the percentage of sterile injectable drug establishment inspections with a for-cause component grew to 17 percent of fiscal year 2011 inspections. Fiscal year 2011 was also both the peak in new sterile injectable drug shortages and warning letters issued to sterile injectable drug establishments. While the number of warning letters issued to sterile injectable drug establishments was equally high a year earlier in fiscal year 2010 and new shortages were at their second highest, the percentage of inspections with a for-cause component was at its lowest—10 percent. Thus, comparing the trend in inspections with a for-cause component to the trends in shortages and in warning letters provides support for FDA officials’ contention that there were underlying manufacturing problems that could have led to shortages and warning letters in some years, but not others. From fiscal year 2010 through fiscal year 2012, seven sterile injectable drug manufacturing establishments that received warning letters for noncompliance with manufacturing standards slowed or shut down production. FDA and others said these slowdowns and shutdowns led to widespread shortages. For example, the fiscal year 2012 voluntary shutdown of one of the seven establishments reportedly led to the actual or potential shortage of more than 100 drugs. Another of the seven establishments manufactured more than 300 different drugs, so its production slowdown also led to multiple shortages. FDA issued warning letters to all seven establishments after finding that the establishments were not in compliance with manufacturing standards during inspections conducted from fiscal year 2007 through fiscal year 2011. Although FDA did not require the establishments to shut down or slow production, the agency noted in a letter to a member of Congress about this issue that when products manufactured under problematic conditions pose a safety threat to patients—such as glass shards or metal shavings in vials of injectable drugs or fungal contamination—manufacturers generally must stop production to resolve the problem. Such problems were experienced by six of the seven establishments linked to widespread shortages when particulates were discovered in their sterile injectable products. For example, a drug at one establishment was found to contain microscopic particles that were “stringy, amorphous, and globular” and sterile injectable drugs at two other establishments contained stainless steel particles. The presence of metal particles in sterile injectable drugs can cause serious injury to patients when injected. Following the receipt of reports of serious injury and illness, a drug manufactured at the seventh establishment was discovered to contain endotoxin, a component of certain bacteria, which may cause severe fever and death if present in a drug.
FDA documents and data indicate that all seven of these establishments had difficulty meeting manufacturing standards prior to FDA’s issuance of a warning letter, which, at least for these establishments, runs counter to the claim that the increase in warning letters was an indication that the agency began to apply manufacturing standards more rigorously. For example, FDA staff previously recommended issuing warning letters to two of the establishments, but after further internal review, FDA issued an untitled letter to one establishment and did not issue a warning letter to the other. (See fig. 6.) For two other establishments, the inspections preceding the inspection that resulted in the warning letter often included a for-cause component. For example, one of the two establishments was inspected six times between fiscal year 2007 and the inspection that resulted in the warning letter, and each inspection was conducted in response to manufacturer reports of potential manufacturing problems submitted to FDA, complaints from consumers or health care providers, or both. Our analysis of FDA documents also shows that, for four establishments, the same manufacturing violations that led FDA to issue a warning letter had also been observed during previous FDA inspections. In the case of one of the four establishments, FDA documents show that the agency had expressed concerns about one violation 5 years prior to the inspection that resulted in the warning letter. For nearly all of the seven establishments linked to widespread shortages, there were continued indications of difficulty meeting manufacturing standards following their receipt of a warning letter. In addition to issuing one establishment a warning letter, FDA subsequently sought and obtained an injunction against this establishment to prevent it from manufacturing and distributing most drugs until FDA determined that the establishment was compliant with the Federal Food, Drug, and Cosmetic Act. An agency press release about the injunction noted that inspections of the establishment found several product quality problems, including facility cleaning issues and poor equipment maintenance practices resulting in equipment shedding particles into some sterile injectable products. Despite investments to address these issues, this establishment decided to cease manufacturing all drugs and was permanently closed in 2013. Subsequent inspections of four other establishments resulted in the classification of official action indicated, signifying that FDA continued to identify serious deficiencies that warranted regulatory action. With these continued indications of potential manufacturing problems at multiple sterile injectable establishments manufacturing such medically necessary drugs as those used to treat cancer, administer anesthesia, and prevent blood clots, shortages of multiple sterile injectable drugs persist. FDA officials told us that these seven establishments all made improvements and in many cases are now helping to prevent and resolve some shortages. However, as of April 2016, five of these seven establishments continue to cause shortages, according to FDA officials. Shortages of sterile injectable anti-infective and cardiovascular drugs during 2012, 2013, and 2014 were strongly associated with certain factors we examined.
We estimated a regression model to examine the relationship between drug shortages and four factors: (1) a decrease in the number of suppliers, (2) sales of a generic version, (3) the failure of an establishment making the drug to comply with manufacturing standards resulting in a warning letter, and (4) price decline. We found all factors but price decline to be strongly associated with shortages of the drugs in our study. For each factor, table 5 displays the estimated percentage point increase in the probability of a shortage when the factor is present for all drugs relative to the mean probability of a shortage predicted for all drugs in our study by our model. These estimates show that the presence of a single factor increases the probability of a drug shortage by as much as 16.8 percentage points from what the model otherwise predicts for all drugs in our study. The strong association between shortages and both (1) a decrease in the number of suppliers and (2) the failure of an establishment making the drug to comply with manufacturing standards resulting in a warning letter suggests that shortages may be triggered by supply disruptions. Characteristics of the sterile injectable drug industry may make these drugs susceptible to shortage when the number of suppliers decreases. For example, a supplier may decide to permanently discontinue an unprofitable product or the unavailability of raw materials may lead to production delays. Further, failure to comply with manufacturing standards resulting in a warning letter could also trigger a supply disruption if a manufacturer chooses to temporarily shut down production in a particular establishment to correct the conditions that led to a warning letter. In this industry, there is limited inventory in the supply chain, manufacturing capacity is constrained because production is scheduled months in advance, new manufacturers must receive regulatory approval before entering the market, and the production process is complex. After a supply disruption for any reason, if other manufacturers are not able to increase supply in a timely manner, a shortage may ensue. For the drugs in our study, the association between noncompliance with manufacturing standards resulting in a warning letter and shortages is largely driven by the structure of the generic injectable manufacturing industry. The warning letters received by three large manufacturing establishments for failure to comply with manufacturing standards appear to be driving our finding that failures to comply with manufacturing standards resulting in warning letters were strongly associated with certain sterile injectable drug shortages. In this industry, establishments produce multiple drugs and so one establishment’s failure to comply with manufacturing standards that results in receipt of a warning letter could affect many drugs. For example, 69 percent of the 118 drugs in our study were manufactured by at least one of nine establishments. Thus, if one of these nine establishments failed to comply with manufacturing standards, many drugs in our study could be affected. For example, in 2012 one establishment that failed to comply with manufacturing standards and received a warning letter manufactured 22 drugs in our study. (See table 6.) 
While the strong association between failure to comply with manufacturing standards resulting in the receipt of a warning letter and shortages could support the contention that FDA regulatory activity triggered some shortages, it could also support the contention that there were growing manufacturing problems and possibly related quality concerns that both precipitated the warning letters and led to shortages. The findings of one study indicate that supply disruptions that led to recent shortages of generic sterile injectable drugs were often linked to quality problems. According to this study, quality problems stem from various sources, including insufficient maintenance, outdated or inadequate design of sterile manufacturing processes, and poor oversight that does not test for or respond adequately to indicators of potential quality problems. Additionally, our finding that sales of a generic version were associated with shortages suggests that relatively low profit margins may also trigger shortages for sterile injectable drugs. Specifically, compared with drugs for which there were only brand-name sales and thus only one supplier, drugs sold generically may have multiple suppliers and relatively lower profit margins. The 88 drugs in our study sold generically were available from an average of four suppliers during 2013, and 10 drugs had eight or more suppliers. Researchers have found that prices, and consequently profit margins, decline for generic drugs as the number of suppliers increases. Relatively low profit margins may cause suppliers to exit the market for less profitable drugs in favor of more profitable ones or may make it unprofitable to increase supply, which could make the market vulnerable to shortages. Lastly, though we did not find a price decline in the previous year to be significantly associated with shortages of the anti-infective and cardiovascular drugs in our study from 2012 through 2014, other evidence indicates that price may influence the amount of drugs produced. Research indicates that price influences a supplier’s profit margins, which may affect a supplier’s decision to stay in the market or invest in the manufacturing establishments. Further, research on shortages in another therapeutic class examined price trends and found that the average price of oncology drugs decreased every year leading up to a shortage, whereas the average price stayed the same or increased for oncology drugs that were not in shortage. (See app. II for more information about our data sources and methodology for our regression model, and apps. II, III, and IV for more information about the relationship between certain factors and whether a drug was in shortage.) We provided a draft of this report for comment to the Department of Health and Human Services (HHS). We also provided excerpts of this report for comment to UUDIS. We received written comments from HHS, which are reproduced in appendix V. We also received technical comments from HHS and UUDIS, which we incorporated as appropriate. In its comments, HHS reiterated its commitment to the prevention of new drug shortages and the mitigation and resolution of those shortages that do occur. HHS concurred with our finding that there are still critical shortages affecting the public health. However, in contrast to our finding based on UUDIS data that ongoing shortages remain high, HHS presented FDA drug shortage data indicating that ongoing shortages have decreased.
HHS attributed this difference to FDA and UUDIS defining drug shortages differently, which we describe in detail in appendix I of our report. It is important to note that HHS presents FDA’s drug shortage data from 2010 through 2015 to describe the decrease in drug shortages, but we have previously identified reliability concerns with these data, which we describe in appendix I. Because of these concerns, we have not used FDA’s drug shortage data in our current and previous work, and we instead relied on UUDIS drug shortage data, which we continue to believe are the most comprehensive and reliable information available for the time periods we reviewed. Also, despite the declining trend in both new and ongoing shortages suggested by the FDA drug shortage data, our communications with health care provider organizations suggest that shortages are still a significant concern. Representatives from 8 of the 12 organizations representing health care providers told us that, in their experience, shortages have remained constant or increased. HHS stated it is not surprising that we identified an association between warning letters and drug shortages, given that the most common cause of drug shortages is manufacturing deficiencies, and that warning letters, by definition, are issued in response to such deficiencies. HHS cautions that this association should not be interpreted as suggesting that warning letters themselves cause shortages. We agree with this note of caution. HHS also noted that our regression analysis may overestimate the direct impact of issuing warning letters because we did not include other measures of manufacturing quality in our model. We considered a number of additional variables in developing our model, some of which are described in appendix II. However, given the size of our study population (118 drugs), we limited the number of variables in our regression analysis. Our model includes key variables grounded in economic theory and findings from our previous work on drug shortages. We are sending copies of this report to the appropriate congressional committees, the Secretary of HHS, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Throughout our work, we have received questions from members of Congress about the similarities and differences between the drug shortage data collected by the Food and Drug Administration (FDA) and the University of Utah Drug Information Service (UUDIS). This appendix provides a summary of each data source and the results of a comparison of data from both sources. FDA and UUDIS, on behalf of the American Society of Health-System Pharmacists (ASHP), both track and maintain data on drug shortages that occur in the United States. Both organizations make drug shortage information publicly available through their respective websites. FDA and UUDIS also maintain drug shortage data that are separate and more comprehensive than the information available on the respective websites. For example, FDA does not post shortages on its website if a shortage is expected to be resolved quickly.
Meanwhile, UUDIS only posts information on ASHP’s website for a subset of shortages that it deems to be critical. We have previously conducted analyses of UUDIS drug shortage data to determine trends in the number of drug shortages from January 2001 through June 2013. UUDIS began tracking data on drug shortages in 2001 to inform ASHP’s members, such as hospital pharmacists, and other health care providers about the status of new, ongoing, and resolved shortages. These data are generally regarded as the most comprehensive and reliable source of drug shortage information for the time periods we have reviewed. We used UUDIS data because, while conducting work for our 2011 report, we found that FDA did not have a database on drug shortages. While FDA collected some information, it did not lend itself to analysis—it was not easily retrievable, routinely recorded, or sufficiently reliable. Because FDA was unable to provide us with the information necessary to analyze trends in drug shortages, we obtained these data from UUDIS. FDA has since taken steps to track drug shortage data in a systematic manner; it started tracking drug shortages in 2011 in response to our report, and its efforts have evolved over the last several years. However, the data it has compiled since our 2011 report was issued do not include information on shortages prior to 2010. FDA and UUDIS have different definitions of what constitutes a drug shortage. Consequently, they do not always determine that the same drugs are in shortage, and they do not generally report the same number of shortages overall. Specifically, the Food and Drug Administration Safety and Innovation Act, which FDA implements, defines a drug shortage as a period of time when the demand or projected demand for a drug within the United States exceeds the supply of the drug. In determining whether a shortage exists, FDA focuses on the overall market for a specific drug, meaning that even if a particular manufacturer does not have product available, it is not a shortage if the other manufacturers of that product can meet the projected demand for the whole market. For example, if Manufacturer A has no product available, and Manufacturer B is able to manufacture enough product to satisfy the entire market demand, FDA would not consider this situation a drug shortage, even if Manufacturer B’s product is a different strength and package size, as long as FDA views the different sizes and strengths as clinically interchangeable. In contrast, UUDIS defines a shortage as a supply issue that affects how pharmacies prepare and dispense a product or that influences patient care when prescribers must choose an alternative therapy because of supply issues. According to a UUDIS official, the organization therefore focuses on the supply of drugs by national drug code, which is a code that uniquely identifies specific drug products for a given manufacturer. Focusing on the supply of drugs by national drug code means that if one manufacturer does not have enough supply of all strengths and package sizes to meet demand for a period of time, it will be considered a shortage. For example, if Manufacturer A has no product available and Manufacturer B has product available—whether it is the same strength and package size or not—UUDIS would consider this to constitute a drug shortage. A UUDIS official said that focusing on supply of a drug by national drug code is important for pharmacists and clinicians because it is the level at which products are ordered and used.
Further, the UUDIS official said that substituting one package size for another may create a safety issue. For more information about FDA’s and UUDIS’s processes for determining whether a drug shortage exists, see table 7. To compare FDA’s and UUDIS’s drug shortage data, we reviewed documentation from both FDA and UUDIS. We also analyzed data from both sources related to drug shortages from January 2013 through March 2013. Given our previous finding that shortages last about 9 months to a year on average, we selected this time frame for our comparison to allow for sufficient time for shortages that began during this time frame to have been resolved by the time we started our analysis in July 2014. Data from this time period were also available from both FDA and UUDIS when we started our analysis. Although we considered analyzing data for a more recent time period, FDA did not have reliable drug shortage data readily available from January 2014 through July 2014 when we began our analysis. We considered an FDA shortage and a UUDIS shortage to be a match if they shared the same active ingredient and route of administration (e.g., hydroxyzine injection) and had some overlapping time period during which both were considered to be in shortage. For example, if FDA identified a shortage of a specific drug that began in January 2013 and lasted through December 2013, we considered the UUDIS shortage of the same drug to be a match if it occurred at any point from January 2013 through December 2013. (A minimal sketch of this matching rule appears after the comparison results below.) We confirmed the results of our comparison with FDA and UUDIS. For shortages that did not match, we asked both organizations for reasons why one source identified a shortage while the other did not. The purpose of this comparison is to generally illustrate the differences between the two sets of data. The findings of our comparison are not generalizable to drug shortage data from other time periods. Our analysis shows that FDA and UUDIS identified a different number of shortages that began from January 2013 through March 2013—17 and 39, respectively. Many of the shortages that FDA identified during this time period were also identified by UUDIS during this same time period. Specifically, 8 of the 17 shortages that FDA identified as beginning within this time frame matched to a UUDIS shortage that was also identified during the same period. Another 5 of the 17 shortages identified by FDA matched to a UUDIS shortage that was identified as beginning either before January 2013 or after March 2013. The remaining 4 shortages were not identified by UUDIS during any time period. While many of the shortages identified by FDA during this time period were also identified by UUDIS, our analysis showed the opposite was true for shortages identified by UUDIS. Specifically, most of the 39 shortages UUDIS identified during this time frame—28 of 39—were not identified as shortages by FDA. Another 3 of the 39 shortages matched to an FDA shortage that was identified as beginning either before January 2013 or after March 2013. (See fig. 7.) For the 8 shortages that both FDA and UUDIS identified from January 2013 through March 2013, we found that in 5 out of 8 instances FDA identified shortages earlier than UUDIS. Also, UUDIS and FDA both had instances of first considering a shortage resolved (in 3 and 5 instances, respectively). (See table 8.) In the 5 instances in which FDA identified a shortage first, the agency identified the shortage between 6 and 50 days prior to UUDIS.
In the 1 instance in which UUDIS identified a shortage first, the organization identified the shortage 13 days prior to FDA. When FDA considered a shortage resolved prior to UUDIS (in 5 out of 8 instances), this determination was made between 23 and 359 days prior to UUDIS. When UUDIS considered a shortage resolved prior to FDA (in 3 out of 8 instances), it reached this conclusion between 67 and 199 days prior to FDA. FDA provided various reasons for why it did not consider 28 of the 39 drug shortages UUDIS identified from January 2013 through March 2013 to be shortages. According to FDA officials, the most common reason is that the agency determined that other manufacturers had the same package size and strength of the drug available. (See table 9.) For example, UUDIS identified a shortage of methylprednisolone sodium succinate injection, a drug used to treat endocrine disorders and allergic reactions, among other things, on January 15, 2013, and considered the shortage resolved on October 23, 2013. UUDIS considered methylprednisolone sodium succinate injection to be a shortage at this time because Manufacturer A discontinued production due to raw material issues, and Manufacturer B had the drug on intermittent back order. Though UUDIS acknowledged that Manufacturer C had this drug available, as did Manufacturer B at times, it posted extensive clinical alternatives on ASHP’s website for those providers that were unable to obtain the available drug. FDA did not consider methylprednisolone sodium succinate injection to be in shortage at this time because the agency determined that manufacturers other than Manufacturer A had the same strength and package size of the drug available to meet demand. Overall, the 28 UUDIS shortages that FDA did not consider to be shortages lasted from 6 days to over 2 years, according to UUDIS’s data. In the case of the 5 UUDIS shortages that FDA stated were short-term supply disruptions rather than shortages, the duration ranged from about a month to over 2 years—one shortage was still active as of December 2015. FDA officials described a short-term supply disruption as a situation in which manufacturers report a disruption but inventory in the supply chain remains available and FDA has not received any reports of shortage from the public. FDA officials said that these types of disruptions commonly involve delays in importing drugs manufactured at foreign establishments or other short-term delays involving transport. FDA also said that it prevented 3 of the 28 shortages that UUDIS identified. According to UUDIS data, those 3 shortages lasted from about 8 months to over 2 years. Lastly, though FDA did not determine that these 28 situations met its criteria to be a drug shortage, UUDIS deemed 13 of these shortages critical, a designation made because alternative medications were unavailable, the shortages affected multiple manufacturers, or the shortages were widely reported. There were also 4 shortages that FDA identified from January 2013 through March 2013 that UUDIS did not determine to be shortages. UUDIS did not consider these drugs to be in shortage because it (1) was never notified that these drugs were in short supply or (2) heard from suppliers that they had full stock. According to FDA data, these 4 shortages lasted from 2 months to more than 2 years.
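Illustrative only: the matching rule applied in this comparison (same active ingredient and route of administration, with overlapping shortage periods) can be expressed as a small predicate. The sketch below uses hypothetical record fields, not the agencies' actual data structures.

```python
from datetime import date
from typing import NamedTuple, Optional

class Shortage(NamedTuple):
    ingredient: str       # active ingredient, e.g., "hydroxyzine"
    route: str            # route of administration, e.g., "injection"
    start: date           # date the shortage was identified
    end: Optional[date]   # None if the shortage is still active

def periods_overlap(a: Shortage, b: Shortage, as_of: date) -> bool:
    """True if the two shortage periods share at least one day."""
    return a.start <= (b.end or as_of) and b.start <= (a.end or as_of)

def is_match(fda: Shortage, uudis: Shortage, as_of: date) -> bool:
    """Match: same active ingredient and route, with overlapping periods."""
    return (fda.ingredient == uudis.ingredient
            and fda.route == uudis.route
            and periods_overlap(fda, uudis, as_of))

# An FDA shortage spanning calendar year 2013 matches a UUDIS shortage of
# the same drug identified at any point in that window.
fda = Shortage("hydroxyzine", "injection", date(2013, 1, 2), date(2013, 12, 20))
uudis = Shortage("hydroxyzine", "injection", date(2013, 3, 1), None)
print(is_match(fda, uudis, as_of=date(2014, 7, 1)))  # True
```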
To examine the relationship between certain factors and sterile injectable drug shortages, we used economic theory and findings from our previous work on drug shortages to identify factors that may be associated with shortages. We included these factors in a multivariate regression model to determine which factors are associated with shortages. Our study population included all sterile injectable anti-infective and cardiovascular drugs that were marketed and sold from 2010 through 2014—a total of 118 drugs. We defined a drug to be all products with the same active ingredient and route of administration (e.g., epinephrine injection). We limited the analysis to sterile injectable drugs because we previously found that approximately 65 percent of all critical shortages from January 2009 through June 2013 were for sterile injectable drugs. We selected anti-infective and cardiovascular drugs because in our prior reports we found that approximately one-fourth of all critical drug shortages reported from January 2009 through June 2013 were for drugs in these therapeutic classes. Both anti-infective and cardiovascular drugs continue to be subject to multiple and prolonged shortages. Also, prior studies have focused on other classes, such as oncology. To identify the drugs in our study population, we used National Sales Perspectives™ data from IMS Health, a company that collects and analyzes health care data. We selected all drugs that were in the anti-infective and cardiovascular Anatomical Therapeutic Classes as listed in the 2014 guidelines from the European Pharmaceutical Market Research Association. For each drug, we also used these data to develop annual measures of suppliers, dollar sales, volume sales, and sales for generic and brand-name products. In addition, for each drug we calculated a proxy for average annual transaction price by dividing total dollar sales of the drug by total volume sales. We used data from the University of Utah Drug Information Service (UUDIS) to determine whether a drug was in shortage during 2012, 2013, and 2014 and when each shortage was first reported to UUDIS. UUDIS defines a shortage as a supply issue that affects how pharmacies prepare and dispense a product or that influences patient care when prescribers must choose an alternative therapy because of supply issues. For example, UUDIS would consider acyclovir injection to be in shortage if the 20 mL package size was available, but the 10 mL package size was not, because health care providers would need to draw out 10 mL doses from a 20 mL vial, which may create a safety issue. In our analysis, drugs classified as being in shortage were in shortage at any time during a given calendar year, including shortages that started in a prior year and remained ongoing. We used drug registration and listing data from the Food and Drug Administration (FDA) to identify the establishments that were listed as manufacturing the drugs in our study population. Using FDA’s warning letter data, we then determined whether each drug was manufactured by at least one establishment that failed to comply with manufacturing standards and received a warning letter from FDA at any time in the 2 years before each of the years we examined in our regression analysis. If FDA identifies a violation of law or regulations during an inspection, the agency may then take various regulatory actions, such as issuing a warning letter.
FDA issues warning letters when it identifies violations that may lead to enforcement action if not promptly and adequately corrected. We developed an econometric model to examine the association between shortages of sterile injectable anti-infective and cardiovascular drugs and certain factors. Our model uses 3 years of shortage history for each drug in our study to examine the relationship between whether a drug was in shortage during 2012, 2013, or 2014 (the dependent variable) and certain factors (the explanatory variables) described below. To estimate the model we created a panel data file that has three observations, corresponding to 2012, 2013, and 2014, for each of the 118 drugs in our study. Each of the 354 observations contains data on whether the drug was in shortage that year plus data on certain factors pertaining to the preceding 1 or 2 years. Our dependent variable is a binary variable indicating whether a drug was in shortage during 2012, 2013, or 2014 in a repeated measures model. We selected this time period because, according to UUDIS data, the number of shortages was the highest during 2014, and the number of shortages was also high during 2012 and 2013. We developed four categories of factors that may be associated with shortages: drug characteristics, market structure, compliance with manufacturing standards, and price and volume of sales. Our inclusion of an explanatory variable to measure compliance with manufacturing standards is unique to this study. Our regression model controlled for one factor from each category. We hypothesized that each of the following factors would be positively associated with a shortage in the following year (a sketch of how such a panel and model could be assembled follows this list):

Generic sales (drug characteristic). Because drugs sold generically are more likely to have lower profit margins compared with their brand-name counterparts, we hypothesized that suppliers of such drugs are less likely to increase production in response to a shortage. Drugs sold generically include drugs that had any sales of a generic product, regardless of the presence of any brand product sales. We classified branded generic products as generic products.

A decline in the number of suppliers (market structure). Such a decline may disrupt the supply of a drug if other suppliers do not increase their production. The number of suppliers for each drug during a year is the number of suppliers that had sales of the drug at any point during that year.

Failure to comply with manufacturing standards resulting in a warning letter (compliance with manufacturing standards). Manufacturers may choose to temporarily shut down production to correct the conditions that led to the violations of current good manufacturing practice regulations cited in a warning letter. They may also shut down permanently if the costs of correcting the problematic conditions outweigh the potential benefits of producing drugs at that establishment. A drug was associated with a warning letter if at least one establishment manufacturing the drug received a warning letter for failure to comply with manufacturing standards.

Price decline (price and volume of sales). Shortages may occur if prices decline because suppliers will not have a financial incentive to increase production of the drug in shortage. For each drug, we calculated a proxy for the average annual price as the ratio of its total dollar sales to its total volume sales. We adjusted all prices to 2014 dollars using the Consumer Price Index for all urban consumers.
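To make the panel structure concrete, the following sketch shows one way such a data file could be assembled and the model fit, together with the three-step marginal-effect computation described in the next paragraphs. It is illustrative only: the column names and synthetic records are hypothetical, and the use of statsmodels' generalized estimating equations (GEE) routine with an exchangeable working correlation is one reasonable reading of "repeated measures logistic regression," not a description of the software or data GAO actually used.

```python
# Illustrative sketch only: synthetic data and hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Build a 3-year panel: one row per drug per year (118 drugs x 3 years = 354 rows).
rows = []
for drug_id in range(118):
    for year in (2012, 2013, 2014):
        generic, supplier_drop, price_drop, warning = rng.integers(0, 2, size=4)
        # Synthetic outcome loosely tied to the factors so the fit is well behaved.
        logit = -0.5 + 1.2 * generic + 1.5 * supplier_drop + 0.4 * price_drop + 1.0 * warning
        shortage = int(rng.random() < 1.0 / (1.0 + np.exp(-logit)))
        rows.append((drug_id, year, shortage, generic, supplier_drop, price_drop, warning))

panel = pd.DataFrame(rows, columns=[
    "drug_id", "year",
    "shortage",        # outcome: new or ongoing shortage that year
    "generic_sales",   # any generic or branded generic sales in the prior year
    "supplier_drop",   # fewer suppliers 1 year before vs. 2 years before
    "price_drop",      # lower proxy price 1 year before vs. 2 years before
    "warning_letter",  # warning letter in either of the prior 2 years
])

# Repeated measures logistic regression: GEE with drug as the cluster and an
# exchangeable working correlation handles the three correlated observations per drug.
model = sm.GEE.from_formula(
    "shortage ~ generic_sales + supplier_drop + price_drop + warning_letter",
    groups="drug_id",
    data=panel,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
print("Odds ratios:\n", np.exp(result.params))

# Three-step marginal effect described in the text: (1) set one explanatory
# variable to 1 for every observation and predict; (2) compute the mean
# predicted probability on the observed data (0.607 in the report);
# (3) take the difference between the two means.
baseline = result.predict(panel).mean()                        # step 2
for var in ["generic_sales", "supplier_drop", "price_drop", "warning_letter"]:
    counterfactual = panel.copy()
    counterfactual[var] = 1                                    # step 1
    diff = result.predict(counterfactual).mean() - baseline    # step 3
    print(f"{var}: {diff:+.3f} change in probability of shortage")
```

A random effects logit, or a standard logit with cluster-robust standard errors, would be defensible alternative ways of handling the three correlated observations per drug.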
We used our 3-year panel data file to estimate a repeated measures logistic regression model in which the dependent variable was a binary variable indicating whether there was a new or ongoing drug shortage in the given year (2012, 2013, or 2014). The model included the following binary explanatory variables for whether:

there were sales of the drug in its generic or branded generic form during the previous year;

the number of suppliers of the drug was greater 2 years before the given year than 1 year before it;

the proxied average price of the drug was greater 2 years before the given year than 1 year before it; and

an establishment that manufactured the drug failed to comply with manufacturing standards and received a warning letter from FDA in either of the preceding 2 years.

We used the coefficient estimates from the repeated measures logistic regression model to calculate, for each explanatory variable, the estimated percentage point increase in the probability of a shortage when the explanatory variable is present for all drugs, relative to the mean probability of a shortage predicted for all drugs in our study by our model. We did this in three steps. First, for each explanatory variable, we estimated the probability of a shortage in the presence of that variable by setting the value of the variable to one, leaving the values of the other explanatory variables unchanged, and then calculating the probability. Second, we used the coefficient estimates from the regression model and the data for every drug in our study to calculate the mean probability of a shortage predicted for all the drugs in our study, which was 0.607. Third, to compute the estimated percentage point increase in the probability of a shortage for each explanatory variable, we computed the difference between the probability of a shortage for each explanatory variable (step 1) and the mean probability of a shortage predicted for all drugs in our study (step 2). Table 10 presents the coefficients (log odds ratios) and odds ratios we estimated from our repeated measures logistic regression model. For each explanatory variable, the estimated probability of a shortage if that variable is present for every drug in our study is presented in table 11. To inform our selection of the explanatory variables to include in the regression model, we computed descriptive statistics for a broad range of factors in the following categories: drug characteristics, market structure, compliance with FDA manufacturing standards, and price and volume of sales. Specifically, we compared frequencies, medians, and trends over time for these factors for drugs in shortage and those not in shortage during 2014. Some of the additional factors that we analyzed were:

Years since brand-name or generic drug approval (drug characteristic). The years since brand-name drug approval is based on the date of the oldest approved new drug application associated with a particular drug. The years since generic drug approval is based on the oldest approved abbreviated new drug application associated with a particular drug. Both of these measures truncate at 32 years because FDA's data source for approval history—Approved Drug Products with Therapeutic Equivalence Evaluations (Orange Book)—does not provide approval dates before 1982.

Number of establishments that manufacture the drug (market structure).
We used FDA drug registration and listing data from 2009 and 2014 to identify the number of establishments that were listed as manufacturing the drugs in our study. As many establishments manufacture more than one drug, we also created a measure that identifies the relationship between the establishments and all of the drugs in our analysis.

Receipt of an official action indicated inspection classification (compliance with manufacturing standards). FDA classifies establishment inspections as official action indicated when serious deficiencies are found that warrant regulatory action. When an inspection is so classified, FDA may take various regulatory actions, including issuing a warning letter, which we included in our regression model.

Our analysis has some limitations. First, our findings are limited to data for sterile injectable anti-infective and cardiovascular drugs that were marketed and sold from 2010 through 2014 and to shortages in these two therapeutic classes from 2012 through 2014. Our findings are not generalizable to drugs with other routes of administration, drugs in other therapeutic classes, or shortages during other time periods. Second, missing manufacturing location data may have caused us to underestimate or overestimate the relationship between shortages and noncompliance with manufacturing standards resulting in a warning letter. For the drugs in our study that were missing manufacturing location data, we could not always identify whether the drugs were manufactured by at least one establishment that received a warning letter. Therefore, we may have misclassified some drugs that were manufactured by establishments that received a warning letter as drugs manufactured by establishments that did not receive a warning letter. Whether we may have overestimated or underestimated the relationship depends on whether the potentially misclassified drugs were in shortage. If these potentially misclassified drugs were in shortage, our model may underestimate the relationship between shortages and receipt of a warning letter. If these potentially misclassified drugs were not in shortage, our model may overestimate that relationship. The extent to which we may have underestimated or overestimated this relationship is unclear. For 57 of the 118 drugs in our study, we found partial manufacturing location data, and for 3 drugs we found no manufacturing location data. We were not able to identify these data for drugs if the IMS data did not include a national drug code for a particular product or if the manufacturer was not listed in FDA's drug registration and listing data. Finally, our proxy for average transaction price for the drugs in our study applies to all strengths and package sizes of the drug, because we defined a drug to include all products with the same active ingredient and route of administration, regardless of strength or package size. We used this definition of a drug because it is the definition that UUDIS uses to record drug shortages. In the market, average transaction prices for each drug vary by strength and package size. We took several steps to ensure that the data used to produce this analysis were sufficiently reliable. Specifically, we assessed the reliability of the IMS Health National Sales Perspectives™ data by interviewing officials at IMS Health. We also reviewed relevant documentation and examined the data for obvious errors, such as missing values and values outside of expected ranges.
We assessed the reliability of the UUDIS and FDA data by interviewing officials, reviewing relevant documentation, and examining the data for obvious errors. We determined that these data were sufficiently reliable for the purposes of this analysis. This appendix compares certain factors for sterile injectable anti-infective and cardiovascular drugs in shortage during 2012, 2013, and 2014 to those same factors for drugs not in shortage during those years. Drugs classified as being in shortage during a year were in shortage at any time during that year; this includes shortages that started in a prior year and remained ongoing. In general, we found differences between drugs that were in shortage during this time period and drugs that were not in shortage (see table 12). For example, our analysis showed that for 19 to 23 percent of the drugs in shortage between 2012 and 2014, the number of suppliers decreased during the 2-year period before the shortage, compared with 6 percent or less of the drugs that were not in shortage. A decline in the number of suppliers indicates that a supplier that had sales for a particular drug in one year had no sales for that drug in the next. This appendix compares certain factors for sterile injectable anti-infective and cardiovascular drugs in shortage in 2014 to those same factors for drugs not in shortage that year. Drugs classified as being in shortage were in shortage at any time during 2014; this includes shortages that started in a prior year and remained ongoing. In general, we found differences between drugs that were in shortage and not in shortage in 2014 for certain factors, such as sales of generic versions of the drug (see table 13). In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Katherine L. Amoroso; Zhi Boon; Sandra George; Alison Goetsch; Cathleen Hamann; Rebecca Hendrickson; Richard Lipinski; Yesook Merrill; Vikki Porter; Oliver Richard; Daniel Ries; Merrile Sing; Alison Smith; and Eric Wedum made key contributions to this report.

Controlled Substances: DEA Needs to Better Manage Its Quota Process and Improve Coordination with FDA. GAO-15-494T. Washington, D.C.: May 5, 2015.

High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 2015.

Drug Shortages: Better Management of the Quota Process for Controlled Substances Needed; Coordination between DEA and FDA Should Be Improved. GAO-15-202. Washington, D.C.: February 2, 2015.

Drug Shortages: Threat to Public Health Persists, Despite Actions to Help Maintain Product Availability. GAO-14-339T. Washington, D.C.: February 10, 2014.

Drug Shortages: Public Health Threat Continues, Despite Efforts to Help Ensure Product Availability. GAO-14-194. Washington, D.C.: February 10, 2014.

Drug Shortages: FDA's Ability to Respond Should Be Strengthened. GAO-12-315T. Washington, D.C.: December 15, 2011.

Drug Shortages: FDA's Ability to Respond Should Be Strengthened. GAO-12-116. Washington, D.C.: November 21, 2011.
Drug shortages are a serious public health concern. GAO previously found that many shortages were of sterile injectable drugs and could generally be traced to supply disruptions caused by manufacturers slowing or halting production to address quality issues. Congress included a provision in statute for GAO to review several aspects of drug shortages. This report examines (1) trends in drug shortages, (2) FDA's efforts to prioritize reviews of drug submissions to address shortages, (3) trends in FDA warning letters issued to sterile injectable manufacturing establishments for noncompliance with manufacturing standards, and (4) the relationship between certain factors and shortages of sterile injectable drugs. GAO analyzed—using various methods, including regression analyses—drug shortage data from the University of Utah Drug Information Service from 2010 through 2015; drug sales data from IMS Health from 2010 through 2014 for sterile injectable anti-infective and cardiovascular drugs (which have been subject to multiple and prolonged shortages); and FDA data, including data on warning letters related to inspections conducted from October 2006 through September 2013 and data on prioritized reviews from January 2010 through July 2014, which were generally the latest available data at the time GAO began its analysis. GAO also interviewed FDA officials and reviewed agency documents, including documents related to the issuance of warning letters to seven establishments FDA and others said were linked to widespread shortages. When available supplies of prescription drugs are insufficient, patient care may be adversely affected. The number of new shortages has generally decreased since 2011, while the number of ongoing shortages has remained high. To help address shortages, the Food and Drug Administration (FDA) prioritized the review of—that is, more quickly reviewed—383 drug applications and supplements during the time period GAO examined. Most were for generic sterile injectable drugs. FDA's approval of some of these submissions occurred before the shortage was resolved. Although the timing of FDA's approval does not establish a causal link, it could indicate that FDA's action helped address some shortages. GAO found that, as part of FDA's oversight of drug safety and quality, the agency generally issued an increasing number of warning letters to sterile injectable drug establishments during the time period GAO reviewed for noncompliance with manufacturing standards outlined in federal regulations. However, the percentage of inspections resulting in warning letters remained relatively small as the number of inspections also increased. Moreover, seven establishments that were linked to widespread shortages and received warning letters all had previous indications of difficulty complying with manufacturing standards. Shortages of sterile injectable anti-infective and cardiovascular drugs in 2012, 2013, and 2014 were strongly associated with certain factors GAO examined. Two factors—a decline in the number of suppliers and failure of at least one establishment making a drug to comply with manufacturing standards resulting in a warning letter—suggest that shortages may be triggered by supply disruptions. A third factor—drugs with sales of a generic version—suggests that, due to relatively low profit margins for generic drugs, manufacturers are less likely to increase production, making the market vulnerable to shortages.
The Department of Health and Human Services (HHS) reviewed a draft of this report and reiterated its commitment to addressing drug shortages. GAO incorporated HHS's technical comments as appropriate.
As trustee, the Secretary of the Interior is responsible for administering the government's trust responsibility to tribes and Indians. Several Interior agencies administer various portions of the government's Indian trust responsibility, including the Bureau of Indian Affairs (BIA), the Bureau of Land Management (BLM), the Minerals Management Service (MMS), the Office of American Indian Trust, and the Office of the Special Trustee for American Indians (OST). In several instances these agencies' lines of authority overlap or their functional areas of responsibility interrelate. See attachment I for a chart showing the current Interior organizations responsible for trust fund accounting and asset management functions. Attachment I also highlights those agencies that the Strategic Plan proposes to transfer to the American Indian Trust and Development Administration (AITDA). BIA performs land title and lease ownership determinations and maintains official ownership records. BIA also performs appraisals of some Indian assets and negotiates and executes leases and contracts for use or sale of nonmineral assets—such as timber, farming, grazing, real estate, and rights-of-way—and mineral assets such as oil, gas, and coal. In addition, BIA collects and accounts for Osage tribe mineral royalties. BLM assists BIA in preleasing activities associated with valuing mineral resources. BLM is also responsible for inspecting and enforcing the terms of Indian mineral leases and verifying production. MMS collects and accounts for mineral royalty payments on Indian leases and transfers the revenues to Treasury for deposit to the Indian trust funds managed by OST's Office of Trust Funds Management (OTFM). In addition, MMS performs compliance audits that are directed at ensuring that Indian royalty payments are consistent with lease terms and production volume. MMS also provides funding to some tribes for cooperative agreements to perform their own compliance audits. OTFM, in the Office of the Special Trustee for American Indians, accounts for nonmineral revenues and distributes mineral royalties received from MMS to tribal and individual Indian accounts, based on lease and ownership information. OTFM disburses unrestricted funds to account holders upon request. OTFM also invests individual Indian money (IIM) and tribal trust funds on behalf of account holders. While IIM accounts are currently maintained in BIA's Integrated Records Management System as separate accounts, OTFM invests the cash balances in these accounts as a pool, primarily in U.S. Treasury and U.S. agency securities. OTFM invests tribal funds in government securities or collateralized accounts in federal depository banks. The Office of American Indian Trust, in the Office of the Secretary of the Interior, conducts annual reviews of tribes' performance of trust functions assumed under the Tribal Self-Governance Act of 1994. The office prepares federal Indian trust protection standards and guidelines and reviews significant decisions affecting American Indian trust resources, including treaty rights. To describe the trust asset management problems that the Strategic Plan proposes to resolve, we reviewed the problems identified in the Strategic Plan and relied on our past work and the work of independent public accountants that Interior contracted with to perform financial statement audits and reviews. To summarize the Strategic Plan, we reviewed the Plan, its accompanying appendixes, and other supporting documents.
We met with Office of the Special Trustee officials, including the Special Trustee for American Indians, and officials in BIA, BLM, and MMS to obtain clarification on certain aspects of the Plan. To explain the basis for the cost estimates contained in the Strategic Plan, we reviewed its budget document and the cost data it provided. We contacted OST officials for further information, as necessary. As agreed with your office, we did not attempt to validate the estimates presented in the Plan or their underlying assumptions, nor did we assess whether the estimates included all necessary costs of full implementation of the Plan. To identify implementation issues, we analyzed the Plan in detail and relied on our past work on Indian trust fund accounting and asset management issues. We also met with Department of the Interior officials, the Special Trustee for American Indians, and officials in BIA, BLM, and MMS and contacted the Director of Interior's Office of American Indian Trust to obtain their views on the Plan. In addition, we reviewed tribal comments on the Plan, which were provided to the Special Trustee as a result of his consultation meetings with the tribes. Although our work identifies key issues that the Congress needs to consider in deciding whether to approve the initiatives described in the Plan, it is by no means all-inclusive, and there are other issues yet to be identified. We conducted our work between April and July 1997 in accordance with generally accepted government auditing standards. As we have reported in the past, Interior's Indian trust fund accounting and asset management problems are long-standing and permeate all facets of the trust fund management business cycle. They include (1) the lack of accurate, up-to-date information on ownerships to ensure that revenue is distributed to the correct account, and the increasing workload associated with fractionated ownerships, (2) inadequate management of natural resource assets, resulting in a lack of assurance that all earned revenues are collected, (3) weaknesses in trust fund management systems, internal controls, and policies and procedures that result in a lack of assurance about the accuracy of trust fund balances, and (4) the failure, in the past, to consistently and prudently invest trust funds and pay interest to account holders. These overall weaknesses preclude account holders from having assurance that their account balances are accurate and that their assets are being prudently managed. Currently, trust fund accounting and asset management are complicated by the lack of adequate numbers of trained field staff. In fiscal year 1996, the Congress transferred the funding for BIA's Financial Trust Services to OST. As a result, on February 9, 1996, a Secretarial Order made OST responsible for accounting for IIM receipts, and a number of BIA staff were transferred to OTFM. However, at a number of area and agency offices, small staffs handle a wide variety of duties, of which trust activities are only one part. Consequently, there are insufficient field staff at present to provide separate, full-time collection and accounting staff for OTFM and separate, full-time leasing and ownership recordkeeping staff for BIA.
As a result, depending on the agency office, either OTFM or BIA performs IIM accounting functions and procedures for processing receipts, leasing activities, paying allowed claims, administering IIM accounts (including establishing new accounts), monitoring leases, performing guardianship activities, and billing and printing checks. In addition, lines of supervision and accountability are sometimes blurred. This problem has not yet been resolved. Moreover, continued fractionation of Indian land and lease ownerships has seriously complicated trust fund accounting and asset management. According to the Strategic Plan, Interior may soon be unable to cope with the recordkeeping of land titles and accurate distribution of income due to the worsening fractionation. The Plan contains a proposal for dealing with this problem. Interior officials agree that fractionation must be reduced and eliminated to ensure the success of Indian trust fund accounting and resource management reforms. Interior has submitted a legislative proposal for congressional consideration. The Strategic Plan proposes a two-phase change to Interior's current organizational and management structure for Indian trust management. The first phase would establish a single organization for trust management activities—the American Indian Trust and Development Administration (AITDA)—independent of the Interior Department. The proposed organization would be in the form of a government-sponsored enterprise (GSE). The AITDA would be organized by function—such as accounting or land titles—and would be managed by a full-time Chairman and Chief Executive Officer and a five-member Board of Directors appointed by the President and confirmed by the Senate. Three members are to be proposed by the Indian community, and two members—the Chairman and the Chief Executive Officer—are to have financial and trust management expertise and may also be American Indians. Board members would serve staggered terms of 12 years. Attachment II provides a chart showing AITDA's organizational components and its lines of coordination with Interior agencies. The Plan proposes that AITDA assume the federal government's Indian trust authority related to Indian trust funds and assets. It also proposes that certain organizations and related funding be transferred to AITDA—including OST and OTFM, BIA's Land Title and Records Office, and Interior's Office of American Indian Trust—along with various Interior agency records management functions related to trust fund accounting and asset management. Specifically, the Plan proposes that responsibility for and funding of the following Interior asset management and compliance functions be transferred:

BIA's leasing activities and its Land Title and Records Office, to AITDA's Trust Resources Management Division;

BLM's lease inspection, enforcement, and production verification activities, to AITDA's Trust Resources Management Division;

MMS' compliance and valuation function, to AITDA's Risk Management and Control Division; and

Interior's Office of American Indian Trust, to AITDA's Risk Management and Control Division.

According to the Plan, AITDA would use the funds transferred from BIA, BLM, and MMS to contract with these agencies or with tribes to perform the related trust asset management activities. It would also use funds transferred from MMS to contract with MMS for compliance and control functions and to perform oversight of self-governance tribes.
However, AITDA would have the option to contract with other entities for these services. In addition, the Plan would create the following three new organizations within AITDA: the National Indian Fiduciary Records Center, responsible for controlling and preserving all Indian trust-related records, to be located in Albuquerque, New Mexico, near OTFM; a trust risk management unit to conduct operational audits, credit and compliance reviews, and audits of outside servicers (including BIA, BLM, MMS, and tribes) and to perform appraisals and other asset management functions; and a centralized technical center for data processing. To support the operations of the new organization, the Plan calls for hiring qualified staff; acquiring or modifying trust fund accounting and asset management systems; developing policies and procedures and internal controls; and implementing internal and external audit functions. The major systems that would collectively support the new organization fall under an umbrella concept known as the Trust Asset and Accounting Management System (TAAMS). TAAMS would include trust asset and accounting systems, a land title and records system, and a trust fund general ledger accounting system. The second phase of the Strategic Plan would establish a bank and trust company—the American Indian Trust and Development Bank (TD Bank)—to provide full financial services and economic development funding to Indians. The TD Bank would be a nationwide financial institution, backed by the full faith and credit of the U.S. government, that lends to, invests in, and provides financial services for American Indians, tribes, and their communities. The TD Bank's Board of Directors would consist of five members appointed by the President and confirmed by the Senate and would be "identical with AITDA's Board." The TD Bank would initially be capitalized at $500 million by the federal government through "appropriations, judgment funds, or funds provided by other government-sponsored enterprises." This initial capital would be permanent. Ownership of the TD Bank would be distributed in initial capital stock to federally recognized American Indian tribes in proportion to the number of Indians living on or near reservations, as determined by the latest census or other appropriate information. This stock ownership would not be subject to sale, trade, or withdrawal. Except for the right to receive dividends and qualify for certain types of loans, the Plan does not explain the rights and privileges that tribes would have as a result of their stock ownership. The TD Bank would be a for-profit financial institution but could also receive appropriations to provide for the cost of lifeline financial services and the cost of other programs that the Congress may choose to authorize in the future. The TD Bank would be authorized to invest up to 25 percent (initially $125 million) of its permanent capital in eligible individual Indian and tribal business ventures and projects. The TD Bank would also be allowed to invest up to $300 million for the purchase, holding, and financing of fractionated Indian realty interests on allotted lands. The Plan also proposes that the TD Bank be authorized to receive up to $5 billion in additional funding from borrowing to provide loans and other economic development funding to American Indians. The additional funding would include $3 billion from Treasury borrowing and $2 billion from the sale of bonds and notes to be guaranteed by the U.S. government.
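As a point of reference, the funding figures quoted above for the TD Bank are internally consistent:

```latex
% Consistency check of the TD Bank funding figures, using only numbers quoted above
\[
0.25 \times \$500 \text{ million} = \$125 \text{ million (initial investment ceiling)}
\]
\[
\$3 \text{ billion (Treasury borrowing)} + \$2 \text{ billion (guaranteed bonds and notes)} = \$5 \text{ billion (additional funding)}
\]
```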
The TD Bank would provide financial services through 50 to 75 branch offices located on or near major American Indian communities. In addition, Phase II of the Plan calls for systems technology enhancements and a new headquarters building. These proposals are not fully discussed in the Plan. Phase I of the Strategic Plan includes initiatives that are directed toward (1) data conversion, reconciliation, and backlog cleanup; (2) upgrading some existing systems and acquiring new systems; and (3) substantially changing the way existing programs and functions are performed. To implement these initiatives, the Strategic Plan includes budget estimates indicating that about $168 million will be needed for fiscal years 1997 through 1999 and approximately $61 million and $56 million in fiscal years 2000 and 2001, respectively. These cost estimates are generally based on the OST contractor's assessment of the costs of similar functions performed by private sector trust companies; vendor estimates; actual costs of functions currently performed by certain Interior agencies; and assumptions about the workload, staffing, and number of locations to be serviced. We did not attempt to validate the estimates presented in the Plan or their underlying assumptions, nor did we assess whether the estimates included all necessary costs of full implementation of the Plan. Table 1 summarizes the cost estimates in the Strategic Plan. Attachment III details the basis for each of the Phase I cost estimates in the Strategic Plan. Phase II costs in the Strategic Plan include the previously discussed capitalization and funding of the TD Bank and the fractionated realty holdings, purchase, and sales program. Costs for Phase II would also include automated systems modifications and acquiring a headquarters building. Estimates of these costs are not provided in the Strategic Plan. A number of areas require further clarification, planning, or consideration before the Plan can move forward. These include implementation timing of certain initiatives, such as records cleanup and acquiring a new IIM accounting system component; proposals, such as establishing a centralized organization and upgrading and acquiring systems, that need more planning before they can be successfully implemented; and issues requiring congressional consideration that relate to the desirability and feasibility of establishing the new organization as a private entity and establishing the trust development bank. Past audits by independent public accounting firms, Interior's Office of Inspector General, and GAO have identified serious internal control and systems weaknesses that impair the reliability of trust fund accounting. To resolve these weaknesses, auditors have made recommendations for BIA and Interior to take timely actions, such as correcting inaccurate and incomplete IIM accounting records, eliminating ownership determination and recordkeeping backlogs, and establishing a master lease file. The Special Trustee has also concluded that there is an urgent need to take action to correct increasingly deteriorating recordkeeping deficiencies. Because Interior lacks the financial and managerial resources to clean up the records, the Special Trustee proposes that the cleanup be outsourced to independent contractors. This proposal is consistent with our past recommendations. Cleanup of IIM accounts is under way, and cleanup of appraisal and lease and ownership backlogs could begin within a relatively short time.
As part of TAAMS, the Strategic Plan proposes that the commercial trust accounting and investment system—which is currently used by OTFM for tribal accounts—be expanded to include a component for IIM accounting. Currently, IIM accounts are maintained on BIA's Integrated Records Management System (IRMS), which is not a trust accounting system. However, in determining the appropriate timing for acquiring an IIM commercial trust accounting system component, certain questions need to be addressed, including whether to (1) convert all IIM accounts to the new system immediately or convert them as they are cleaned up, (2) identify and archive inactive accounts before conversion, and (3) convert small-balance or pass-through accounts (zero-balance accounts where receipts are immediately withdrawn) to the new system or maintain them separately. Once these issues and any other identified issues are resolved, the IIM accounting system expansion should be able to move forward, assuming it can reasonably be expected to support the systems and interfaces required to build an integrated TAAMS. The Strategic Plan includes proposals for establishing a centralized organization responsible for trust fund accounting and asset management and upgrading or acquiring systems to support these functions. While the basic premise—the need for a central organization and major systems improvements—may be sound, the Plan does not adequately address how these reforms would be implemented. For example, the Strategic Plan refers to MMS' mineral royalty collection and accounting function, but it also refers to AITDA acquiring a mineral management and accounting system. In addition, the Plan does not adequately define all interrelated business functions, such as the co-located BIA, BLM, and MMS mineral program office in Farmington, New Mexico, or how the proposed new organization will work with BIA, BLM, and MMS to provide assistance to tribes on mineral leasing activities. Furthermore, the Plan does not adequately address how BIA's agriculture, forestry, and realty activities will be performed in the future. Finally, the Plan was developed without sufficient input from affected Interior agencies. For example, BIA, BLM, MMS, and Office of American Indian Trust officials told us that they were not consulted on the development of the Plan. Changes in trust systems outlined in the Plan could have major effects on the business processes and practices in these agencies. The Plan needs to be more fully developed to (1) provide adequate evidence of a framework for sharing related business and functional information and program requirements among the cognizant organizations and functions and (2) support the design and development of management and information systems. In addition, before proceeding with the major information technology investments proposed by the Plan, the processes and structures required by the Paperwork Reduction Act of 1995, the Clinger-Cohen Act of 1996, and OMB guidance for funding information systems investments need to be put in place. These include the development of a strategic Information Resources Management (IRM) plan, criteria for the evaluation of major information system investments, and an information architecture that aligns technology with mission goals. Because OST has not developed a strategic IRM plan, an investment process, or an information architecture, the organization risks acquiring systems that will not meet its business needs.
In late May 1997, in response to the Clinger-Cohen Act, Interior hired a Chief Information Officer (CIO) with both industry and federal agency experience. The CIO and the Special Trustee need to work closely to ensure that the investments in information systems are made appropriately and effectively. Because of the systems' size, impact, and complexity, the Department has reported to the Office of Management and Budget that these trust systems constitute a major information system investment for Interior. Two fundamental issues need to be addressed before the Congress can make further decisions on whether and how to implement the Strategic Plan's proposed initiatives. These two issues relate to the desirability and feasibility of establishing (1) AITDA as a government-sponsored enterprise (GSE) and (2) the Indian Trust and Development Bank. The Plan needs to provide more information on each of these proposals in order to support full consideration by the Congress. Specifically: (1) The Strategic Plan proposes the establishment of AITDA as a single organization responsible for trust fund and asset management activities. The Plan proposes that AITDA be a GSE, which is, typically, a private corporation. The Plan should more fully address the extent to which the United States may transfer trust authorities and responsibilities to a GSE. The government assumed many of these authorities and responsibilities as a result of treaties negotiated with individual Indian tribes. Although the Plan characterizes AITDA as a GSE, it proposes that AITDA receive appropriations and congressional oversight. The Plan does not identify, however, the amount of funding or whether the funding will be appropriated directly to AITDA or provided in the form of grants or borrowing authority. Also, the Plan does not discuss what is meant by congressional oversight. The Plan proposes that AITDA, a private entity, oversee the functions of various Interior agencies. Typically, nonfederal entities do not have oversight responsibilities for federal agencies. This issue needs to be addressed in the Plan. (2) The Strategic Plan proposes the establishment of an Indian Trust and Development Bank. The Plan also proposes that the TD Bank receive appropriations, judgment funds, or funds provided by other GSEs. Under current law, judgment funds are not available to fund programs. Also, the nature and type of contractual arrangement with private sector institutions need further clarification and explanation. In addition, the basis for capital to be provided by other GSEs needs to be defined and clarified. The relationship, contractual or otherwise, that would exist between AITDA and the TD Bank is not fully defined. This relationship, including the degree of liability that AITDA would be subject to regarding the TD Bank's operations, also needs to be defined. The Strategic Plan proposes that the TD Bank provide a wide range of lifeline services at no cost or at a subsidized cost. These services include basic functions such as checking and savings accounts, money orders, and account statements, but also include tax, investment, and retirement planning services. Because these services would likely be funded by appropriations, their cost needs to be identified. The Plan would also require that the federal government maintain equity capital equal to 5 percent of average risk-adjusted assets.
Because this requirement could result in significant additional contributions by the federal government if the TD Bank incurred losses or expanded, its appropriateness needs to be addressed. Limitations on who can be a customer or shareholder (whether only tribal members with certificates of Indian blood and federally recognized tribes, or others, including non-Indians) need to be defined and clarified. These are key implementation issues that must be considered before the Plan can move forward. Additional information is needed from the Special Trustee about the proposed organization so that the Congress may carefully consider the government's Indian trust responsibility; the type of organization, funding, and oversight; the types of programs and services to be provided by the new organization; and the relationship of any new organization to the Interior Department and other external organizations. Once these and other organizational issues are resolved, the next step is to proceed with the development of the information systems planning described earlier. In our view, both the additional organizational planning and the information systems planning are essential to the success of this important endeavor. Mr. Chairman and Mr. Vice Chairman, this concludes my statement. I would be glad to answer any questions that you or the Members of the Committee might have. The Strategic Plan includes budget estimates indicating that about $168 million will be needed for fiscal years 1997 through 1999 and about $61 million and $56 million for fiscal years 2000 and 2001, respectively, to implement Phase I of the Plan. The Office of the Special Trustee's fiscal year 1997 appropriations included a little over $13 million to begin these improvements. Table III.1 summarizes the cost estimates contained in the Strategic Plan, followed by detailed explanations of the basis for these cost estimates. As agreed with committee staff, we did not attempt to validate these estimates or assess whether they represent the full cost of implementing the Plan. The following discussion explains the basis for the cost estimates for the three main components of the Plan—data conversion, reconciliation, and backlog cleanup; upgrading and acquiring systems; and forming a new organization. The principal objective of Phase I of the Strategic Plan is to address and resolve the root causes of the long-standing trust management problems as quickly as possible. The Plan proposes that $49 million be provided for fiscal years 1997 through 1999, and $8.6 million and $7.2 million for fiscal years 2000 and 2001, respectively, to support data conversion to new systems. These estimates include cleanup of probates, land title records, IIM and tribal accounting records and reconciliations, and appraisals. Table III.2 summarizes these costs. To eliminate probate backlogs, the Plan proposes that $11.5 million be provided for fiscal years 1997 through 1999 and $1.4 million be provided in fiscal year 2000. This estimate includes approximately $1.1 million for BIA agency office initial document preparation, $2.4 million for probate hearings and appeals, and $8 million for BIA's Land Title and Records Office (LTRO) title and ownership determination and recordkeeping for fiscal years 1997 through 1999. The Plan also estimates that $1.4 million will be needed for fiscal year 2000 to complete the LTRO effort.
The estimate of $1.1 million for fiscal years 1997 through 1999 to reduce the backlog associated with BIA agency office preparation of probate documents was based on OST's estimate of a backlog of 3,500 probates and an average workload of 10 completed probates a month per probate clerk, or 120 per year. Thus, the Plan proposes providing a total of 30 probate clerk staff years at a GS-7 salary and benefits rate of $38,000 a year. The estimated $2.4 million for fiscal years 1997 through 1999 to eliminate probate court hearing and appeals backlogs was based on OST's estimate of a backlog of 3,453 cases. The Plan proposes providing an additional 12 administrative law judges, 12 paralegals, and 12 secretaries at the GS-7 level to eliminate the backlog. To eliminate backlogs in land title and ownership determinations and recordkeeping, the Plan proposes that $8 million be provided for fiscal years 1997 through 1999 and that an additional $1.4 million be provided for fiscal year 2000. This estimate is based on BIA information on backlogs and the level of effort needed to complete the tasks shown in table III.3. To clean up inaccurate IIM accounting records and perform data and document checks, the Plan proposes that $7.4 million be provided through fiscal year 1999. Cost estimates for cleanup of IIM accounting records are based on OTFM's experience with records cleanup at five field offices. As of mid-July 1997, OTFM had performed work at 11 BIA agency offices to clean up IIM records. Cost estimates for data conversion of IIM and lease records from BIA's Integrated Records Management System (IRMS) to the expanded trust accounting system are $2.2 million for fiscal years 1997 through 1999. These estimates are based on data obtained from private sector trust companies for conversion of similar data. To support Land Records Information System (LRIS) conversion, reconciliations of ownership data, and cleanup of defective titles, the Plan proposes that $4.6 million be provided for fiscal years 1997 through 1999 and that $2 million each be provided for fiscal year 2000 and fiscal year 2001. These estimates cover Land Title and Records Office research, review, and identification of all tracts with title defects. The Plan estimates that $3 million will be needed for imaging cleanup for fiscal years 1997 through 1999. The cost estimate is based on the level of work needed to complete the electronic imaging of documents (preparing a hard-copy document in electronic form) identified for the tribal reconciliations. To support asset management and cleanup appraisals, the Plan proposes that $20 million be provided for fiscal years 1997 through 1999 and that $5 million each be provided for fiscal year 2000 and fiscal year 2001. The cost estimates are based on (1) OTFM's estimates that there are, on average, 100,000 active leases at a given point in time and that 20,000 of these produce 80 percent of the total lease revenues, (2) fee information for summary report appraisals obtained through interviews with private sector appraisal companies in two geographic areas of the country, and (3) appraisal policy assumptions that transactions producing 80 percent of the total lease revenue from Indian lands would receive an outside, certified, independent appraisal during fiscal years 1998 and 1999 and once every 5 years after 1999, and that 5 percent of smaller-revenue leases would receive an independent appraisal annually.
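Taking the Plan's workload and salary assumptions at face value, the probate figures quoted above can be checked with simple arithmetic (rounding accounts for the small discrepancies):

```latex
% Check of the Plan's probate backlog arithmetic, using only figures quoted above
\[
3{,}500 \text{ probates} \div 120 \text{ probates per clerk-year} \approx 29.2
  \quad\Rightarrow\quad 30 \text{ probate clerk staff years}
\]
\[
30 \text{ staff years} \times \$38{,}000 \approx \$1.14 \text{ million} \approx \text{the Plan's } \$1.1 \text{ million}
\]
\[
\$1.1\text{M} + \$2.4\text{M} + \$8\text{M} = \$11.5 \text{ million proposed for fiscal years 1997--1999}
\]
```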
The Plan estimates that about $61 million would be needed for fiscal years 1997 through 1999 and approximately $31 million and $29 million for fiscal years 2000 and 2001, respectively, to implement the new systems. The estimated systems and infrastructure costs are shown in table III.3, which covers the Trust Asset and Accounting Management System (TAAMS), the Land Title and Records Management System (LTRMS), and the General Ledger System (GLS). According to the Plan, the $18 million for TAAMS includes about $17 million for the trust fund accounting system and about $.8 million for a trust real property system for fiscal years 1997 through 1999. The accounting system costs are based on estimates obtained from commercial trust system vendors and include estimated annual account maintenance fees of $35 per IIM account per year for 350,000 IIM accounts and $85 per tribal account per year for 1,500 tribal accounts, which would total about $12 million. The estimates also include one-time licensing and start-up fees and user fees. The Trust Real Property Management System component of TAAMS would provide for management and administration of an estimated 100,000 active and pending surface and mineral leases each year. The Trust Real Property Management System would consist of two major components—an asset management information system (including a history file) and one or more surface and mineral property management and accounting systems, depending on the type of asset, such as real estate rentals, mineral leasing, and timber contracts. The cost estimate for the Trust Real Property System is based on pricing structures for private sector lease management software that is compatible with commercial trust accounting systems. Cost estimates total about $.8 million for fiscal years 1997 through 1999, including a one-time fee for an interface with the trust financial system. The Plan also proposes a Land Title and Records Management System to provide land title and records management and administration of over 170,000 tracts of land and related title documents. This system is estimated to cost $11 million for fiscal years 1997 through 1999, with costs of ongoing operations of about $6.1 million in fiscal year 2000 and about $4.6 million in fiscal year 2001. These cost estimates are based on BIA estimates for LRIS upgrades to achieve automated chain-of-title and records storage, which were included in OST's fiscal year 1998 budget request. The Plan proposes that $2.3 million be provided for fiscal years 1997 through 1999 and that $.3 million each be provided for fiscal years 2000 and 2001 for a trust fund accounting general ledger system. These estimates are based on private sector vendor information and OTFM's current general ledger trust accounting system needs. The Plan proposes that $3 million be provided for fiscal years 1997 through 1999 and that $.3 million be provided each year thereafter for ongoing costs of integrating and implementing the systems described above. To support the overall TAAMS, the Strategic Plan proposes that $26.5 million be provided for fiscal years 1997 through 1999 for an information technology infrastructure, plus approximately $12 million in fiscal year 2000 and about $10 million in fiscal year 2001 for ongoing costs. This infrastructure is to include systems architecture, a local area network, and systems installation.
The infrastructure estimate includes $4 million for fiscal years 1997 through 1999, $2 million in fiscal year 2000, and approximately $1 million in fiscal year 2001 for computer equipment and end-user training for tribes. Cost estimates for these systems components are based on private sector vendor fee schedules for servicing over 1,900 sites; 450 tribal and 1,535 AITDA workstations; a network and 120 file servers; software; encryption; laser printers; support; and maintenance. Implementing the new organization is estimated to cost about $52 million for fiscal years 1997 through 1999, and ongoing costs are estimated at $19.4 million for fiscal year 2000 and $18.2 million for fiscal year 2001. These estimates are based on OST's strategic planning contractor's analysis of private sector equipment, systems, and software costs. According to the Strategic Plan, implementation costs include those shown in table III.4. Training of AITDA, BIA, MMS, and BLM staff is estimated to cost $10 million for fiscal years 1997 through 1999, $2.5 million in fiscal year 2000, and $2.3 million in fiscal year 2001. These estimates are based on a training needs assessment and information obtained from private sector vendors on the costs of commercially available courses. The estimate includes nearly $5 million for function and task training for all levels of the organization and over $5 million directed at training four functional groups—end users, end-user support, application developer support, and trust management systems staff. Training costs are projected to vary from $150 per day to $365 per day for each participant, depending on the type of training and such factors as the cost to transport trainers to remote locations. In addition to the $10 million for training AITDA, BIA, MMS, and BLM staff, the Plan proposes $2.7 million for end-user training for tribes. That amount is included in the systems infrastructure cost estimates shown in table III.3. Cost estimates of $4.2 million for fiscal years 1997 through 1999 and $.2 million each for fiscal years 2000 and 2001 are to cover development of policies and procedures and legal manuals. These estimates are based on OST's strategic planning contractor's assessment of costs for similar efforts at private trust banks. The cost estimates of $9 million for fiscal years 1997 through 1999 and $4.5 million each for fiscal years 2000 and 2001 for the new risk management organization are based on actual amounts spent by risk managers in two separate private sector trust companies with a scale of operations similar to the trust management activities at Interior. As proposed by the Strategic Plan, risk management and control activities would include internal and external audits, review and approval of policies and procedures, oversight of appraisal and leasing functions, and computer security. Risk management and control would be carried out by a Risk Control Group, which would monitor the effectiveness of systems and controls, and an Audit Group, which would be responsible for audit and review of service bureau functions provided by BIA, BLM, MMS, and tribes. The Audit Group would include the following: an Asset Review Division, responsible for internal and external audits and evaluations; an Appraisal Services Division, responsible for assessing real property values and market trends affecting leasing decisions for natural resource assets and portfolio management; and a Compliance Division, responsible for ensuring compliance with laws and regulations.
The Plan proposes transferring Interior's Office of American Indian Trust and its funding, as well as MMS' funding for compliance and valuation functions, to AITDA's Compliance Division. The Plan does not include these funds in the AITDA budget proposal because they do not represent new funding; rather, they represent existing funding that would be transferred to AITDA from these Interior agencies. The incremental costs of the proposed risk management and control function are shown in table III.5. The remaining costs of the new organization include archives and records management, external professional services, and overall management. Archives and records management costs are based on OST's estimates of costs for a leased facility, personnel, building lease, equipment, supplies, and shipping. The cost for external professional services is based on private sector fee schedules for system integration services. These costs were expected to be $6 million for fiscal years 1997 through 1999, and ongoing costs are estimated at $2.5 million for fiscal year 2000 and $1.5 million for fiscal year 2001. Overall management costs associated with AITDA are estimated at $5.5 million for fiscal years 1997 through 1999, including $4.9 million for AITDA's executive management and $.6 million for its Advisory Board. Ongoing costs are estimated at $1.9 million, including $1.7 million for AITDA and $.2 million for the Advisory Board, in both fiscal years 2000 and 2001. According to OST, AITDA cost estimates are based on OST's current authorized staffing and related operating costs for office space and travel. Advisory Board cost estimates are based on current OST Advisory Board costs for travel and per diem at government rates. The Paperwork Reduction Act of 1995 requires agencies to use information resources in a manner that improves the efficiency and effectiveness of their operations in the fulfillment of their missions. The Clinger-Cohen Act of 1996 requires federal agencies to focus on the results that they are achieving through information technology (IT) investment. The act requires the head of an agency to implement a process for maximizing the value and assessing and managing the risks of the agency's IT acquisitions and ensuring the development of reliable financial and program performance information. The act also requires agencies to appoint a Chief Information Officer (CIO) to direct and oversee agency information resource management. The Federal Acquisition Streamlining Act of 1994 requires agencies to define cost, schedule, and performance goals for federal acquisitions, including IT projects, and to monitor the programs to ensure that they remain within prescribed tolerances. OMB Circular A-127, Financial Management Systems, prescribes policies and standards for executive departments and agencies to follow in developing, operating, evaluating, and reporting on financial management systems. OMB Memorandum M-97-02, "Funding Information Systems Investments," commonly referred to as "Raines Rules," prescribes decision criteria for evaluation of major information system investments proposed for funding in the President's fiscal year 1998 budget. Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994). Assessing Risks and Returns: A Guide for Evaluating Federal Agencies' IT Investment Decision-making, Version I (GAO/AIMD-10.1.13, February 1997).
GAO discussed the results of its analysis of the Special Trustee for American Indians' Strategic Plan for Indian trust fund accounting and asset management improvement, focusing on: (1) the trust asset management problems that the Strategic Plan proposes to resolve; (2) a high-level summary of the Strategic Plan; (3) the basis for the cost estimates included in the Plan; and (4) implementation issues, including key issues that the Congress would need to consider in deciding whether to approve the initiatives described in the Plan. GAO noted that: (1) management of the Indian trust funds and assets has long been characterized by inadequate accounting and information management systems, untrained and inexperienced staff, backlogs in appraisals and ownership determination and recordkeeping, lack of a master lease file and an accounts receivable system, inadequate written policies and procedures, and poor internal controls; (2) because of these overall weaknesses, account holders do not have assurance that their account balances are accurate and that their assets are being prudently managed; (3) to address the Department of the Interior's long-standing Indian trust fund accounting and asset management problems, the Congress passed the American Indian Trust Fund Management Reform Act of 1994, which created the Office of the Special Trustee for American Indians; (4) the act required that the Special Trustee provide oversight of reforms within Interior, including development of policies, procedures, and systems; (5) in April 1997, the Special Trustee submitted his Strategic Plan to the Congress; (6) the Strategic Plan proposes a new organization, independent of Interior, to administer trust fund accounting and asset programs; (7) these proposals are estimated to cost $168 million for fiscal years 1997 through 1999 and another $61 million and $56 million for fiscal years 2000 and 2001, respectively; (8) in addition, the Plan proposes establishing an Indian economic development bank to be capitalized by the federal government; (9) a number of areas require further clarification, planning, or consideration before the Plan can move forward; (10) these include: (a) implementation timing of certain initiatives, such as records cleanup and the acquisition of a new Individual Indian Money accounting system component; (b) proposals, such as establishing a centralized organization and upgrading and acquiring systems, that need more planning before they can be successfully implemented; (c) issues relating to the desirability and feasibility of establishing the new organization as a private entity, including the legality of transferring the federal government's trust authorities and responsibilities to such an entity; and (d) issues relating to the establishment of the trust development bank, including the initial funding and on-going capital maintenance proposals; and (11) in order to appropriately address these issues, more information and analysis need to be included in the Plan to provide clarification of the authority and responsibility of the proposed organization, and its relationship to Interior.
The risks associated with natural disasters have always been a part of farming. Historically, farmers assumed these risks as part of the hazards of doing business. Since the 1930s, many farmers have been able to transfer part of the financial losses from these risks to the federal government through subsidized crop insurance. Before 1980, the crop insurance program was smaller, covering fewer crops and locations, and its premiums were generally adequate to pay the claims. Since the program was expanded in 1980 to cover more crops in more locations, it has not been financially stable, paying out more in claims in most years than the premiums the farmers and the government had paid in. To reduce the government’s cost for the crop insurance program, the Congress required that, by October 1, 1995, the U.S. Department of Agriculture (USDA) lower the program’s projected losses from over $1.40 in claims paid for every $1 of premiums taken in to $1.10 or less. In March 1994, USDA issued a plan explaining how it expected to achieve the desired improvement. Federal crop insurance is a program that is relatively simple in concept but highly complex in implementation. Farmers who buy crop insurance can file claims for part of the money that they would otherwise lose when droughts, floods, infestations of insects, or other natural disasters keep them from harvesting their normal expected crop. The size of the claim depends on the extent of the crop loss and the amount of insurance coverage the farmer has purchased. Two types of coverage—catastrophic and additional—are available for most major crops under changes made by the Congress in 1994. Under catastrophic coverage, the government provides a free minimum level of coverage to farmers for a small processing fee. The government pays the premium for this insurance. Farmers must sign up for this program if they sign up for the annual USDA commodity programs; obtain USDA farm ownership, operating, or emergency loans; or contract to place land in the Conservation Reserve Program. They can sign up through their local Consolidated Farm Service Agency office—the USDA agency responsible for administering the program—or obtain their policy from a participating private insurance agent. The free catastrophic program protects farmers against extreme losses. The program pays farmers only when they are able to harvest less than 50 percent of their normal crop. The normal crop is determined on the basis of a farmer’s past production history as reported to the USDA office or insurance agent. If a farmer does not report past production, that farmer’s normal crop is determined by using a modified average production level for the county, reduced by a discount, because of the uncertainty of the farmer’s expected production. For losses in production below the 50-percent level, farmers are paid 60 percent of USDA’s estimated market price. Farmers can purchase additional insurance from participating private insurance companies. As authorized by the 1980 act redesigning and expanding the program (P.L. 96-365, Sept. 26, 1980), the managers of USDA’s crop insurance program have entered into reinsurance agreements authorizing the participating insurance companies to sell the insurance and process the resulting claims. The government pays part of the farmers’ premium. Farmers who purchase this additional insurance must choose both the coverage level (the proportion of the crop to be insured) and the unit price (e.g., per bushel) at which any loss is calculated. 
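Before turning to the coverage and price elections for additional insurance, the catastrophic payment rule described above can be restated as a short sketch. This is an illustrative reading only, not USDA's actual calculation; the function name and the sample figures are hypothetical.

```python
def catastrophic_payment(normal_yield, actual_yield, market_price, acres=1.0):
    """Sketch of the catastrophic coverage rule described above: no payment
    unless the harvest falls below 50 percent of the farmer's normal crop;
    the shortfall below that 50-percent level is paid at 60 percent of
    USDA's estimated market price."""
    trigger = 0.50 * normal_yield          # payments begin below this yield
    if actual_yield >= trigger:
        return 0.0
    shortfall = trigger - actual_yield     # bushels below the 50-percent level
    return shortfall * 0.60 * market_price * acres

# Hypothetical case: a 100-bushel normal crop, 30 bushels harvested, and a
# $2 USDA-estimated price pay 20 bushels at $1.20, or $24 per acre.
print(catastrophic_payment(100, 30, 2.00))
```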
With respect to level of production, farmers can choose to insure as much as 75 percent of normal production (25-percent deductible) or as little as 50 percent of normal production (50-percent deductible) at different price levels. With respect to the unit price, farmers choose whether to value their insured production at USDA’s full estimated market price or at a percentage of the full price. USDA sets the premium rates and assigns correspondingly higher premiums for higher production and price levels. The following example illustrates how a claims payment is determined. A farmer whose normal crop production averages 100 bushels of corn per acre and who chooses to buy insurance at the 75-percent coverage level will be guaranteed 75 percent of 100 bushels, or 75 bushels per acre. Assuming that the farmer had chosen the maximum price coverage and that USDA had estimated the market price for corn at $2 per bushel, the farmer would have total coverage of $150 per acre. Should something like drought cut the farmer’s actual harvest to 25 bushels, the farmer will be paid for the loss of 50 bushels per acre—the difference between the insured production level of 75 bushels and the actual production of 25 bushels. The insurance would pay the farmer’s claim at $2 x 50 bushels, or $100. In addition, the crop insurance program’s “prevented planting” provision pays farmers who have purchased insurance but never planted crops because of adverse weather conditions. These farmers are entitled to claims payments ranging from 35 to 50 percent of the coverage they purchased, depending on the crop. Critical to the success of the crop insurance program is aligning the premium rates with the risk each farmer represents. The riskiness of growing a particular crop varies from location to location, from farm to farm, and from farmer to farmer. If the rates are too high for the risk represented, farmers are less likely to purchase insurance, lowering the program’s income from premiums. Conversely, if the rates are too low, farmers are more likely to purchase crop insurance, but because the rates are too low, the income from premiums will be insufficient to cover the claims. To align crop insurance premium rates with the risk represented, USDA establishes rates that vary by crop, location (county), farm, and farmer. Because of all the combinations involved, literally hundreds of thousands of premium rates are in place. For this review, we examined crop insurance rates at the state level for six major crops: barley, corn, cotton, grain sorghum, soybeans, and wheat. For these crops, the average premium rates for crop insurance purchased at the 65-percent coverage level in 1994 varied widely among the states. As shown in figure 1.1, the average rates ranged from a low of $1.95 per $100 of insurance coverage for wheat in one state to a high of $32.94 per $100 of insurance coverage for soybeans in another state. To adjust the hundreds of thousands of rates it publishes each year, USDA goes through a multistep process involving considerable computer analysis and judgment. USDA’s objective is to set the rates that each farmer pays according to the risk associated with the farmer’s location, crop, past production, and past losses. For the six crops we reviewed, USDA begins its rate-setting process each year by looking at the crop insurance experience over the past 20 years for each county and state. 
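Returning briefly to the claims example above, the calculation for additional coverage reduces to a guarantee-and-shortfall formula. The following sketch reproduces the corn example from the text; the function name is ours, and the logic reflects only the example as described.

```python
def indemnity_per_acre(normal_yield, coverage_level, price_election, actual_yield):
    """Claims payment per acre for 'additional' coverage, following the corn
    example in the text: the guarantee is the normal yield times the coverage
    level, and any shortfall below the guarantee is paid at the elected price."""
    guarantee = normal_yield * coverage_level        # insured bushels per acre
    shortfall = max(0.0, guarantee - actual_yield)   # bushels below guarantee
    return shortfall * price_election

# The report's example: a 100-bushel normal crop, 75-percent coverage, the
# $2 maximum price election, and a drought that cuts the harvest to 25 bushels.
print(indemnity_per_acre(100, 0.75, 2.00, 25))  # -> 100.0 ($2 x 50 bushels)
```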
On the basis of a county’s and state’s historical experience, USDA sets a basic rate for each crop in each county at the 65-percent coverage level for average production. Using this basic rate, USDA makes adjustments to establish rates for other coverage levels and for farmers whose production levels are higher or lower than the county’s average. This latter adjustment is based on USDA’s research showing that farmers with higher-than-average production levels are less likely to experience losses. USDA aligns rates with risk in several other ways as well. For example, it imposes an additional premium on those farmers who insure individual fields rather than all fields combined, purchase hail insurance, and are high risk as evidenced by frequent and high experience with claims. Moreover, for those farmers who have production records for fewer years than required to establish the amount of production that can be insured, USDA uses the modified average production level for their county, adjusting the production down according to the number of years for which the farmers have provided records. USDA’s rate-setting methodology is described in more detail in appendix I. Since 1980, when the Congress redesigned and expanded the crop insurance program to be the primary form of agricultural disaster assistance, the program has not been financially sound. USDA has regularly paid out more in claims than it received in premiums paid by farmers and the government. Two key requirements of the 1980 legislation were to (1) operate the program on a financially sound basis and (2) eliminate the need for government-funded disaster assistance by having most farmers buy crop insurance. The program has never met either requirement. First, to be financially sound, the program needed to realize more income from premiums, including the government’s subsidy, than it paid to settle farmers’ claims so that it could build up a cash reserve to pay farmers’ claims in years of catastrophic loss. As shown in figure 1.2, the claims paid per $1 of premium (including the government’s subsidy) for crop years 1981 through 1994 varied greatly from year to year, averaging $1.41. During this period, claims exceeded premiums by a total of $3.3 billion. The highest claims payments in relation to premiums were in 3 catastrophic years—resulting from severe droughts in 1983 and 1988 and excessive moisture and severe flooding in 1993. Excluding the 3 catastrophic years, the average claim per dollar in premiums was $1.22. Thus, even in years without catastrophic losses, the program consistently operated at a loss; catastrophic years just made the situation worse. Moreover, the Congress’s goal of having most farmers buy crop insurance to eliminate the need for direct government disaster payments was not reached. Farmers never insured more than 40 percent of their eligible acres, and the pressure for direct disaster assistance continued. In fact, the Congress passed emergency disaster legislation to cover several crop years in the 1980s and each crop year from 1988 through 1993. Over the period 1981-93, USDA paid farmers about $11 billion in disaster assistance payments. Adding this to the government’s $8 billion share of the cost of crop insurance, the government’s spending to assist farmers who lost crops exceeded $19 billion over the 13-year period. Figure 1.3 depicts the outlays by year. 
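The financial test running through this discussion is the loss ratio, that is, the dollars paid in claims for every $1 of premium income (farmer-paid premiums plus the government's share). A minimal sketch, using hypothetical yearly totals rather than actual program data:

```python
def loss_ratio(claims_paid, total_premiums):
    """Loss ratio as used in the report: dollars paid in claims for every
    $1 of premium income, including the government's premium subsidy."""
    return claims_paid / total_premiums

# Hypothetical yearly totals (millions of dollars); the statutory target
# effective October 1, 1995, is a projected ratio of 1.10 or less.
for year, claims, premiums in [(1992, 920.0, 760.0), (1993, 1700.0, 755.0)]:
    ratio = loss_ratio(claims, premiums)
    print(year, round(ratio, 2), "meets target" if ratio <= 1.10 else "exceeds target")
```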
The crop insurance program’s financial condition is influenced by several key management activities that, taken together, determine whether the program will produce sufficient income to cover claims. These key activities are setting appropriate premium rates, setting and enforcing the rules for calculating a farmer’s normal production level, establishing the periods when insurance can be sold, and setting and enforcing the rules for adjusting claims. Historically, these activities, individually and collectively, have prevented the crop insurance program from reducing its losses to an acceptable level. As we reported throughout the 1980s, USDA’s crop insurance program unsuccessfully attempted to achieve financial soundness at the same time it was rapidly expanding to include more crops and locations. In 1993, the crop insurance program’s acting manager acknowledged to a congressional committee that during the 1980s, the agency had focused “solely” on improving participation in the program and “sacrificed” actuarial soundness. Moreover, we and USDA’s Inspector General reported problems with the private insurance companies’ claims adjustments. In 1993, the Inspector General estimated an overpayment rate for claims of about 9 percent—an improvement over the 16-percent overpayment rate in 1987 payments that we had previously reported. Furthermore, we had previously identified inherent problems with crop insurance and problems in the design of the crop insurance program that made it exceedingly difficult for the program to be financially sound. Crop insurance is an inherently difficult proposition because many weather-related hazards can reduce crop production over large areas of the nation, thereby increasing the chance that a substantial number of policies will require payments during the same year. This widespread impact reduces the probability that financial stability can be achieved because risk pooling—the concept that limited premiums are paid by many to fund claims paid to relatively few—is less likely to be successful if most of the insured farmers simultaneously face severe losses. For example, in the severe drought of 1988, 92 percent of the 34,773 crop insurance policies purchased by wheat farmers in North Dakota and Montana resulted in payments for claims, as did 58 percent of the 65,159 policies purchased by corn farmers in Iowa, Minnesota, and Illinois. Similarly, in 1993—a year with extensive moisture and flooding—72 percent of the 71,131 crop insurance policies purchased by Iowa and Minnesota corn farmers resulted in payments for claims, as did 56 percent of the 54,909 policies purchased by soybean farmers in the same two states. Statutes and regulations designed to encourage participation in the program have further limited USDA’s ability to make the program financially sound because they encourage participation at the expense of appropriate rates. These provisions include (1) allowing all farmers to participate regardless of risk (entitlement); (2) allowing farmers to insure for production levels higher than would be expected on the basis of their production history, thereby increasing the likelihood that claims will be paid; (3) restricting USDA’s ability to increase premiums; and (4) allowing farmers more time to assess current growing conditions before purchasing insurance, which enables them to better determine the likelihood of loss and to purchase insurance when that likelihood is high.
As a result of persistent problems and high costs in the delivery of crop insurance to farmers, potential reform of the crop insurance program was a major focus in developing the 1990 farm bill. However, congressional and administration officials were unable to reach agreement on a design for the crop insurance program that fostered high participation, eliminated the need for expensive ad hoc disaster assistance legislation, and stayed within budget guidelines. Consequently, in the 1990 legislation the Congress reemphasized the need for the crop insurance program to achieve financial soundness by mandating that USDA raise the premium rates, where necessary. However, the Congress limited the increase for any farmer to no more than 20 percent per year. Continuing to be concerned about the losses in the crop insurance program, the Congress, in the Omnibus Budget Reconciliation Act of 1993, directed USDA to improve the crop insurance program’s financial condition. The act required USDA, by October 1, 1995, to lower the program’s projected losses (loss ratio) from an average of over $1.40 paid in claims for every $1.00 of premium taken in down to $1.10. In response to the legislation, USDA developed a blueprint explaining how it expected to improve the program’s financial condition by reducing losses to the level specified in the legislation. In October 1994, the Congress made additional changes. Under the Federal Crop Insurance Reform Act of 1994 (P.L. 103-354, Oct. 13, 1994, title I), the Congress combined the existing crop insurance program and the new catastrophic insurance program for which USDA pays the farmers’ premiums. By adding the catastrophic coverage, the Congress planned to eliminate the need for ad hoc, emergency disaster assistance for crop losses. This change should resolve the inherent conflict in the program between expanding participation and achieving financial soundness. The legislation also repeated the requirement that USDA lower the projected loss ratio to $1.10 in claims paid for every $1 in premiums on and after October 1, 1995. This requirement remains in effect through September 30, 1998; thereafter, the amount paid in claims must be reduced to $1.075 for every $1 in premiums. The act also specifically provided that USDA establish insurance rates that will fulfill the requirement for 1998. The estimated cost of the integrated program, according to USDA’s budget request for fiscal year 1996, is $2.1 billion, which will be partially offset by about $600 million in premiums paid by farmers. Thus, the net cost to the government is estimated at $1.5 billion. The estimated outlays consist of about $1.6 billion in payments of claims to farmers and $500 million for USDA’s and the insurance companies’ operating and delivery costs. In response to the 1993 legislation, USDA released its Blueprint for Financial Soundness on March 2, 1994. USDA described 18 initiatives intended to improve the financial stability of the crop insurance program. USDA had started most of these initiatives before the legislation was enacted. The initiatives most critical to promoting the success of the crop insurance program are setting appropriate rates, charging higher rates to high-risk producers, establishing accurate production levels, and setting appropriate deadlines for purchasing insurance. In September 1994, USDA contracted with the actuarial firm of Milliman and Robertson to perform an overall evaluation of its rate-setting process.
This review is expected to be completed by September 1996. The last comprehensive review of USDA’s rate-setting methodology was completed in 1983 by the same firm. Concerned about the financial condition of the crop insurance program, the Ranking Minority Member of the Senate Committee on Agriculture, Nutrition, and Forestry asked us to examine whether USDA (1) set insurance rates to achieve the legislative requirement of collecting premiums sufficient to cover 91 percent of the claims paid—termed “91-percent adequacy” in this report; (2) reduced the losses caused by high-risk farmers; (3) based claims payments on farmers’ normal production levels; and (4) set deadlines for farmers to purchase crop insurance before planting begins. These activities, taken together, substantially determine the program’s financial soundness. As part of our review, we examined an initial draft of USDA’s blueprint. On the basis of this analysis, we briefed crop insurance program officials on actions that we believed could be taken to reduce the program’s losses. In response, USDA added more specific time frames for accomplishing tasks. To determine the extent to which USDA’s premium rates for crop insurance were adequate under the legislative requirement, we met with crop insurance program officials at USDA’s headquarters in Washington, D.C., the Department’s main crop insurance field office in Kansas City, Missouri, and selected regional service offices. We reviewed USDA records and past studies to understand the Department’s actions to set premium rates. We also obtained USDA’s computer files for crop insurance to evaluate the adequacy of the rates. In addition, we interviewed insurance representatives from the private sector and reviewed insurance literature. We also reviewed previous reports by GAO and USDA’s Inspector General. For our review, we evaluated the adequacy of the premium rates for 1991-95 for six of the seven major crops insured by USDA. We selected these six crops for review because they were the largest programs for which USDA used the same methodology to set the rates. For 1994, the income from premiums for these six crops totaled about $721 million. For these six crops, the losses experienced were at about the same level—$1.37 compared with $1.41 in claims payments for each $1 in income—as in the overall program for the period 1981 through 1994. As shown in table 1.1, the six crops account for 74 percent of the claims paid and 76 percent of the premiums collected under the program. In the absence of a USDA annual or periodic evaluation showing how the rates it establishes each year compare with the rates that its historical data indicate are needed to pay future claims, we developed benchmark rates to measure the adequacy of USDA’s basic premium rates that it sets at the 65-percent coverage level and average production level. We developed the benchmark rates by generally following USDA’s methodology for setting premium rates. USDA uses the past 20 years’ claims experience to set its rates each year. USDA believes that the past 20 years’ claims experience provides the basis for setting rates each year that are needed to produce sufficient income from premiums to pay future claims. For example, if claims payments averaged $100 over the past 20 years and the insurance sold averaged $1,000 in coverage, the benchmark rates would be 10 percent of the amount of the insurance coverage sold, or $10 per $100 of coverage.
Although the future claims paid would vary from year to year, they would be expected to average about $100 per year. Thus, to achieve a rate that is 91 percent adequate, USDA would need to set the rate at $9.10. Following USDA’s methodology, we used 20 years of historical data for the insurance claims paid and insurance coverage sold to calculate a benchmark premium rate for each crop in each county and state. Because USDA sets its basic rates at the county level on the basis of the historical experience in the county and state, we calculated benchmark rates for each crop overall, weighting the county experience to the state crop, national crop, and national level (six crops combined). We then compared these benchmark rates with USDA’s basic premium rates for the year reviewed to assess the adequacy of USDA’s rates. Appendix II provides more detail on our methodology. Appendix III lists the results of our analysis by crop, state, and year. USDA applies mathematical factors to its basic rates to set rates for coverage and production levels above and below those used to set the basic rates. To determine the accuracy of these other rates, we compared the relative losses at the various levels over the period 1990 through 1994. Appendix II provides more detail on our methodology. To evaluate the effectiveness of USDA’s program to target high-risk farmers for individual rate increases, we identified the policyholders that USDA targeted for 1993, the most recent information available at the time of our analysis. We used USDA’s historical results from 1992 to estimate the reductions in claims and increases in premiums that would result from targeting high-risk farmers. Appendix IV provides more information on our methodology. To determine the effectiveness of USDA’s revised rules for estimating a farmer’s expected production level, we analyzed USDA’s experience for crop year 1994. We calculated the difference between the production level each farmer qualified for in 1994 and the production the farmer would have qualified for if the 1993 rules had continued. Appendix V provides more information on our methodology. To determine whether USDA’s deadlines for purchasing crop insurance were appropriate, we determined the extent to which USDA permitted farmers to purchase crop insurance after the planting period had begun. We compared the deadlines for purchasing insurance with the initial date USDA establishes for planting. We briefed USDA officials on our initial comparison, showing them that many deadlines needed to be set earlier. They included in their blueprint a plan for changing these deadlines. We compared these revised deadlines with the initial dates set for planting. We conducted our review from August 1993 through August 1995 in accordance with generally accepted government auditing standards. Although we did not assess the accuracy and reliability of USDA’s computerized databases, we used the same files that USDA uses to set its rates. For the six crops we reviewed, the basic premium rates are, on average, approaching the level necessary to achieve the legislative requirement of 91-percent adequacy. The basic rate is set, by county, for the 65-percent coverage level and the average production level for each crop. However, for certain crops in certain states, these basic rates remain too low. USDA has generally not raised rates sufficiently because it was concerned that higher rates would reduce sales of crop insurance. 
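The benchmark comparison can be restated compactly. The sketch below follows the $100-in-claims-per-$1,000-of-coverage example given earlier; the function names are ours, and the figures are those of the worked example rather than actual program data.

```python
def benchmark_rate(annual_claims, annual_liability):
    """Benchmark premium rate per $100 of coverage, following the report's
    approach: 20 years of claims paid divided by 20 years of coverage sold."""
    return 100.0 * sum(annual_claims) / sum(annual_liability)

def adequacy(usda_rate, bench_rate):
    """USDA's rate as a fraction of the benchmark; the legislative
    requirement works out to rates at least 91 percent of the benchmark."""
    return usda_rate / bench_rate

# The worked example from the text: claims averaging $100 a year against
# $1,000 a year in coverage give a $10 benchmark; a $9.10 rate is 91 percent.
claims = [100.0] * 20
liability = [1000.0] * 20
bench = benchmark_rate(claims, liability)   # -> 10.0 per $100 of coverage
print(bench, adequacy(9.10, bench))         # -> 10.0 0.91
```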
While the basic rates are approaching the 91-percent adequacy requirement, the rates for coverage higher or lower than the basic rates have not been set to ensure that premiums are aligned with risk. Most farmers purchase crop insurance coverage at these other rates. USDA has not adjusted the mathematical factors applied to the basic rates to calculate these other rates because of the time and resources required. However, USDA is currently reviewing these factors. Finally, while the program has been moving in the direction of adequate income to cover 91 percent of the claims paid, USDA recently made a decision that further calls into question the program’s ability to meet that requirement. USDA increased the benefits provided under the program’s “prevented planting” provision for crop year 1995 without first adjusting the premium rates. USDA acknowledges that this change will result in payments of up to $135 million in claims. According to our analysis of the basic premium rates USDA established for the six crops reviewed, the rates overall are nearly adequate to meet the Congress’s legislative requirement of charging premiums that are projected to cover at least 91 percent of claims—resulting in $1 in income from premiums for every $1.10 paid in claims. As figure 2.1 shows, USDA’s basic rates for the six crops reviewed were about 84 percent adequate overall in 1991, and this percentage increased slightly in the following years. The rates in 1994 and 1995 were just below the requirement of 91-percent adequacy. In 1995, the rates were 89 percent adequate, meaning that USDA should receive about $0.98 in income for every $1.10 in claims paid. While the overall basic rate is approaching the requirement of 91-percent adequacy for the six crops combined, the ultimate achievement of this requirement is being hampered because the basic rates for some crops are not adequate. Furthermore, the basic rates in many states are not adequate. USDA did not raise the rates for these programs as much as it could have because of concern that higher rates would discourage farmers from buying crop insurance. As table 2.1 shows, USDA’s basic premium rates for some crops in 1995 are still well below the 91-percent requirement. For the six crops reviewed, the rates for cotton and soybeans exceed the requirement of 91 percent, while the others fall short. In fact, the corn rates—accounting for 37 percent of the crop insurance business for the six crops—were the farthest from the requirement at 81 percent. This shortfall occurred because 1993 (when claims payments were very high) was added to the rolling 20-year database used for setting rates and 1973 (when claims payments were lower) was deleted, without a corresponding increase in the premium rates. Our analysis of the adequacy of the basic rates is consistent with USDA’s blueprint, which stated that only about 30 percent of the crops the Department analyzed met the required level of adequacy. The results of both our and USDA’s analysis depend heavily on the number of years included and the weight assigned to each year. For example, in 1983 USDA’s consultant suggested changing from the current methodology of giving equal weight to each year of the 20 years’ experience to giving greater weight to more recent years’ experience. Specifically, the consultant suggested assigning a 50-percent weight to the experience for the most recent 10 years and a 50-percent weight to the experience for all available years. 
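To make the consultant's suggested weighting concrete, the sketch below contrasts it with USDA's standard equal weighting of all 20 years. The loss-cost series is hypothetical; only the 50/50 weighting scheme comes from the text.

```python
def weighted_loss_cost(yearly_loss_costs):
    """Sketch of the consultant's suggested weighting: a 50-percent weight on
    the most recent 10 years' experience and a 50-percent weight on the
    experience for all available years. Input is ordered oldest to newest."""
    recent = yearly_loss_costs[-10:]
    all_years = yearly_loss_costs
    return 0.5 * (sum(recent) / len(recent)) + 0.5 * (sum(all_years) / len(all_years))

# Hypothetical loss costs per $100 of coverage: higher losses in the most
# recent decade pull the weighted rate above the equally weighted one.
history = [6.0] * 10 + [10.0] * 10
print(sum(history) / len(history))   # equal weighting of 20 years -> 8.0
print(weighted_loss_cost(history))   # 0.5 * 10 + 0.5 * 8          -> 9.0
```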
We found that the consultant’s approach had a significant impact on the premium rates for three crops. For soybeans, barley, and wheat, the adequacy was reduced from 94 to 87, 86 to 77, and 87 to 76 percent, respectively. The impact is greatest on these three crops because of changes in the level of losses that have occurred in the most recent 10 years. In response to our evaluation, USDA’s senior actuary for crop insurance told us that the Department will have its actuarial consulting firm evaluate whether the trend in losses in recent years requires a change in USDA’s methodology. He said this evaluation would be completed in late September 1995. For the 183 state crop programs we examined, only 54 had basic rates that were at least 91 percent adequate for 1995. These 54 programs were generally those that had the greatest volume of insurance. For the remaining 129 programs, 40 were approaching 91-percent adequacy—ranging from 80 to just under 91 percent. The other 89 programs, representing about 24 percent of the crop insurance premiums for the six crops in 1994, had basic rates that were less than 80 percent adequate. As table 2.2 shows, many of these 89 programs had not charged adequate rates for the entire 1991-95 period. As the table also shows, the size of these state programs varied significantly, from as low as $100 in premium income annually to as much as $61 million. The increase in the number of programs that are less than 80 percent adequate for 1995 resulted in part from the addition of large corn programs in four states, totaling about $119 million in premiums. These four programs had been 80 percent or more adequate—often more than 90 percent adequate—in 1991-94, but this percentage dropped dramatically in 1995. This drop occurred because (1) the severe losses in 1993 were added to the historical database for establishing the 1995 rates and a year from the 1970s when losses were lower was deleted and (2) USDA did not increase the rates as much as it could have for corn in these four states. Our analysis of the adequacy of the basic rates is consistent with USDA’s blueprint, which stated that some areas of the country met the legislative requirement while others did not. For the state crop programs that were less than 80 percent adequate, USDA often did not sufficiently increase the basic rates where necessary. Rates that are less than 80 percent adequate in any year would require an increase of at least 14 percent to reach the 91-percent adequacy requirement (91 percent divided by 80 percent is about 1.14). USDA did not always raise the rates sufficiently even though most of the increases imposed were less than the 20-percent statutory maximum. USDA increased the rates most in 1992 and least in 1993, as shown in figure 2.2. In 1992, USDA increased 71 percent of the rates for state crop programs by 10 percent or more. In contrast, in 1993 USDA increased the rates for only 10 percent of the state crop programs by 10 percent or more, while increasing the rates for 68 percent of the state crop programs by less than 5 percent. For 1995, USDA again moved toward greater increases by raising the rates for 59 percent of the state crop programs by 10 percent or more. USDA has not sufficiently raised rates out of concern that higher rates will discourage farmers from buying crop insurance. For example, in 1994 the crop insurance program manager testified that USDA did not want to cause “sticker shock” and drive away the farmers who are buying crop insurance.
He said that USDA was “trying to raise rates in a relatively gentle way—10 percent instead of 20 percent a year—to phase them in.” Similarly, USDA’s blueprint stated that increasing the rates to the levels suggested by experience in the most recent 20 years may not be good public policy and “extremely high premium rates will preclude realization of the social benefits and public policy goals of the program because participation will be discouraged.” We recognize that rate increases could cause some farmers to limit their insurance coverage to the free catastrophic insurance program because they conclude that the additional insurance program is not to their financial advantage. However, as long as USDA sets rates that are less than 91 percent adequate, it will not have the premium income necessary to ensure that it meets the legislative requirement of $1 in premiums for each $1.10 in claims paid. Furthermore, USDA does not routinely evaluate and report on the adequacy of its rates. As a result, USDA does not calculate the expected shortfall between the income from premiums and the claims paid. While establishing appropriate basic rates is critical to the financial condition of the crop insurance program, the majority of all insurance is purchased at rates for coverage and production levels that are above or below those covered under the basic rates. For this insurance, our analysis showed that in relationship to the basic rates, the rates are too high for coverage at the 75-percent level and too low at the 50-percent level, too low at the higher levels of production, and either too high or too low for the lower levels of production, depending on the crop. As a result, the rates for both coverage and production levels are not aligned with risk. This occurs because USDA does not periodically review and update the calculations it uses to adjust rates above and below the basic rate. To set the rates for the 75-percent and 50-percent coverage levels, USDA applies preestablished mathematical factors to the basic rate. However, these factors have not resulted in rates that are aligned with risk. According to our analysis, the rates were too high at the 75-percent coverage level and too low at the 50-percent coverage level in relationship to the basic rates. For crops insured at the 75-percent coverage level, USDA set premium rates ranging from 19 to 27 percent more than required. (See table 2.3.) As a result, the 1994 income from premiums was about $30 million more than required for this coverage. Although grain sorghum had the greatest percentage of rates in excess of those required, corn had the greatest amount of additional premium income because the program was much larger. For crops insured at the 50-percent coverage level, the rates were about 11 percent too low, resulting in a shortfall in premium income of about $3 million for crop year 1994. The impact was much less than at the 75-percent coverage level because only about $30 million in insurance was sold at the 50-percent coverage level. However, the potential impact of setting rates too low for the 50-percent coverage level is much greater for future years. Beginning in crop year 1995, USDA provided free catastrophic insurance to farmers at the 50-percent coverage level. In its fiscal year 1996 budget request, USDA estimated that it will need $350 million to cover its costs to pay these premiums for all crops. 
Assuming the six crops we reviewed represent about 75 percent of the free insurance provided—the proportion of the program they have historically represented—then about $263 million of USDA’s estimate is for these crops. Since the 50-percent coverage rate is 11 percent too low, USDA’s budget request could be understated by about $29 million. USDA also adjusts the basic rates for production, set at the county average, for farmers whose historical production level is above or below the county’s average. As with the varying rates for coverage, however, these adjustments do not result in rates that accurately reflect the risk involved at each production level. Specifically, according to our analysis the rates are too low for all crops at the higher production levels and too high for some crops at the lower production levels. The net effect is that premium income is too low. The greatest dollar shortfall resulting from these problems occurred in the cotton and corn programs. USDA’s basic rate applies to the farmer whose average production is about equal to the average for all producers in the county. However, many farmers’ average production is above or below the county’s average, and USDA’s research shows that the higher a farmer’s production level, the lower the chance of a loss. Therefore, USDA establishes rates for different production levels using a mathematical model that sets rates according to preestablished relationships between production levels. The rates per $100 of insurance coverage decrease as a farmer’s average production increases. The mathematical model USDA applies to the basic rate to calculate rates for production levels higher and lower than the coverage under the basic rate does not result in correct rates. For above-average production, USDA’s rates should have been from 13 to 33 percent higher than currently set. As shown in table 2.4, USDA needed an additional $55 million in premium income in 1994 for the six crops. Although barley would have required the greatest percentage increase in premiums, cotton required the greatest amount of additional premiums because the cotton program is much larger. At below-average production, premiums were about evenly split for 1994 between crops with rates higher or lower than needed. Overall, as shown in table 2.5, the premiums were only slightly too low for this group. As our analysis shows, the inaccurate rates had the greatest impact on income from premiums for cotton (a net shortfall of about $16 million) and for corn (a net shortfall of about $22 million). For cotton, this shortfall occurred because farmers were allowed to insure their crop at production levels higher than their historical production levels, according to USDA officials. As a result, a greater volume of insurance was sold at higher production levels than was warranted on the basis of the farmers’ experience. Beginning in 1994, USDA changed the requirements for calculating farmers’ production levels so that the amount of production insured would be more closely aligned with the farmers’ actual production history. According to USDA officials, this change should result in cotton farmers’ purchasing insurance at reduced production levels. However, as with other types of crops, USDA does not require cotton farmers to decrease the amount of production coverage by more than 10 percent per year until their coverage coincides with their actual production experience. 
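The budget arithmetic above for the 50-percent coverage level follows directly from the figures in the text. A short sketch, using only those reported figures (the six-crop share, the 11-percent rate shortfall, and the $350 million estimate):

```python
# Reproducing the report's arithmetic (dollar amounts in millions).
catastrophic_budget = 350.0   # USDA's fiscal year 1996 estimate, all crops
six_crop_share = 0.75         # the six crops' historical share of the program
rate_shortfall = 0.11         # 50-percent coverage rates are about 11% too low

six_crop_cost = catastrophic_budget * six_crop_share  # about $263 million
understatement = six_crop_cost * rate_shortfall       # about $29 million
print(six_crop_cost, understatement)                  # -> 262.5 28.875
```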
For corn, the shortfall occurs because the rates for production above and below the basic rates were both too low, according to our analysis. This situation indicates that the mathematical model is not appropriate for corn. The misalignment of rates with risk occurs because USDA has not revised the factors it applies to the basic rate to arrive at different coverage and production levels. USDA officials told us that they had not had the time and resources to revise the factors since they were established in the 1980s. Moreover, USDA’s senior actuary told us that they have not developed a plan for how often the factors ought to be evaluated and updated. Nonetheless, these officials said they were working to improve their capability to set rates. USDA is changing its computer database to enable it to more easily evaluate the crop insurance program’s past performance and set new rates. This effort is expected to be completed in time for setting the 1997 crop rates. In September 1994, the Department contracted with an actuarial consulting firm to evaluate its factors for adjusting basic rates to other coverage levels. In addition, in response to our analysis of rates for production levels, USDA’s senior actuary said the Department will have the consulting firm evaluate the accuracy of its mathematical model and recommend any specific changes needed. While USDA is taking a number of actions to improve the crop insurance program’s rate structure, it recently made a decision that will weaken the program’s financial condition. For 1995, USDA increased the benefits provided under the prevented planting provision of the crop insurance program. This decision will increase the claims paid by at least $135 million for 1995, according to USDA’s estimates. Under the prevented planting provision, included in crop insurance policies beginning in crop year 1994, farmers who could not plant crops because of adverse weather conditions could receive insurance payments at 50 percent of the insurance coverage level they purchased. In June 1995, USDA expanded the coverage to 75 percent, for crop year 1995 only. In addition, for 1995, farmers who could not plant the crop they insured but were able to plant a different crop will receive 25 percent of their coverage level in insurance payments, whereas in the past, they would not have received any insurance payments. USDA’s internal decision memorandum acknowledged that the expanded coverage “. . . could arguably be seen as stretching the statute’s requirement that Federal Crop Insurance Corporation . . . cannot make changes which adversely impact actuarial soundness and must achieve a loss ratio of 1.10 by October 1, 1995.” In advising on this decision, USDA’s Office of General Counsel said that in determining rates and coverages, the manager of the program “should make the specific determination that the action will not adversely affect the ‘actuarial soundness’ of the program.” Despite this advice, the decision memorandum recommended the change, while recognizing its increased cost to the program. USDA’s Acting Deputy Administrator for Risk Management said that USDA’s decision was based on broader policy concerns that had to be considered along with actuarial concerns. As crop year 1995 progressed, many farmers were prevented from planting the crop they had insured and were uncertain about the benefits. USDA believes that farmers were confused about the program’s requirements and restrictions because of the rapid expansion of the crop insurance program in crop year 1995.
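The prevented planting provisions described above can be sketched as follows. This is an illustrative reading only: the 50-percent figure is the general 1994 level described in this chapter (the provision actually ranged from 35 to 50 percent depending on the crop), and the function name and dollar amounts are hypothetical.

```python
def prevented_planting_payment(coverage_purchased, crop_year, planted_other_crop=False):
    """Sketch of the prevented planting provision as described in the text:
    50 percent of purchased coverage in crop year 1994, raised to 75 percent
    for crop year 1995 only; for 1995, farmers who planted a different crop
    receive 25 percent (previously nothing)."""
    if crop_year == 1994:
        return 0.0 if planted_other_crop else 0.50 * coverage_purchased
    if crop_year == 1995:
        return (0.25 if planted_other_crop else 0.75) * coverage_purchased
    raise ValueError("rules sketched only for crop years 1994 and 1995")

# Hypothetical $150-per-acre coverage: the 1995 change raises the payment
# from $75 to $112.50 per acre, and from $0 to $37.50 when a substitute
# crop was planted.
print(prevented_planting_payment(150.0, 1994))
print(prevented_planting_payment(150.0, 1995))
print(prevented_planting_payment(150.0, 1995, planted_other_crop=True))
```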
Moreover, in offering prevented planting coverage for the first time in crop year 1994, USDA recognized that changes would be required in future years as it gained experience with this provision. Also, USDA concluded that it needed to correct an inconsistency in its coverage that resulted in three different levels of claims payments for farmers similarly affected by excessive moisture. USDA was concerned that if changes were postponed, farmers might not accept the new crop insurance program and might call upon the Congress to revise it. Therefore, USDA concluded that changes to the prevented planting program were needed immediately. According to the decision memorandum, the increased claims payments were to be recovered beginning in crop year 1996. However, according to USDA’s Office of General Counsel, the governing legislation does not permit USDA to set premium rates to recover past losses. Instead, USDA can set rates only to cover anticipated future claims payments. Therefore, USDA intends to include the $135 million in claims payments in the historical database that it uses to calculate future premium rates to cover estimated future claims payments. USDA has taken steps to improve the overall financial condition of the crop insurance program for the six crops we reviewed by raising the program’s basic premium rates. On average, the basic rates are approaching the 91-percent adequacy requirement the Congress set for the program. However, this overall improvement masks some serious problems in the basic rates set for some crops and in some states. USDA recognizes the need to raise the basic rates, and it plans to review its weighting methodology to ensure that the basic rates are accurate. At the same time, because of concerns that farmers would stop purchasing crop insurance, USDA has failed to raise the basic rates promptly to ensure achievement of 91-percent adequacy. Keeping farmers in the program is a legitimate goal. However, without sufficient increases in the basic rates, the legislative requirement cannot be met. Currently, USDA’s management and the Congress cannot project the program’s losses because USDA does not annually evaluate and report on the adequacy of the basic rates. Until that is done, USDA and the Congress will be unable to routinely know whether the program is meeting its legislative requirement, and, if not, what adjustments need to be made to the basic rates. In addition to the problems with the basic rates, USDA has not adjusted the factors applied to the basic rates to arrive at accurate rates for coverage and production levels different from those covered by the basic rates. Most purchases of crop insurance occur at these other levels. This lack of accurate rates benefits some farmers and penalizes others: Farmers pay too much for coverage at the higher coverage level and too little at the lower coverage level. Similarly, farmers pay too little for production levels above average and too much or too little for production levels below average, depending on the crop. Ultimately, the crop insurance program loses money. USDA has recognized that these rates will continue to be incorrect because the mathematical factors it uses to set them are incorrect. USDA officials stated that they have not had the time and resources to periodically evaluate these factors. In response to our analysis, USDA officials are evaluating the mathematical factors to determine what changes are needed. 
However, these officials are not developing a plan to periodically reevaluate whether these factors continue to result in correct rates. Finally, the difficulty in achieving the legislative requirement has been compounded by USDA’s recent program policy decision to increase the coverage for prevented planting, even though USDA’s Office of General Counsel advised against it. This decision added an estimated $135 million in claims payments that were not and cannot be recovered through premium rates because the governing legislation prohibits it. The prohibition raises further doubts about whether USDA’s decision to increase prevented planting levels was appropriate. If the Congress wants to ensure the financial viability of the crop insurance program, it may wish to prevent USDA from making program policy decisions that are not funded under the crop insurance program’s rate structure. To do so, the Congress would need to amend the Federal Crop Insurance Reform Act of 1994 to specifically prohibit the Secretary of Agriculture from making policy decisions that increase benefits without first increasing the rates to cover the anticipated claims. To meet the 1994 legislative requirement that USDA reduce losses and set premiums to cover 91 percent of the claims paid, we recommend that the Secretary of Agriculture direct the Deputy Administrator for Risk Management to take the following actions:
- Annually raise premium rates up to the 20 percent authorized by the Congress, if needed, to cover future claims under the legislative requirement of 91-percent adequacy. As part of this rate-setting process, the Deputy Administrator should report the expected adequacy of premium rates each year, by crop and by state, so that USDA’s management and the Congress can be kept informed of the program’s financial condition. If the rates are not raised as required, USDA should include in its annual report the estimated cost of subsidizing farmers’ purchase of crop insurance in areas where the rates are inadequate.
- Develop and implement a plan for periodically evaluating the mathematical factors used to set coverage and production levels above and below the basic rates to ensure that these factors continue to result in correct rates.
USDA made a number of comments on our findings and conclusions. Overall, USDA agrees with our conclusion that the basic premium rates for the 1995 crop year are 89 percent adequate. However, USDA believes the program’s financial soundness has been improved even more than these rates suggest when the other changes, such as increasing the premiums of high-risk farmers and improving the calculation of farmers’ insured production levels, are taken into account. In addition, USDA noted that our analysis does not reflect the likely influence of the rates for crop year 1996 on rate adequacy. USDA believes its policy of gradual increases, coupled with the slightly lower rates indicated for 1996 by the 20 years of experience used to set them, should bring the 1996 rates closer to the level required. USDA believes that its actions, in combination, should bring the 1996 crop insurance rates closer to 91-percent adequacy. We recognize that some of the changes to the crop insurance program discussed in this report are improving the program’s financial condition. We also recognize that the estimated savings from these changes, as well as the excess premiums for the 75-percent coverage level, may come close to offsetting the shortfalls in premiums that we have identified.
However, when the $135 million shortfall resulting from the prevented planting decision is included, the net shortfall for the program as a whole is substantial. In addition, we cannot determine the extent to which the 1996 premium rates will further improve the program’s financial soundness because they are still being developed. However, in response to USDA’s point that the required 1996 premium rates will be more adequate because of the change in the rolling 20-year database on which the rates are based, we estimate that this change could raise the adequacy of the rates. This assumes that the 1996 rates would, on average, be at least as high as the 1995 rates. However, we estimate that the rates could still be less than adequate for some crops unless the rates are increased. For example, we estimate that the rate for corn would be 87 percent adequate without a rate increase, while the rate for wheat would be 85 percent adequate. USDA recognized that its decision to increase prevented planting coverage for crop year 1995 added to the program’s overall exposure without a matching adjustment to 1995 premium rates, as our report states. However, USDA said the report should recognize that its decision was based on broad policy concerns that farmers were suffering. We recognize USDA’s position, but we still believe that decisions with this magnitude of impact on the program’s financial soundness should be made with congressional consultation. USDA disagreed with our recommendation that it raise rates by up to the 20 percent authorized by legislation when needed but agreed with our recommendation that an annual report showing the expected adequacy of premium rates each year by crop and state was feasible. It did not, however, clearly state whether it would prepare such a report. With respect to our recommendation that rates be raised by 20 percent, USDA repeated its position, which we noted earlier, that raising rates up to the maximum authorized should not be a standard practice because abrupt increases may discourage farmers from purchasing crop insurance. While we also recognize this possibility, as we previously stated, unless rates are raised as much as allowed when needed, the premium rates for many crop programs will continue to fall short of the legislative requirement of 91-percent adequacy. USDA also had several comments on a proposed recommendation in a draft of our report that it report to the Congress as a part of its budget request on the additional funds the program would need to subsidize farmers’ purchase of crop insurance when the rates are inadequate. USDA questions whether this requirement should be a part of the budget process because of the overlap in the preparation of crop-year rates and fiscal year budget requests. Instead, USDA believes that such information could appropriately be included in an annual report to the Congress. We believe USDA’s view has merit and have revised our recommendation accordingly. Beyond establishing a sound overall structure for premium rates, aligning these rates with risk requires USDA to charge higher rates to the individual farmers who present the highest insurance risk. To accomplish this, USDA has instituted a program to identify those farmers with frequent and substantial claims so that it can increase their premiums and/or reduce the production levels they can insure. Without this program, the overall rates would have to be raised more, thereby penalizing lower-risk farmers. 
This in turn would make lower-risk farmers less likely to purchase crop insurance and contribute to reducing the program’s financial stability. USDA’s program for targeting high-risk farmers for rate increases is generally sound and will reduce the government’s outlays for crop insurance, although not by as much as USDA estimated. The Department implemented the high-risk program in 1991 to reduce the high losses associated with some farmers in the crop insurance program. Over the period 1981 through 1989, USDA had found that about 6 percent of the policies accounted for about 28 percent of the total claims paid. The high-risk program improves the crop insurance program’s financial soundness by (1) reducing the production levels at which high-risk farmers are insured and/or (2) charging high-risk farmers increased rates that are more in line with their claims history. To be placed in this high-risk program, a farmer must
- have received claims payments in at least 3 years or, if information on more than 5 years’ experience is available, in 60 percent of the years;
- have had a cumulative adjusted loss ratio of about 4.0 or more (i.e., $4 or more in claims paid for each $1 in premiums); and
- require a rate increase of at least 10 percent from the previous year.
USDA has expanded the high-risk program from one crop in 1991—soybeans—to 37 crops in 1995. By 1993, the program included 11 crops that accounted for about 90 percent of the crop insurance purchased. In addition, in response to its 1994 appropriations legislation, USDA developed a modified high-risk program for counties where losses were high. These were counties that had paid out more than $1.10 in claims for each $1 in premiums in 70 percent of the years (1980-92) in which the crop program was offered. To be placed in this program, a farmer in these counties must
- have received claims payments in at least 3 years or, if information on more than 5 years’ experience is available, in 60 percent of the years;
- have had a cumulative adjusted loss ratio of about 2.25 or more (i.e., $2.25 or more in claims payments for each $1 in premiums); and
- require a rate at least 10 percent higher than would have otherwise been charged.
USDA’s plan for targeting high-risk farmers will reduce the government’s outlays for the crop insurance program, although not by as much as the Department had originally estimated. According to USDA’s blueprint, the high-risk program will reduce crop insurance claims from an average of $1.40 in claims for every $1 in premiums to an average of between $1.30 and $1.35. USDA estimated that the program would result in savings of about $70 million for crop year 1993. However, we estimated savings from the program of about $33 million for 1993. Our estimate is lower because we based it on the actual program USDA implemented in 1992 and 1993, which did not include as many farmers as the Department’s estimate assumed. USDA’s estimate was based on its original plan to select 2 percent of the policyholders. In practice, however, after changing its targeting criteria in 1992 and 1993, USDA selected only 1.5 percent. Furthermore, over one-third of those identified had already ceased buying crop insurance before being selected for the program. Therefore, about 1 percent of all policyholders were included in the program. (App. IV contains a more detailed discussion of our calculations and methodology.) Additional savings may not be significant after the first year that farmers are included in the high-risk program.
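The standard high-risk screening criteria listed above translate into a simple test. The sketch below is our restatement of those criteria, not USDA's actual selection procedure; the thresholds follow the text (the modified county program would use 2.25 in place of 4.0).

```python
def is_high_risk(years_with_claims, years_of_experience, adjusted_loss_ratio,
                 required_rate_increase):
    """Restatement of the standard high-risk criteria described above:
    claims in at least 3 years (or 60 percent of the years when more than
    5 years' experience exists), a cumulative adjusted loss ratio of about
    4.0 or more, and a required rate increase of at least 10 percent."""
    if years_of_experience > 5:
        frequent_claims = years_with_claims >= 0.60 * years_of_experience
    else:
        frequent_claims = years_with_claims >= 3
    return (frequent_claims
            and adjusted_loss_ratio >= 4.0
            and required_rate_increase >= 0.10)

# Hypothetical farmer: claims in 5 of 7 years, $4.50 paid per $1 in
# premiums, and a 12-percent rate increase indicated -> targeted.
print(is_high_risk(5, 7, 4.5, 0.12))  # True
```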
Most of the savings are realized in the first year, when many high-risk farmers choose to stop purchasing crop insurance rather than pay the higher rates. For those who remain, most of the rate increases occur in the first year; the rate increases in succeeding years are similar to those for other farmers. For example, for farmers who remained in the program after being targeted in 1992, the premiums paid averaged 67 percent more in 1992 than in 1991 but only 7 percent more in 1993 than in 1992. To ensure that the crop insurance program meets the congressional requirement of paying no more than a projected $1.10 in claims for each $1 in premiums received, USDA needs reasonable estimates of farmers' normal production. This information helps ensure that farmers do not purchase insurance for production levels higher than they are likely to achieve and, as a result, make claims for production losses that are not real. To achieve this objective, USDA has recently changed the way it establishes farmers' production levels to align them more closely with actual production history. USDA's action should reduce the government's outlays, although not as much as USDA had anticipated. However, this change has a critical weakness: USDA does not require that loss adjusters verify the accuracy of the production history supplied by farmers and therefore lacks assurance that it is insuring production at the appropriate level. According to USDA's blueprint, the level of production insured may be the single most important factor in determining the success or failure of the crop insurance program. The insured production level is key because it forms the basis for calculating insurance premiums and payments on claims. Consequently, a production level that is too high compared with the productive potential of the farmer and the land will increase the frequency and amount of a farmer's claims. Conversely, a production level that is too low will not effectively protect farmers from loss and, because the coverage would be regarded as insufficient, will discourage farmers from buying insurance. Before crop year 1994, farmers could base the level of production for which they purchased crop insurance on 10 years' actual production or, for those years for which they did not report actual production, on a modified average production level for the county. USDA concluded that the option of basing a production level on a modified county average was adversely affecting the crop insurance program's financial condition. This option benefited farmers whose production was below the modified county average: it enabled them to get a higher level of production coverage than their historic production levels would have warranted. As a result, some farmers may have paid lower premiums than they should have and received claim payments that exceeded what their historic production levels would have warranted. To address this problem, in crop year 1994 USDA began penalizing farmers who did not have at least 3 years of production history. The revised rules should discourage farmers from using the modified average production level for the county and encourage them to provide their actual production history.
Under the revised rules, USDA uses

- 65 percent of the modified average production level for the county for 4 years if the farmer reports no actual production,
- 80 percent of the modified county average for 3 years if the farmer reports actual production for 1 year,
- 90 percent of the modified county average for 2 years if the farmer reports actual production for 2 years, and
- 100 percent of the modified county average for 1 year if the farmer reports actual production for 3 years.

After the first 4 years, the level of production that can be insured is the simple average of the actual production reported for up to 10 years. The actions USDA has taken to revise production levels will reduce the government's outlays, but not by as much as it estimated. USDA estimated in its blueprint that its actions would reduce crop insurance claims over time from $1.40 for every $1 in premiums to between $1.25 and $1.30. This estimate equates to a savings of between $75 million and $113 million annually. However, we estimated that these savings would be about $44 million for crop year 1994. Our estimate differs from USDA's primarily because USDA limited any reduction in a farmer's insured production level to no more than 10 percent annually. In addition, with the change in the calculation of production levels, farmers with 4 to 8 years of production history had increases in their production levels. (App. V contains a detailed discussion of our methodology for calculating the savings.) Although USDA recognizes the importance of accurate production levels to the program's integrity, it does not require that loss adjusters verify the production history provided by farmers. Therefore, USDA cannot be assured that it is paying claims accurately. USDA allows farmers to certify the production level that they insure and requires them to retain records supporting their certified production level for 3 years. However, USDA does not require loss adjusters to verify the accuracy of the production levels supplied by farmers. Over the years, we and USDA's Office of Inspector General have consistently found that USDA's process for verifying production histories has been inadequate. In 1988, we reported that USDA did not have adequate procedures to ensure that farmers' reported production levels were accurate. According to our analysis of USDA's data, 37 percent of the production levels examined were inaccurate, largely because of inaccurate certifications by farmers. Therefore, we recommended that, for each claim, USDA require loss adjusters to verify the production data supporting the production level insured. We noted that such verifications could be minimized by spot-checking the supporting data for a farmer's production level for some, rather than all, years. Likewise, in 1989 USDA's Inspector General found inaccurately reported production levels in about half of the cases reviewed and recommended that USDA require review of the production levels for each claim until an acceptable error rate was achieved. Despite these recommendations, USDA has not established an acceptable error rate and does not require verification. USDA's changes in the way it calculates farmers' production levels should improve the program's financial condition because the revised methodology will result in more accurate estimates of a farmer's expected production.
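The phase-in schedule at the start of this discussion lends itself to a simple lookup. The sketch below is one plausible reading of the rules, ours rather than USDA's published procedure: unreported years in a 4-year average are filled with the applicable percentage of the modified county average, and after the first 4 years the insurable level is a simple average of up to 10 years of actual production.

```python
def insured_production_level(actual_yields, county_modified_avg):
    """Illustrative calculation of the production level a farmer may
    insure under the crop-year 1994 rules described above. actual_yields
    holds the yields the farmer has reported; county_modified_avg is the
    modified average production level for the county. The names and the
    averaging mechanics are our assumptions."""
    # Percentage of the county average used to fill unreported years
    # during the first 4 years, by number of years actually reported.
    plug_factor = {0: 0.65, 1: 0.80, 2: 0.90, 3: 1.00}
    reported = len(actual_yields)
    if reported >= 4:
        # After the first 4 years: simple average of actual production,
        # using up to 10 years of history.
        recent = actual_yields[-10:]
        return sum(recent) / len(recent)
    plug = plug_factor[reported] * county_modified_avg
    filled = actual_yields + [plug] * (4 - reported)
    return sum(filled) / 4

# A farmer reporting 2 years of actual yields (30 and 32 bushels) in a
# county with a modified average of 40 bushels:
print(insured_production_level([30, 32], 40))  # (30 + 32 + 36 + 36) / 4 = 33.5
```

The sketch omits the 10-percent annual cap on reductions in insured levels, which is discussed next.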
However, USDA will not get the full short-term benefit it anticipated from the change because it limited the reduction in a farmer's insured production level to no more than 10 percent annually. Therefore, farmers can continue for some time to insure at production levels higher than their experience justifies. Moreover, a long-standing problem that could erode the benefit of more accurate production levels is that USDA does not require verification of production histories when claims are adjusted. We believe this problem needs to be addressed. We recommend that the Secretary of Agriculture direct the Deputy Administrator for Risk Management to take the following actions:

- Remove the 10-percent annual limit on reductions in farmers' insured production levels so that the level of production insured is aligned with the farmers' actual production history. If USDA retains the limit, it should include in an annual report to the Congress the estimated cost of subsidizing the additional losses that will be incurred.
- Require that the production history provided by farmers be verified when claims are adjusted.

In commenting on our recommendation concerning the 10-percent limit on reductions in farmers' insured production levels, USDA recognizes that there is a cost associated with its policy of limiting reductions in insured yields. However, USDA believes that this policy provides a “gentle landing” for farmers who have recently suffered severe losses, rather than an abrupt drop in their coverage. USDA also agreed that it would be workable to report to the Congress annually on the estimated cost of subsidizing the additional losses resulting from this policy. Concerning our recommendation aimed at improving the verification of production history, USDA agrees that it needs to look at ways to better ensure that it is obtaining an adequate number of verifications. However, USDA believes that it needs time to identify the most appropriate point in the process for such verification. USDA plans to consult with the companies with which it has insurance contracts and arrive at a workable verification plan by May 31, 1996. Until recently, USDA allowed farmers to purchase crop insurance after they knew whether early growing conditions, such as the amount of moisture in the subsoil, might result in poor production. Such late purchasing deadlines increased the likelihood that those who bought insurance during the planting period would file claims. In recognition of the importance of purchasing deadlines to the crop insurance program's financial soundness, 1994 crop insurance legislation required USDA to set deadlines for the 1995 crop year that were 30 days earlier than those for 1994. This was to prevent farmers from buying crop insurance close to or in the planting period, when they can better evaluate the probability of a loss. While the revised deadlines will reduce the extent of this problem, the underlying weakness in USDA's approach to setting these deadlines remains. The legislation built on the proposal USDA included in its blueprint for setting the last date for purchasing insurance for the 1995 crop year 15 to 30 days earlier than it had in 1994. USDA's proposal was in response to our analysis of three crops in 111 crop-producing areas. For 33 percent of these areas, USDA allowed insurance sales to continue well into the planting period (8 to 60 days past the initial planting date).
In another 32 percent of these areas, the deadlines for purchasing insurance were near the start of the planting period (up to 7 days before or after USDA's initial planting date). Although USDA moved the deadline for new purchases of crop insurance 30 days earlier in the year, it generally did not move the related deadline for canceling insurance. The cancellation deadline is the last date on which current insurance purchasers may cancel their coverage before it continues in force for another year. Before crop year 1995, the purchasing and cancellation deadlines fell on the same date, as would be expected. By not moving the cancellation deadline, USDA allows many current purchasers to decide whether to renew or cancel their crop insurance coverage well into the planting period. USDA officials explained that they could not change the cancellation date without first publishing the proposed change for comment in the Federal Register. The officials said that they were in the process of making this change and expected to have revised all cancellation dates by crop year 1997. While USDA has set its purchasing deadlines 30 days earlier in the year, as the legislation required, it has not addressed the underlying problem—these deadlines do not reflect actual planting conditions. Instead, the dates have historically been set, and continue to be set, to ease the administration of the crop insurance program. The dates are set for a several-state area rather than for local growing conditions. Specifically, USDA has two principal purchasing deadlines for spring-planted crops and two for fall-planted crops. These deadlines have historically fallen within the planting period for many crops in many areas. Moreover, USDA does not have written procedures or criteria for its field offices to follow in reviewing and updating the purchasing deadlines on the basis of the planting dates in each crop-growing region. Consequently, the revised national deadlines correct the situation we identified in most cases, but not all: 12 percent of the areas we reviewed—compared with 66 percent formerly—still had purchasing deadlines that extended into the planting period. Without establishing a procedure for routinely reviewing and updating these deadlines on the basis of planting practices in each region, USDA will continue to have some deadlines that extend into the planting period. Moreover, USDA does not record crop insurance sales dates in its database. Therefore, it cannot evaluate the relationship between the claims paid and the number of days before the planting period that the insurance was purchased. USDA has improved the financial condition of the crop insurance program by moving purchasing deadlines 30 days earlier in the year. However, by not routinely setting these deadlines by crop-growing region, USDA enables some farmers to better evaluate growing conditions and increases the likelihood that they will purchase crop insurance when growing conditions are poor. As a result, USDA increases the probability of a shortfall—that claims paid will exceed $1.10 for each $1 in premiums. Furthermore, by not recording purchase dates in its database, USDA cannot adequately evaluate the relationship between the claims paid and the number of days before the planting period that insurance was purchased.
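The routine review recommended below could be as mechanical as comparing each area's purchasing deadline with its initial planting date. A minimal sketch, with hypothetical area records:

```python
from datetime import date

def deadlines_needing_review(areas):
    """Flag areas whose purchasing deadline falls on or after the
    initial planting date. Each record is hypothetical:
    (area name, purchasing deadline, initial planting date)."""
    return [name for name, deadline, planting_start in areas
            if deadline >= planting_start]

areas = [
    ("Area A", date(1995, 3, 15), date(1995, 4, 1)),   # deadline precedes planting
    ("Area B", date(1995, 5, 10), date(1995, 4, 20)),  # 20 days into planting: flag
]
print(deadlines_needing_review(areas))  # ['Area B']
```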
We recommend that the Secretary of Agriculture direct the Deputy Administrator for Risk Management to

- set purchasing deadlines before the initial planting date in all areas of the country and establish criteria and procedures for routinely reviewing these deadlines to ensure that they continue to occur before initial planting dates and
- record the date that insurance is purchased in order to better evaluate the relationship between purchasing deadlines and claims payments.

USDA agrees with our recommendation that it set purchasing deadlines before the initial planting date in all areas of the country and establish criteria and procedures for routinely reviewing these deadlines. However, USDA noted that the legislative requirement to move all purchase deadlines 30 days earlier for crop year 1995 resulted in some purchase deadlines being too early and inconsistent.
Pursuant to a congressional request, GAO examined whether the Department of Agriculture (USDA): (1) set insurance rates to achieve the legislative requirement of 91-percent adequacy; (2) reduced the losses caused by high-risk farmers; (3) based payments to farmers for claimed losses on their actual production history; and (4) set deadlines for farmers to purchase crop insurance before planting their crops. GAO found that USDA: (1) has improved the overall financial condition of the crop insurance program by raising the premium rates, but the basic rates still do not meet the requirement of 91-percent adequacy set by Congress; (2) sets higher rates for high-risk farmers to help reduce the government's losses; (3) has made changes to more accurately calculate farmers' production levels based on their historical experience; and (4) generally sets the same deadline for an area covering several states rather than considering the local growing conditions, and as a result some farmers are able to more precisely evaluate growing conditions at planting time and are more likely to purchase crop insurance when growing conditions are poor.
Wildland fire triggered by lightning is a normal, inevitable, and necessary ecological process that nature uses to periodically remove excess undergrowth, small trees, and vegetation to renew ecosystem productivity. However, various human land use and management practices, including several decades of fire suppression activities, have reduced the normal frequency of wildland fires in many forest and rangeland ecosystems and have resulted in abnormally dense and continuous accumulations of vegetation that can fuel uncharacteristically large and intense wildland fires. Such large, intense fires increasingly threaten catastrophic ecosystem damage as well as human lives, health, property, and infrastructure in the wildland-urban interface. Federal researchers estimate that vegetative conditions that can fuel such fires exist on approximately 190 million acres (more than 40 percent) of federal lands in the contiguous United States, although estimates range from 90 million to 200 million acres, and that these conditions also exist on many nonfederal lands. Our reviews over the last 5 years identified several weaknesses in the federal government's management response to wildland fire issues. These weaknesses included the lack of a national strategy that addressed the likely high costs of needed fuel reduction efforts and the need to prioritize those efforts. Our reviews also found shortcomings in federal implementation at the local level, where over half of all federal land management units' fire management plans did not meet agency requirements designed to restore fire's natural role in ecosystems consistent with human health and safety. These plans are intended to provide program direction for fuel reduction, preparedness, suppression, and rehabilitation actions. The agencies also lacked basic data, such as the amount and location of lands needing fuel reduction, and research on the effectiveness of different fuel reduction methods on which to base their fire management plans and specific project decisions. Furthermore, coordination among federal agencies and collaboration between these agencies and nonfederal entities were ineffective. This kind of cooperation is needed because wildland fire is a shared problem that transcends land ownership and administrative boundaries. Finally, we found that better accountability for federal expenditures and performance in wildland fire management was needed. The agencies were unable to assess the extent to which they were reducing wildland fire risks, establish meaningful fuel reduction performance measures, or determine the cost-effectiveness of their efforts because they lacked both monitoring data and sufficient data on the location of lands at high risk of catastrophic fires. As a result, their performance measures created incentives to reduce fuels on all acres, as opposed to focusing on high-risk acres. Because of these weaknesses, and because experts said that wildland fire problems could take decades to resolve, we said that a cohesive, long-term federal wildland fire management strategy was needed. We said that this cohesive strategy needed to focus on identifying options for reducing fuels over the long term in order to decrease future wildland fire risks and related costs. We also said that the strategy should identify the costs associated with those different fuel reduction options over time, so that the Congress could make cost-effective, strategic funding decisions.
The federal government has made important progress over the last 5 years in improving its management of wildland fire. Nationally, it has established strategic priorities and increased resources for implementing them. Locally, it has enhanced data and research, planning, coordination, and collaboration with other parties. With regard to accountability, it has improved performance measures and established a monitoring framework. Over the last 5 years, the federal government has been formulating a national strategy known as the National Fire Plan, composed of several strategic documents that set forth a priority to reduce wildland fire risks to communities. Similarly, the Healthy Forests Restoration Act of 2003 directs that at least 50 percent of funding for fuel reduction projects authorized under the act be allocated to wildland-urban interface areas. While we have raised concerns about the way the agencies have defined these areas and the specificity of their prioritization guidance, we believe that the act's clarification of the community protection priority provides a good starting point for identifying and prioritizing funding needs. In contrast to fiscal year 1999, when we reported that the Forest Service had not requested increased funding to meet the growing fuel reduction needs it had identified, fuel reduction funding for both the Forest Service and Interior more than quadrupled by fiscal year 2005. The Congress, in the Healthy Forests Restoration Act, also authorized $760 million per year to be appropriated for hazardous fuels reduction activities, including projects for reducing fuels on up to 20 million acres of land. Moreover, appropriations for both agencies' overall wildland fire management activities, including preparedness, fuel reduction, and suppression, tripled from about $1 billion in fiscal year 1999 to nearly $3 billion in fiscal year 2005. The agencies have strengthened local wildland fire management implementation by making significant improvements in federal data and research on wildland fire over the past 5 years, including an initial mapping of fuel hazards nationwide. Additionally, in 2003, the agencies approved funding for development of a geospatial data and modeling system, called LANDFIRE, to map wildland fire hazards with greater precision and uniformity. LANDFIRE—estimated to cost $40 million and scheduled for nationwide implementation in 2009—will enable comparisons of conditions between different field locations nationwide, thus permitting better identification of the nature and magnitude of wildland fire risks confronting different community and ecosystem resources, such as residential and commercial structures, species habitat, air and water quality, and soils. The agencies also have improved local fire management planning by adopting and executing an expedited schedule to complete plans for all land units that had not been in compliance with agency requirements. The agencies also adopted a common interagency template for preparing plans to ensure greater consistency in their contents. Coordination among federal agencies and their collaboration with nonfederal partners, critical to effective implementation at the local level, also have been improved. In 2001, as a result of congressional direction, the agencies jointly formulated a 10-Year Comprehensive Strategy with the Western Governors' Association to involve the states as full partners in their efforts.
An implementation plan adopted by the agencies in 2002 details the goals, time lines, and responsibilities of the different parties for a wide range of activities, including collaboration at the local level to identify fuel reduction priorities in different areas. Also in 2002, the agencies established an interagency body, the Wildland Fire Leadership Council, composed of senior Agriculture and Interior officials and nonfederal representatives, to improve coordination of their activities with each other and with nonfederal parties. Accountability for the results the federal government achieves from its investments in wildland fire management activities also has been strengthened. The agencies have adopted a performance measure that identifies the number of acres moved from high-hazard to low-hazard fuel conditions, replacing a measure that counted only the total acres of fuel reductions and thus created an incentive to treat less costly acres rather than the acres that presented the greatest hazards. Additionally, in 2004, to have a better baseline for measuring progress, the Wildland Fire Leadership Council approved a nationwide framework for monitoring the effects of wildland fire. While an implementation plan is still needed for this framework, it nonetheless represents a critical step toward enhancing wildland fire management accountability. While the federal government has made important progress over the past 5 years in addressing wildland fire, a number of challenges still must be met to complete development of a cohesive strategy that explicitly identifies available long-term options and the funding needed to reduce fuels on the nation's forests and rangelands. Without such a strategy, the Congress will not have an informed understanding of when, how, and at what cost wildland fire problems can be brought under control. None of the strategic documents adopted by the agencies to date have identified these options and related funding needs, and the agencies have yet to delineate a plan or schedule for doing so. To identify these options and funding needs, the agencies will have to address several challenging tasks related to their data systems, their fire management plans, and the assessment of the cost-effectiveness and affordability of different options for reducing fuels. The agencies face several challenges to completing and implementing LANDFIRE so that they can more precisely identify the extent and location of wildland fire threats and better target fuel reduction efforts. These challenges include using LANDFIRE to better reconcile the effects of fuel reduction activities with the agencies' other stewardship responsibilities for protecting ecosystem resources, such as air, water, soils, and species habitat, which fuel reduction efforts can adversely affect. The agencies also need LANDFIRE to help them better measure and assess their performance. For example, the data produced by LANDFIRE will help them devise a separate performance measure for maintaining conditions on low-hazard lands to ensure that their conditions do not deteriorate to more hazardous conditions while funding is focused on lands with high-hazard conditions.
In implementing LANDFIRE, however, the agencies will have to overcome the challenges presented by the current lack of a consistent approach to assessing the risks of wildland fires to ecosystem resources, as well as the lack of an integrated, strategic, and unified approach to managing and using information systems and data, including those such as LANDFIRE, in wildland fire decision making. Currently, software, data standards, equipment, and training vary among the agencies and field units in ways that hamper needed sharing and consistent application of the data. Also, LANDFIRE data and models may need to be revised to take into account recent research findings suggesting that part of the increase in wildland fire in recent years has been caused by a shift in climate patterns. This research also suggests that these new climate patterns may continue for decades, resulting in further increases in the amount of wildland fire. Thus, the nature, extent, and geographical distribution of hazards initially identified in LANDFIRE, as well as the costs for addressing them, may have to be reassessed. The agencies will need to update their local fire management plans when more detailed, nationally consistent LANDFIRE data become available. The plans also will have to be updated to incorporate recent agency fire research on approaches to more effectively address wildland fire threats. For example, a 2002 interagency analysis found that protecting wildland-urban interface communities more effectively—as well as more cost-effectively—might require locating a higher proportion of fuel reduction projects outside of the wildland-urban interface than currently envisioned, so that fires originating in the wildlands do not become too large to suppress by the time they arrive at the interface. Moreover, other agency research suggests that placing fuel reduction treatments in specific geometric patterns may, for the same cost, provide protection for up to three times as many community and ecosystem resources as other approaches, such as placing fuel breaks around communities and ecosystem resources. Timely updating of fire management plans with the latest research findings on the optimal design and location of treatments also will be critical to the effectiveness and cost-effectiveness of these plans. The Forest Service indicated that this updating could occur during annual reviews of fire management plans to determine whether any changes are needed. Completing the LANDFIRE data and modeling system and updating fire management plans should enable the agencies to formulate a range of options for reducing fuels. However, to identify optimal and affordable choices among these options, the agencies will have to complete certain cost-effectiveness analysis efforts they currently have under way. These efforts include an initial 2002 interagency analysis of options and costs for reducing fuels, congressionally directed improvements to their budget allocation systems, and a new strategic analysis framework that considers affordability. The Interagency Analysis of Options and Costs: In 2002, a team of Forest Service and Interior experts produced an estimate of the funds needed to implement eight different fuel reduction options for protecting communities and ecosystems across the nation over the next century.
Their analysis also considered how fuel reduction would affect future costs for other principal wildland fire management activities, such as preparedness, suppression, and rehabilitation, and what those costs would be if fuels were not reduced. The team concluded that the option that would reduce the risks to communities and ecosystems across the nation could require approximately tripling current fuel reduction funding to about $1.4 billion for an initial period of a few years. These initially higher costs would decline once fuels had been reduced enough to permit less expensive controlled burning methods in many areas and more fires could be suppressed at lower cost; total wildland fire management costs, as well as risks, would be reduced after 15 years. Alternatively, the team said that not making a substantial short-term investment using a landscape focus could increase both costs and risks to communities and ecosystems in the long term. More recently, however, Interior has said that the costs and time required to reverse currently increasing risks may be less when other vegetation management activities that also influence wildland fire—such as timber harvesting and habitat improvements—are considered; these activities were not included in the interagency team's original assessment. The cost of the 2002 interagency team's option that reduced risks to communities and ecosystems over the long term is consistent with a June 2002 National Association of State Foresters' projection of the funding needed to implement the 10-Year Comprehensive Strategy developed by the agencies and the Western Governors' Association the previous year. The state foresters projected a need for steady increases in fuel reduction funding up to a level of about $1.1 billion by fiscal year 2011. This is somewhat less than the interagency team's estimate, but still about 2-1/2 times current levels. The interagency team of experts who prepared the 2002 analysis of options and associated costs said their estimates of long-term costs could only be considered an approximation because the data used for their national-level analysis were not sufficiently detailed. They said a more accurate estimate of the long-term federal costs and consequences of different options nationwide would require applying this national analysis framework in smaller geographic areas using more detailed data, such as that produced by LANDFIRE, and then aggregating these smaller-scale results. The New Budget Allocation System: Agency officials told us that another management system under development—the Fire Program Analysis system—may provide a tool for applying this interagency analysis at a smaller geographic scale and aggregating the results nationally. This system, being developed in response to congressional committee direction to improve budget allocation tools, is designed to identify the most cost-effective allocations of annual preparedness funding for implementing agency field units' local fire management plans. Eventually, the Fire Program Analysis system, initially implemented in 2005, will use LANDFIRE data and provide a smaller geographical scale for analyses of fuel reduction options; thus, like LANDFIRE, it will be critical for updating fire management plans.
Officials said that this preparedness budget allocation system—when integrated with an additional component now being considered for allocating annual fuel reduction funding—could be instrumental in identifying the most cost-effective long-term levels, mixes, and scheduling of these two wildland fire management activities. Fully developing the Fire Program Analysis system, including the fuel reduction funding component, is expected to cost about $40 million and take until at least 2007 and perhaps until 2009. The New Strategic Analysis Effort: In May 2004, Agriculture and Interior began the initial phase of a wildland fire strategic planning effort that also might contribute to identifying long-term options and needed funding for reducing fuels and responding to the nation's wildland fire problems. This effort—the Quadrennial Fire and Fuels Review—is intended to result in an overall federal interagency strategic planning document for wildland fire management and risk reduction and to provide a blueprint for developing affordable and integrated fire preparedness, fuels reduction, and fire suppression programs. Because this effort considers affordability, it may provide a useful framework for developing a cohesive strategy that includes identifying long-term options and related funding needs. The preliminary planning, analysis, and internal review phases of this effort have been completed, and an initial report is expected in July 2005. The improvements in data, modeling, and fire behavior research that the agencies have under way, together with the new cost-effectiveness focus of the Fire Program Analysis system to support local fire management plans, represent important tools that the agencies can begin to use now to provide the Congress with initial and successively more accurate assessments of long-term fuel reduction options and related funding needs. Moreover, a more transparent process of interagency analysis in framing these options and their costs will permit better identification and resolution of differing assumptions, approaches, and values. This transparency provides the best assurance of accuracy and consensus among differing estimates, such as those of the interagency team and the National Association of State Foresters. In November 2004, the Western Governors' Association issued a report prepared by its Forest Health Advisory Committee that assessed implementation of the 10-Year Comprehensive Strategy, which the association had jointly devised with the agencies in 2001. Although the association's report had a different scope from our review, its findings and recommendations are nonetheless generally consistent with ours about the progress made by the federal government and the challenges it faces over the next 5 years. In particular, it recommends, as we do, completion of a long-term, federal, cohesive strategy for reducing fuels. It also cites the need for continued efforts to improve, among other things, data on hazardous fuels, fire management plans, the Fire Program Analysis system, and cost-effectiveness in fuel reductions—all challenges we have emphasized today. In conclusion, Mr. Chairman, the progress made by the federal government over the last 5 years has provided a sound foundation for addressing the problems that wildland fire will increasingly present to communities, ecosystems, and federal budgetary resources over the next few years and decades.
As yet, however, there is no clear single answer about how best to address these problems in either the short or long term. Instead, there are different options, each needing further development to understand the trade-offs among the risks and funding involved. The Congress needs to understand these options and trade-offs in order to make informed policy and appropriations decisions on this 21st century challenge. This is the same message we provided in 1999, when we first called for development of a cohesive strategy identifying options and funding needs. But such a strategy has still not been completed. While the agencies are now in a better position to complete one, they must build on the progress made to date by finishing the data and modeling efforts under way, updating their fire management plans with the results of these efforts and ongoing research, and following through on recent cost-effectiveness and affordability initiatives. However, time is running out. Further delay in completing a strategy that cohesively integrates these activities to identify options and related funding needs will only result in increased long-term risks to communities, ecosystems, and federal budgetary resources. Because there is an increasingly urgent need for a cohesive federal strategy that identifies long-term options and related funding needs for reducing fuels, we recommended that the Secretaries of Agriculture and of the Interior provide the Congress, in time for its consideration of the agencies' fiscal year 2006 wildland fire management budgets, with a joint tactical plan outlining the critical steps the agencies will take, together with related time frames, to complete such a cohesive strategy. In an April 2005 letter, Agriculture and Interior said that they will produce by August 2005, for the Wildland Fire Leadership Council's review and approval, a joint tactical plan that identifies the steps and time frames for completing a cohesive strategy. We look forward to the agencies completing this important step. However, as noted at the outset of this testimony, the window of opportunity for effectively addressing wildland fire is rapidly closing. Thus, developing a cohesive strategy should not wait until 2009, when LANDFIRE and the Fire Program Analysis system are fully developed. As we have noted, the 2002 interagency analysis of long-term options and costs is a good starting point that can serve as a basis for providing the Congress with interim updates on options and funding needs to respond to wildland fires. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact Robert A. Robinson at (202) 512-3841 or [email protected], or Robin M. Nazzaro at (202) 512-3841 or [email protected]. Individuals making key contributions to this testimony included David P. Bixler, Janet Frisch, Richard Johnson, and Chester Joy. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Wildland fires are increasingly threatening communities and ecosystems. In recent years, these fires have become more intense due to excess vegetation that has accumulated, partly as a result of past management practices. Experts have said that the window of opportunity for effectively responding to wildland fire is rapidly closing. The federal government's cost to manage wildland fires continues to increase. Appropriations for its wildland fire management activities tripled from about $1 billion in fiscal year 1999 to nearly $3 billion in fiscal year 2005. This testimony discusses the federal government's progress over the past 5 years and future challenges in managing wildland fires. It is based primarily on GAO's report: Wildland Fire Management: Important Progress Has Been Made, but Challenges Remain to Completing a Cohesive Strategy (GAO-05-147, Jan. 14, 2005). Over the last 5 years, the Forest Service in the Department of Agriculture and land management agencies in the Department of the Interior, working with the Congress, have made important progress in responding to wildland fires. Most notably, the agencies have adopted various national strategy documents addressing the need to reduce wildland fire risks, established a priority to protect communities in the wildland-urban interface, and increased efforts and amounts of funding committed to addressing wildland fire problems, including preparedness, suppression, and fuel reduction on federal lands. In addition, the agencies have begun improving their data and research on wildland fire problems, made progress in developing long-needed fire management plans that identify actions for effectively addressing wildland fire threats at the local level, and improved federal interagency coordination and collaboration with nonfederal partners. The agencies also have strengthened overall accountability for their investments in wildland fire activities by establishing improved performance measures and a framework for monitoring results. Despite producing numerous planning and strategy documents, the agencies have yet to develop a cohesive strategy that explicitly identifies the long-term options and related funding needed to reduce the excess vegetation that fuels fires in national forests and rangelands. Reducing these fuels lowers risks to communities and ecosystems and helps contain suppression costs. As GAO noted in 1999, such a strategy would help the agencies and the Congress to determine the most effective and affordable long-term approach for addressing wildland fire problems. Completing this strategy will require finishing several efforts now under way, each with its own challenges. The agencies will need to finish planned improvements in a key data and modeling system--LANDFIRE--to more precisely identify the extent and location of wildland fire threats and to better target fuel reduction efforts. In implementing LANDFIRE, the agencies will need more consistent approaches to assessing wildland fire risks, more integrated information systems, and better understanding of the role of climate in wildland fire. In addition, local fire management plans will need to be updated with data from LANDFIRE and from emerging agency research on more cost-effective approaches to reducing fuels. Completing a new system designed to identify the most cost-effective means for allocating fire management budget resources--Fire Program Analysis--may help to better identify long-term options and related funding needs.
Without completing these tasks, the agencies will have difficulty determining the extent and location of wildland fire threats, targeting and coordinating their efforts and resources, and resolving wildland fire problems in the most timely and cost-effective manner over the long term.
Before a drug can be marketed in the United States, its sponsor must demonstrate to FDA that the drug is safe and effective for its intended use. FDA approves a drug for marketing when the agency judges that its known benefits outweigh its known risks. However, because premarket evaluations cannot predict safety and efficacy with absolute certainty, FDA continues to assess a drug's risks and benefits after it has been marketed. If the agency identifies a postmarket safety issue, it decides whether to take a regulatory action, such as withdrawing the drug's approval (which it rarely does) or communicating new safety information to the public and healthcare providers. The decision-making process for postmarket drug safety is complex and multidisciplinary, relying on iterative interaction among OND, OSE, and other FDA components. OND, which primarily conducts premarket reviews of drug applications submitted by drug sponsors, also has postmarket drug safety as one of its responsibilities. Although it interacts with OSE and staff from other offices concerning the postmarket safety of drugs, OND has ultimate responsibility for deciding whether to take regulatory action on these issues. The office is organized into 17 review divisions that generally reflect certain therapeutic areas, such as gastroenterology or oncology drugs. The review of safety and efficacy data from drug applications is conducted by OND medical reviewers, who typically are physicians with expertise in specific therapeutic areas and skill in the review of clinical trials. OSE's primary focus is on postmarket safety, although it is also involved in certain premarket drug safety issues. OSE has traditionally operated primarily in a consultant capacity to OND and has not had any independent decision-making responsibility. When a safety issue is identified, OSE staff may conduct an analysis and produce a written report called a “consult” to assist OND. Safety consults could include analyses of adverse event reports and assessments of postmarket study designs. In contrast to OND's organization by therapeutic area, OSE is organized into five divisions that each reflect different areas of its drug safety responsibilities. Two divisions analyze adverse event reports; one division reviews epidemiologic studies completed by drug sponsors and conducts its own studies; one division reviews risk management plans submitted by drug sponsors; and one division reviews proposed proprietary drug names submitted by drug sponsors for their new products, as well as postmarket studies of medication errors completed by drug sponsors and others. To help it provide oversight of important, high-level safety decisions, FDA established the Drug Safety Oversight Board in spring 2005. The board is composed primarily of FDA staff, including OND and OSE officials, but also includes officials from other federal agencies, such as the National Institutes of Health. It was established with the goal of providing independent oversight and making recommendations to the CDER Director about the management of important drug safety issues. An important part of the drug approval and postmarket monitoring process is the advice the agency receives from CDER's 16 drug-related scientific advisory committees, which are composed of external experts. The committees are generally organized into specific therapeutic areas, such as gastrointestinal drugs or oncologic drugs.
In 2002, FDA established DSaRM, which is one of the 16 committees. In contrast to the committees focused on specific therapeutic areas, DSaRM was established to advise FDA on drug safety and risk management issues across therapeutic areas. The committee's charter states that DSaRM is to be composed of 14 members—13 voting members with drug safety expertise and 1 nonvoting member to represent the drug industry. DSaRM members can also be asked to participate in other scientific advisory committee meetings when safety issues are discussed. OSE sets the agenda for DSaRM meetings, whereas OND sets the agenda for meetings of the other 15 committees. Advisory committees may make recommendations to FDA, but these recommendations are not binding on the agency's decision making. If individuals within CDER have differences of professional opinion or scientific disputes regarding a decision taken by the agency, they are generally expected to try to resolve them through their supervisory chain. If staff cannot resolve the dispute through this process, they can access CDER's differing professional opinion (DPO) program. First implemented as a pilot program in November 2004, it provides a process through which individuals can protest agency actions or inaction when they believe there is a risk of a significant negative impact on public health. Under this process, a dispute filed by a CDER employee could be reviewed by an ad hoc panel of three to four employees. The panel chair, who is appointed by the CDER Director, appoints the additional members, one of whom is nominated by the employee initiating the dispute. The panel would make a recommendation for resolving the dispute to the CDER Director. Several elements of this process are overseen by the CDER Ombudsman's Office, in consultation with the CDER Director. FDA uses evidence from multiple data sources, each with certain strengths and weaknesses, to inform its postmarket decision-making process. FDA uses randomized clinical trial data to assess drug safety prior to approval. However, these data have inherent weaknesses, so the agency uses other data to continue to assess drug safety once drugs are on the market. One method of assessing postmarket drug safety is through the collection and analysis of reports of adverse events associated with drug use. FDA requires drug sponsors to submit adverse event reports for the drugs they market. In addition, healthcare providers and patients may voluntarily submit adverse event reports to FDA's MedWatch program by telephone, by mailing or faxing a paper form, or through a Web-based application on the MedWatch Web site. In 1997, CDER implemented the Adverse Event Reporting System (AERS), which it uses to store reports of adverse events. Adverse events are often a basis for postmarket safety actions; however, adverse event reporting has limitations that make it hard to establish the magnitude of a safety problem or to compare risks across similar drugs. Therefore, once a “safety signal” is identified for a marketed drug, FDA may use data from observational epidemiologic studies to further examine relationships between a drug's use and reported adverse events. To conduct these studies, the agency seeks data from large, external databases of electronic health information—including claims data collected by health insurance companies and electronic medical records of care provided through large healthcare systems.
(See table 1 for a description of these data sources used to inform drug safety decision making before and after approval.) In 2006, we reported that FDA's process for overseeing postmarket drug safety was limited by a lack of clarity about OSE's role in decision making. For example, while OSE often made recommendations to OND in the consults that it completed, the agency had no policy explicitly stating whether this was part of OSE's role. OSE staff also reported that these consults sometimes fell into a “black hole” or “abyss” and that they would not be informed of the results of their recommendations. Also in 2006, IOM noted that an imbalance in authority, formal role, and resources between OND and OSE constituted a major obstacle to a healthy organizational culture in CDER. Furthermore, IOM reported that FDA's challenges reflect how premarket and postmarket functions have historically been divided. OSE generally takes a population-based perspective in its drug safety work, utilizing adverse event reporting and observational studies, while OND generally takes a clinical perspective that focuses primarily on randomized clinical trials. IOM reported that OND staff often view the observational data used by OSE as “soft” and unconvincing, while OSE staff view these data as informative and carrying great weight. IOM noted that the imbalance in roles and responsibilities denoted a subservience of the safety function and a devaluation of OSE's discipline and approach by agency management. We also identified several specific limitations to FDA's postmarket decision-making process. Several years prior to the release of our 2006 report, FDA started drafting a policy intended to clarify the role of staff, including those from OSE, in the decision-making process. However, the policy had not been finalized and implemented by the time our 2006 report was issued. In addition, we reported that the role of OSE staff in planning for and participating in advisory committee meetings, other than those involving DSaRM, was not clear. We also found that the DPO program had not been used and may not have been viewed as sufficiently independent because it did not offer employees a forum for resolving disputes that was independent of the CDER Director. We reported, for example, that the CDER Director would help decide whether a dispute warranted review and would also make the final decision about how the dispute would be resolved. We also found that OSE management had not effectively overseen postmarket drug safety and lacked systematic information on this process. Specifically, although OSE maintained a database of consult requests it received from OND, the database did not include information about whether OSE staff had made recommendations to OND regarding safety actions. It also did not include information on how the safety issues were resolved, including whether OSE's recommended safety actions were implemented by OND. In addition, in 2006, OIG found weaknesses in the extent to which FDA tracked another element of postmarket drug safety: the progression of postmarketing studies that FDA had requested drug sponsors to complete. OIG found that FDA could not readily determine whether these studies were progressing toward completion in a timely manner. We also found in 2006 that FDA faced constraints in its access to data that would allow it to monitor the safety of marketed drugs.
For example, FDA staff and external drug safety experts told us that OSE did not have enough funding to support the purchase of data for postmarket drug surveillance. Similarly, IOM found that funding for purchasing data was severely limited and had changed little in over 20 years. IOM also found that FDA devoted limited resources to the staff training and supportive technology needed to fully utilize purchased data. Furthermore, IOM concluded that AERS was outdated and inefficient and that the agency had given little attention to using systematic methods for screening AERS for adverse events. We made multiple recommendations to FDA in 2006 that were intended to improve its oversight of postmarket drug safety. We recommended that FDA revise and implement its draft policy on major postmarket drug safety decisions, clarify OSE's role in FDA's scientific advisory committee meetings involving postmarket drug safety issues, improve CDER's dispute resolution process by revising the DPO program to increase its independence, and establish a mechanism for systematically tracking OSE's recommendations and subsequent safety actions. (See app. I for a summary of FDA actions taken in response to these recommendations.) The Food and Drug Administration Amendments Act of 2007 (FDAAA) provided the agency with additional responsibilities intended to improve its oversight of postmarket drug safety. For example, FDAAA provided FDA with new authority to require drug sponsors to complete postmarketing studies to identify a serious risk or assess a known serious risk. Prior to the enactment of FDAAA, FDA only had the authority in limited circumstances to require drug sponsors to conduct a postmarket drug safety study; outside of these circumstances, the agency could only request that drug sponsors voluntarily agree to conduct such studies. FDAAA also provided FDA with new authority to require drug sponsors to complete risk management plans. Previously, FDA issued guidance to drug sponsors to assist in the development of voluntary risk management plans. FDA may now require drug sponsors to implement a risk management plan through specific approaches, known as a Risk Evaluation and Mitigation Strategy (REMS). FDAAA also provided the agency with authority to impose civil monetary penalties on drug sponsors who violate these requirements. FDAAA also requires FDA to conduct several other postmarket drug safety activities. For example:

- FDA must, in collaboration with public, academic, and private entities, develop a postmarket risk identification and analysis system that can be used to analyze safety data from multiple sources.
- FDA is required to screen AERS biweekly and publish quarterly reports of new safety information or potential signals of serious risks associated with the use of a drug.
- FDA is required to use DSaRM to seek input on certain activities, such as elements of REMS and the analysis of drug safety data.

In addition to increasing FDA's authorities, FDAAA also reauthorized the Prescription Drug User Fee Act of 1992 (PDUFA). Originally, PDUFA authorized FDA to collect user fees from drug sponsors in order to support the review of drug applications, and it established performance goals, such as time frames for the review of applications. The increase in attention to timely drug approval decisions led to greater awareness of the need for FDA to strengthen its monitoring of postmarket drug safety, which was reflected in the 2002 reauthorization of PDUFA.
The most recent reauthorization of PDUFA, in September 2007 as part of FDAAA, expanded the postmarket drug safety activities for which FDA is authorized to apply user fees. For example, the law identified the development of adverse event data collection systems as an activity that could be funded through user fees. In addition to amounts authorized to be used for all user fee activities, both premarket and postmarket, the PDUFA reauthorization identified specific annual fee revenues to be used for postmarket drug safety activities. In total, FDA reported that it plans to increase its allocation of annual user fees to support postmarket drug safety from about $54 million in fiscal year 2008 to about $102 million in fiscal year 2012. Overall premarket and postmarket funding for OSE and OND has increased since fiscal year 2006. From fiscal year 2006 through fiscal year 2008, OSE funding increased from about $31 million to about $71 million. During that same period, OND funding increased from about $115 million to $144 million. For both OSE and OND, much of the increase occurred in fiscal year 2008 and can be attributed to increased user fees. (See fig. 1.) Additionally, across all of CDER, funding for postmarket drug safety increased from about $54 million in fiscal year 2006 to $139 million in fiscal year 2008. Of the $139 million in fiscal year 2008, about $84 million was from fiscal year appropriations and $55 million was from user fees. FDA has begun to implement a new process and initiatives intended to clarify roles related to postmarket safety decision making, but it faces a variety of challenges. Several initiatives have not been fully implemented, and the agency has not increased the independence of its dispute resolution program. To enhance postmarket drug safety, FDA has begun to formalize interactions between OND and OSE, although some key elements of this new process have not been implemented. In the past, FDA has not afforded the same focus and attention to postmarket drug safety as it has to the drug approval process. For example, an agency official said that, unlike for the premarket process, roles and responsibilities for the postmarket process have not been clearly defined. Therefore, in January 2008, the agency began to establish a new framework for drug safety—which it calls the Safety First Initiative—that is intended to provide this structure. Under the initiative, the agency has adopted a multidisciplinary approach based on principles it refers to as Equal Voice, which are intended to ensure that all necessary parties contribute to decision making. In addition, OSE and OND signed a memorandum of agreement (MOA) in June 2008 that states FDA's intent for the two offices to contribute equally in determining regulatory actions related to drug safety. However, in most cases, OND retains the authority to decide whether to take regulatory action. According to FDA, OND retains these authorities because, for most decisions related to postmarket drug safety, OND staff have the broadest expertise in evaluating and managing the clinical risks and benefits of drugs. However, as part of the MOA, FDA has transferred authority for one regulatory responsibility related to premarket drug safety from OND to OSE and plans to transfer authority for two postmarket responsibilities, although it has not set a time frame for doing so. The MOA describes the agency's intent to transfer to OSE the authority to make final decisions for those activities in which the office has expertise.
Initially, these include three drug safety activities that reside with OND: (1) review of proprietary drug names submitted by sponsors, (2) review of protocols and findings of observational epidemiologic studies, and (3) review of protocols and studies that assess medication error risks. In April 2009, FDA transferred to OSE the authority for the first regulatory responsibility, the premarket review of proprietary drug names, which gives OSE final decision-making authority for the activity and allows the office to communicate directly with the drug sponsor and issue letters approving or rejecting drug names. An OND official said that the transfer of authority for this responsibility has been beneficial because proprietary name review was not an area in which OND had much expertise. An OSE official said that, since the transfer, decisions have been more consistent and the decision letters issued to drug sponsors have been more transparent. Agency officials said they selected proprietary name reviews as the first authority to transfer to OSE because the process is well defined and self-contained, and it will give OSE experience leading a significant drug safety activity while building its expertise to assume authority for the additional responsibilities named in the MOA. Officials said the agency intends to transfer authority for the two postmarket drug safety responsibilities to OSE, but it has not set a time frame for doing so. Agency officials added that coordinating some elements of the remaining responsibilities will be more complex and OSE still needs to increase its staff to assume these additional responsibilities. FDA has established multiple opportunities for staff from different disciplines to discuss drug safety issues. As part of the MOA, postmarket safety issues would be managed by an interdisciplinary team process that is similar to FDA's process for managing drug approvals. FDA issued an interim policy describing these safety issue teams in May 2009. Teams would be created as needed and would include OSE, OND, and other staff necessary to evaluate a given safety issue and make a decision about any needed regulatory actions. As part of this process, the teams would establish target dates for evaluating the safety issue and later monitor the implementation of any regulatory actions. FDA officials said that teams have been formed in the past to discuss safety issues, but this new policy formalizes existing team-based review practices to provide consistency in resolving safety issues. Officials said that they began training staff on the new policy in July 2009, but they could not provide an estimate of the number of teams that have been formed. In addition, FDA established routine joint safety meetings between OND divisions and their OSE counterparts. In contrast to the safety issue teams, which are established to manage a specific issue, the joint safety meetings focus on broader scientific matters and status updates of joint interest to both OND and OSE. The agency also continues to hold meetings of its Drug Safety Oversight Board. FDA indicated that the board serves as a forum to discuss emerging and often controversial drug safety issues. The board recently expanded its membership to include representatives from additional federal agencies, including the Department of Defense and HHS's Indian Health Service. According to FDA, board members from other federal agencies allow FDA to hear perspectives on how its drug safety decisions affect federal healthcare systems.
OSE and OND employees in our small group interviews generally identified positive outcomes from FDA’s initiatives, although most OSE employees indicated that OND still has more authority in the postmarket decision-making process. Many of the OND and OSE employees who participated in our small group interviews told us that the more formalized process for managing safety issues has helped improve interactions between the two offices since our last report. For example, several OSE employees said that they now consistently receive a response from OND about their consults and recommendations, even if they are not always followed, and these reports no longer fall into a “black hole,” as we reported in 2006. Employees also described increased communication between the two offices, which some said improved tracking of safety issues but others said slowed the decision-making process. With regard to OSE’s influence in the postmarket decision-making process, 75 percent (39 of 52) of OND and OSE employees who completed our DCI indicated that OSE’s influence has increased since 2006. However, OND and OSE employees differed in whether they thought OSE currently serves as an equal partner in decision making. Of the OND employees who completed our DCI, 64 percent (14 of 22) indicated that OSE now serves as an equal partner. In contrast, 57 percent (17 of 30) of OSE employees indicated that OND’s perspective still carries more weight, although 60 percent (18 of 30) indicated that they thought OSE would serve as an equal partner once the new initiatives were fully implemented. Despite changes to FDA’s postmarket decision-making process, OND and OSE employees report that differences still exist in how the two offices view information used to make decisions. For example, one OSE employee said that OND staff trust the results of randomized clinical trials over the epidemiologic data used by OSE, and another OSE employee said that OND is generally more resistant to accepting drug safety recommendations based on epidemiologic data. Some OND employees also said that physicians are better at identifying the direct clinical impact of a drug than other types of staff, such as epidemiologists, who may be more skilled in data analysis. OSE is taking steps to address these differences. For example, an official said that OSE has provided training to OND staff on the methods it uses to do its work. In addition, officials told us that OSE plans to increase clinical expertise by hiring additional medical reviewers to assist it with the review of adverse event reports. FDA implemented both staffing and tracking initiatives intended to improve oversight of postmarket drug safety issues. In January 2008, OND created two new safety management positions within each of its 17 review divisions to reduce variability in how the divisions oversee postmarket drug safety. In addition to coordinating interactions between the offices, employees in these new management positions are to provide leadership and to ensure that adequate OND resources and attention are focused on safety issues. They also track postmarket safety activities which may reduce the burden on individual medical reviewers, who are also responsible for reviewing and recommending whether to approve drug applications. Several OND medical reviewers indicated during the small group interviews that the OND safety management positions have helped to track and coordinate management of postmarket safety issues. 
For example, one medical reviewer noted that medical reviewers have competing premarket deadlines related to PDUFA and it is helpful to have safety staff who do not have these deadlines and can focus on postmarket drug safety. In addition, OSE reorganized its existing safety project manager positions into a single group in October 2006 to oversee the management of safety issues across OSE divisions. These safety project manager positions serve as OSE counterparts to the OND management positions and are responsible for, among other things, coordinating meetings with OND and monitoring OSE activities. These project manager positions were each previously assigned to a specific OSE division. An OSE official said this reorganization was intended to provide OND staff with a single point of contact within OSE, rather than having separate contacts for each OSE division. Since the reorganization, the total number of safety project manager positions in this group has expanded from 9 to 25. However, several OSE employees in our small group interviews cited challenges related to their interactions with those holding these OSE safety project manager positions. Some said individuals in these positions still seem to be learning their new roles and responsibilities. An employee also said that turnover among the safety project manager positions has made it difficult for the individuals holding those positions to gain experience. As of July 2009, 20 of the 25 OSE safety project manager positions were filled, but an official stated that turnover has been a problem and only one of the individuals has been in that position since October 2006. The official said that the expansion of responsibilities resulting from the reorganization was challenging for some of the individuals and noted that a lack of training and clear policies and procedures for these new positions may have contributed to the high turnover. The official said OSE is hoping to improve retention by implementing training and other support systems for these staff. FDA is also implementing a new tracking system to assist OSE and OND staff in overseeing identified safety issues, although the system has limitations. In January 2007, in response to our 2006 recommendation, FDA began to incorporate a safety module within its Document Archiving, Reporting, and Regulatory Tracking System (DARRTS) to track the agency’s management of and response to significant safety issues identified with the use of marketed drugs. FDA requires that each significant safety issue identified by OND and OSE be tracked within DARRTS by creating a “tracked safety issue” file. As of July 14, 2009, there were 394 active issues. DARRTS is used, among other things, to generate a workplan and assign responsibilities for managing these issues, as well as to provide updates on the status of these issues. Officials told us that while the system contains documents describing specific recommendations and safety actions, it does not, as we recommended, allow FDA to systematically track how issues were resolved and whether OSE’s recommendations were implemented. For example, an FDA official told us that DARRTS cannot provide the agency with a summary of the recommendations for safety actions that OSE has made to OND or how the safety issues were ultimately resolved. FDA indicated that, due to limited resources, it does not plan to incorporate this capability into DARRTS in the next year or two. 
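To make the tracking gap concrete, the sketch below shows what a tracked safety issue might look like if recommendations and resolutions were captured as structured data fields rather than only inside attached documents. The record layout, field names, and the summarize_ose_recommendations helper are illustrative assumptions, not DARRTS's actual schema; the point is only that a simple roll-up of OSE recommendations, of the kind officials said DARRTS cannot produce, becomes a routine query once the data are structured.

```python
# Hypothetical sketch of a structured "tracked safety issue" record.
# Field names and layout are illustrative assumptions, not DARRTS's schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Recommendation:
    text: str                            # e.g., "add a boxed warning"
    source_office: str                   # "OSE" or "OND"
    implemented: Optional[bool] = None   # None while the issue is still open

@dataclass
class TrackedSafetyIssue:
    issue_id: str
    drug_name: str
    date_opened: str                     # ISO date string, for simplicity
    status: str                          # e.g., "active" or "resolved"
    recommendations: List[Recommendation] = field(default_factory=list)
    resolution: Optional[str] = None     # how the issue was ultimately resolved

def summarize_ose_recommendations(issues: List[TrackedSafetyIssue]) -> dict:
    """Roll up OSE recommendations by implementation status -- the kind of
    summary FDA officials said DARRTS cannot currently generate because
    recommendations live in attached documents rather than data fields."""
    counts = {"implemented": 0, "not_implemented": 0, "pending": 0}
    for issue in issues:
        for rec in issue.recommendations:
            if rec.source_office != "OSE":
                continue
            if rec.implemented is True:
                counts["implemented"] += 1
            elif rec.implemented is False:
                counts["not_implemented"] += 1
            else:
                counts["pending"] += 1
    return counts
```

With the information stored only in attached documents, producing an equivalent summary today would require staff to open and read each file, which is why the agency cannot systematically report how safety issues were resolved.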
In addition, FDA has identified certain limitations with the system, such as problems of completeness and accuracy and the need for a mechanism to notify relevant staff when a new tracked safety issue is created. According to FDA, some of the identified problems have been corrected while others will be addressed at a later date. An official said that the agency expects that future problems will be minimized by improved preimplementation testing. For example, the official noted that the July 2009 update of DARRTS, which allows the system to be used for monitoring both postmarket studies and risk management plans, was more rigorously tested by users prior to its implementation. FDA is also utilizing contractors to improve oversight of specific new authorities created by FDAAA. We and others have identified problems in the agency's tracking of required and requested postmarketing studies, such as OND reviewers not meeting their goals for timely review of the annual status reports submitted by drug sponsors. In 2008, FDA hired a contractor to monitor and provide support for postmarketing studies, including the review of these annual status reports. FDA officials said that this contract has been very productive because it allows the time-consuming review of the annual status reports to be completed while the agency moves ahead in its oversight of the new postmarketing studies it is requiring under its FDAAA authority. The agency is also hiring a contractor to help oversee the required risk management plans. FDA is revising CDER's program for resolving scientific disputes raised by individual employees, but the changes do not sufficiently address our prior recommendation for improving the independence of the process. Beginning in 2007, FDA conducted a review of each of its centers' dispute resolution processes, including CDER's DPO program. As a result of this review, FDA developed a list of mandatory elements for all centers to implement during fiscal year 2008 and a list of voluntary best practices for scientific dispute resolution activities. For example, FDA now requires that employees of each center who file a DPO have the option to appeal to FDA's Office of the Commissioner for a review to determine if the center followed its own dispute resolution process correctly. CDER indicated that its DPO policy is being revised to reflect this "process review" and other new agencywide requirements, but noted that CDER plans to make few other changes. As of October 2009, the revised policy had not been finalized. While CDER continues to make changes to its DPO policy, the planned changes do not address a weakness we identified in our 2006 report—that the program it established to resolve scientific disputes may not be viewed as independent as a result of the CDER Director's extensive involvement. According to a July 2009 draft of the revised policy, as was the case in 2006, the Ombudsman, whom the policy designates as the focal point for overseeing the resolution of disputes, would consult with the CDER Director before deciding whether a dispute warrants review. An agency official told us that this consultation is important because the Ombudsman does not have the same scientific expertise as the CDER Director. The official acknowledged that, while the Ombudsman is included as a way to improve the independence of the DPO program, this position does not meet the standards of independence established by the Coalition of Federal Ombudsmen.
In addition, according to the draft DPO policy, the CDER Director would still appoint the chair of the ad hoc review panel and decide how the dispute should be resolved, in consideration of the panel’s recommendation. The draft DPO policy includes the required option of a process review by the Office of the Commissioner, which would not involve the center director or other center staff in decision making. However, this review is limited to determining whether CDER followed its own processes correctly, and it does not consider the scientific merits of the dispute. As a result, CDER’s revised DPO program still may not be viewed as sufficiently independent for resolving disputes. As of July 2009, CDER’s DPO program had not been used to resolve a difference of opinion. The Ombudsman attributed the lack of use to the CDER Ombudsman’s Office’s management of disputes so that they never reach the level of a formal DPO. FDA also indicated that the DPO program is narrowly focused on individual disagreements that employees have been unable to resolve within their supervisory chain; if agreement has not been reached between scientific disciplines, the principles of Equal Voice are intended to help different disciplines express differences of opinion. OND and OSE employees who completed our DCI reported a variety of reasons for why they chose not to file a formal DPO. Of the 52 OND and OSE employees who completed our DCI, 36 indicated that they had not had a difference of opinion that would have qualified for filing a dispute. However, 13 of the employees did report having a difference of opinion where they thought that FDA’s action or lack of action had the potential to have a significant negative impact on public health. When asked why they did not use CDER’s program to resolve this difference, these employees most frequently indicated that they preferred to express the opinion in written documentation (7) or were not aware of the program (6). In addition, 3 of these 13 employees noted concerns about the fairness of the DPO program as one reason for why they did not utilize it. None of the 13 employees indicated that they preferred the option of discussing the differing opinion informally with the Ombudsman. FDA plans to improve its identification of drug safety issues by developing new adverse event systems to collect and store adverse event reports and by increasing access to external sources of data. However, the adverse event systems and a new network of external data providers have not yet been implemented. FDA is developing two new adverse event systems to help it identify drug safety problems—one to improve the collection and processing of adverse event reports and another to store reports and provide FDA staff with improved tools for analyzing them. FDA’s complete adverse event system for human drugs will not be implemented until the end of 2010. The new adverse event report collection and processing system, MedWatchPlus, is intended to increase the accuracy and timeliness of reports accessible to FDA staff and is scheduled to be implemented for human drugs by summer 2010. The current MedWatch Web site collects adverse event reports about prescription drugs by providing forms that patients and healthcare providers can submit online or download and send to FDA in paper form. Drug manufacturers may also use this system to download forms, although they may elect to submit electronically through an alternative system, the Electronic Submissions Gateway (ESG). 
Although reports submitted through ESG go directly into CDER's database of adverse events (AERS), paper reports and reports submitted using the MedWatch online form must be processed and manually entered into AERS before they are available to FDA staff. FDA estimates that reports submitted on paper may take from 2 weeks to 2 months from the time of receipt to be entered into AERS, where they can be analyzed by FDA staff. The new MedWatchPlus system will allow online reports to be processed automatically and transferred directly into the agency's adverse event system, reducing the need to process and enter reports manually. According to FDA, automatic processing will cut down on errors related to data entry and should allow for more timely availability of reports for analysis. FDA estimates that electronic submissions are generally available in AERS within 2 days of their receipt. FDA expects that MedWatchPlus will enable the agency to increase the electronic submission rate of reports, increase the number of reports accessible to FDA staff for analysis, and improve report quality. In fiscal year 2008, 61 percent of reports from manufacturers were submitted electronically. In August 2009, FDA issued notice of a proposed rule that would require manufacturers to submit adverse event reports electronically, which would mean that manufacturers who do not currently submit reports electronically would either use ESG or would need to use the MedWatchPlus online form. Increasing the electronic submission rate should allow for more reports to be available to FDA staff. Currently, FDA does not routinely enter all paper reports from manufacturers into AERS, which an official attributed to the cost to the agency. However, all reports from manufacturers submitted electronically through MedWatchPlus will be automatically entered into AERS, which should reduce costs and allow for more reports to be available for analysis. FDA also expects to increase the number of electronic submissions from patients and healthcare providers by making the system easier to use. As part of MedWatchPlus, FDA will use an interactive questionnaire that will guide submitters through a series of questions, which FDA expects will increase the accuracy and completeness of reports. For example, submitter errors, such as inaccurate drug names, create a burden for FDA. Through MedWatchPlus, the submitter will be provided with a menu of choices for the name of the drug. The questionnaire will also audit the information received and prompt for missing information. FDA is also developing a new database to store adverse event reports once they have been submitted, one that should offer integrated data analysis features to facilitate the identification of safety issues. The new database, the FDA Adverse Event Reporting System (FAERS), is expected to receive reports from MedWatchPlus and other FDA applications for all FDA-regulated products and store them in a single location. In addition to avoiding redundancy among the center databases, FDA has stated that a consolidated database would benefit drug safety, for example, by facilitating the sharing of adverse event reports across centers for combination products. FAERS will replace AERS and is intended to address some current AERS limitations that affect how OSE staff do their work.
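The interactive questionnaire is, in effect, front-end validation of each report before it reaches the database. The sketch below illustrates how such checks might work; the required fields and the controlled drug-name menu are hypothetical stand-ins and do not represent FDA's actual MedWatchPlus form logic.

```python
# Hypothetical sketch of questionnaire-style validation for an adverse
# event report; the field names and drug menu are illustrative assumptions,
# not FDA's actual MedWatchPlus implementation.
KNOWN_DRUG_NAMES = {"Drug A", "Drug B", "Drug C"}  # stand-in for a controlled menu
REQUIRED_FIELDS = ("drug_name", "event_description", "patient_age", "patient_sex")

def validate_report(report: dict) -> list:
    """Return prompts to show the submitter; an empty list means the report
    can be accepted and routed directly into the adverse event database."""
    prompts = []
    # Audit the information received and prompt for anything missing.
    for required in REQUIRED_FIELDS:
        if not report.get(required):
            prompts.append(f"Please provide a value for '{required}'.")
    # Constrain the drug name to a menu of known products, avoiding the
    # inaccurate free-text names that create a burden for FDA staff.
    name = report.get("drug_name")
    if name and name not in KNOWN_DRUG_NAMES:
        prompts.append(f"'{name}' is not a recognized product; please choose from the menu.")
    return prompts

# Example: a report with a misspelled drug name and no patient age would be
# sent back with two prompts instead of entering the database incomplete.
prompts = validate_report({"drug_name": "Drg A", "event_description": "rash",
                           "patient_sex": "F"})
```

Catching a bad drug name or a missing field at submission time, rather than during manual data entry weeks later, is what should make the automated pipeline both faster and cleaner.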
FDA officials told us that OSE staff view the current version of AERS as a giant "filing cabinet," which lacks integrated software for data mining and signal management that could help them to monitor drug safety more effectively. FDA officials said that, currently, to use the software, staff have to periodically extract the data from AERS and transfer them to another system for analysis, which means that analyses cannot be conducted in real time. In contrast, FDA plans to include integrated signal management and data mining software in FAERS, which will make these features easier to use and allow for analyses of safety signals closer to real time. FDA officials said that the agency plans to address other adverse event report quality problems by including new features in FAERS. For example, an adverse event reviewer told us that AERS lacks a dedicated data field (such as a checkbox) to indicate whether a female patient described in an adverse event report is pregnant. As a result, reviewers must manually review the narrative of reports for women aged 15 to 45 to determine whether the patient was pregnant. FDA officials said that the agency plans to include in FAERS a dedicated data field to indicate whether a report identified the patient as pregnant. An adverse event reviewer also identified the lack of a link between an adverse event report and FDA-approved label information as a problem because it hinders staff in determining whether the adverse event is new or has already been identified and included in the drug's label. FDA officials said that linkage to label information is a goal for inclusion in FAERS, but it is complex and the agency does not have a time frame for its inclusion. FAERS development has experienced delays, but FDA expects that it will be partially implemented by the end of 2010. FDA began developing an update to AERS in 2004. However, according to a 2006 report by an FDA contractor, deficiencies in FDA's procurement practices and the agency's decision to expand the project's scope to develop an agencywide database for all FDA-regulated products resulted in delays. The contractor reported that these obstacles in development resulted in a 4- to 5-year delay and an estimated $25 million in additional development costs. FDA indicated that it is currently prioritizing FAERS requirements to determine what features and capabilities are possible for the first version of FAERS. FDA plans to complete the first version of FAERS, for drugs and biologics, by the end of 2010. However, this version will not include fully integrated data mining and signal management software. FDA does not have an estimated time frame for when these features will be fully integrated. FDA increased funding for acquiring the external data that it uses to examine drug safety issues from about $5 million in fiscal year 2007 to about $28 million in fiscal year 2008. FDA recently added funds to existing contracts with four private companies that conduct drug safety studies using their own databases of electronic health information. Since FDA initially awarded about $5.4 million in total to these companies in fiscal year 2005, these contracts have yielded five completed epidemiologic studies on drug safety, including a study on how antidepressant use in pregnancy affects the health of newborns. In fiscal year 2008, FDA added about $9 million in total to the four contracts.
However, FDA officials said that under the current contracts it is difficult to expand funding in response to the agency's needs, and they will be changing to a different contract type when these contracts end in 2010. They said the new contract type will make it easier to add funds as the need arises for additional epidemiologic studies to examine previously unknown drug safety issues. FDA has also used the increased data acquisition funds for contracts with private companies that allow FDA staff direct access to data that can be used to conduct drug safety studies internally. These contracts provide the agency with access to drug utilization data, which are useful to FDA for, among other things, providing an estimate of how many people have been exposed to a drug, information that provides context for adverse event analyses. These contracts allow FDA to download the data onto the agency's servers where staff can access the data to conduct drug safety studies. In 2008, FDA awarded contracts valued collectively at over $14 million for a base year and 3 option years. The three new contracts replaced an existing contract with a single vendor and, according to an FDA official, represent an approximate tripling of funding for access to drug utilization data. The official also said that contracts with three vendors allow shortcomings in one data set to be offset by information from another. For example, one contractor has mail order pharmacy claims data, which are not available from the other two contractors. In addition to funding contracts with private companies, FDA is in the early stages of forming partnerships with the Department of Veterans Affairs (VA), the Department of Defense (DOD), and the Centers for Medicare & Medicaid Services (CMS) to access their databases of electronic health information for drug safety research. FDA signed memoranda of understanding with VA and DOD in 2007 to enable these agencies to share with FDA the information necessary to evaluate drug safety. FDA allocated about $3.6 million to fund these agreements in 2008, which, among other things, provided funding for research projects, such as a study of the relationship between the use of smoking cessation drugs and suicidal behavior, and funding for staff to support such studies. In addition, FDA signed an interagency agreement with CMS in August 2008 to access both Medicaid and Medicare data. As part of this agreement, FDA transferred $1 million to CMS in part to fund a project to create a Medicaid database amenable to research on drug safety. FDA is also working on several pilot projects using Medicare prescription drug data. These data on Medicare beneficiaries provide the agency with access to new information on the elderly and disabled—groups that are generally underrepresented in traditional clinical trials that FDA uses to assess safety prior to approval. FDA officials said that partnering with federal agencies is beneficial because these agencies have large databases of electronic health information that may be accessed more cheaply than data from private entities. FDA is also taking steps to improve identification of safety issues by creating a network of external drug safety data providers, but the agency is in the early stages of developing it. The FDAAA-mandated surveillance system, known as the Sentinel System, will be a network of databases of electronic health information that can be utilized for safety signal evaluation for drugs and other marketed medical products.
FDA officials said one of the purposes of the Sentinel System will be to provide the agency with an active surveillance tool that will be capable of generating safety signals that are not identifiable through AERS. For example, AERS relies on patients and doctors to submit adverse event reports, but if they do not recognize an event as being potentially drug-related, they may not file an adverse event report. In addition, FDA expects that the Sentinel System will build on the current data contracts the agency uses to conduct formal epidemiologic studies, which are generally used to confirm safety signals after they have been identified, by allowing researchers to specify potential safety problems in advance and monitor for these problems in near real time. The Sentinel System is in the early stages of development, and as of June 2009 there were no established milestones. Thus far, FDA has established a senior management team, conducted a series of meetings with stakeholders, and created a working group of federal agencies that are developing complementary initiatives. FDA officials said they have not finalized funding or staffing plans for the system. In addition, many other key decisions have yet to be made, including sources of data, an information technology infrastructure, and methods of analysis. In 2008, FDA awarded eight contracts to investigate these and other issues. Seven of the reports from these contracts have been completed and FDA expects that the remaining report will be completed by the end of 2009. FDA's workload related to postmarket drug safety has increased as a result of new authorities and other factors. While the agency received increased funding and is hiring staff to conduct postmarket drug safety activities, it faces difficulties in recruiting the additional staff and external experts needed to meet its increasing responsibilities. FDA reports that new postmarket drug safety responsibilities and other factors have led to an increased workload, for which FDA has identified a need for additional staff. Of the OSE and OND staff who completed our DCI, 77 percent (40 of 52) indicated that their workload had increased or greatly increased since 2006. In addition, 60 percent (31 of 52) of the employees said that they either were not able to meet their postmarket drug safety responsibilities during an average workweek or were only able to meet these responsibilities by working overtime. Many employees told us during our small group interviews that one source of this increased workload has been the new postmarket drug safety responsibilities added by FDAAA. FDA officials said that requiring a drug sponsor to conduct postmarketing studies is more time consuming for FDA staff than the past process of requesting such studies. For example, to require a study, officials said the agency needs to document its rationale in a legally enforceable contract with a sponsor that may describe specific elements of the study design. The agency also works with sponsors to establish milestones for the completion of these studies. In addition, officials said the process of overseeing the development and implementation of a drug sponsor's required risk management plan has led to additional meetings between OND and OSE, as well as additional interactions with drug sponsors to review the proposal and discuss even minor modifications to it.
FDA officials said that the new FDAAA authorities are especially time consuming because the agency is still developing processes for how to conduct this new work. Officials said that proposals for requiring postmarketing studies and REMS are being reviewed by others within FDA to ensure consistency in the application of the authorities. FDA officials expect that some of this additional workload will decrease as the process becomes more routine. OND medical reviewers described challenges meeting their premarket and postmarket responsibilities. Several reviewers noted that their primary focus is on completing premarket work within PDUFA time frames, and issues related to postmarket safety receive lesser priority. Two medical reviewers said that important identified safety issues would take priority over meeting PDUFA deadlines, but other reviewers told us that their workload prevents them from conducting reviews that would allow them to identify new postmarket safety issues. For example, reviewers said they are unable to fully review the Periodic Safety Update Reports submitted by drug sponsors, which are comprehensive reports containing information on serious and nonserious adverse events. According to some OND reviewers, medical reviewers do not have the time to fully analyze these reports to look for potential safety issues. OSE staff told us that workload demands prevent them from reviewing these reports. Given that nonserious adverse events may not be entered into AERS, failure to fully review Periodic Safety Update Reports may result in FDA missing safety signals for nonserious adverse events. OSE also reported that competing demands, such as its new premarket responsibilities for reviewing proposed proprietary drug names within PDUFA deadlines and communicating its decisions to drug sponsors, impact its ability to meet its postmarket responsibilities. The staff involved in these reviews estimated that approximately 90 percent of their time is spent on such premarket activities, which leaves little time to spend on their other postmarket drug safety responsibilities, such as analyzing reports of medication errors. For example, an FDA employee told us that they do monitor AERS to identify safety signals, but they do not have time to complete follow-up reviews of these signals. Although employees agreed that the most important safety issues do get resolved, one employee said that follow-up reviews are often lower priority than fulfilling premarket responsibilities. In addition, other OSE staff identified competing demands that hampered their ability to conduct postmarket safety work. For example, OSE adverse event reviewers told us that consult requests from OND consumed the majority of their time, leaving them less time to conduct self-initiated safety analyses of adverse event data. According to FDA, each OSE adverse event reviewer receives an average of about 44 adverse event reports per day, and reviewers told us that given competing priorities, they are not able to review them all. A contractor reviewing OSE's increasing workload found that additional staff will be needed in order to fulfill the new responsibilities related to FDAAA and the MOA. According to the contractor's December 2008 report, OSE would need an estimated total of 453 full-time equivalent employees by 2011 to meet its increased workload, more than double OSE's current staffing.
While the contractor identified workload increases throughout OSE, it found that the greatest increases would be related to the review of risk management plans and postmarket safety data, such as adverse events. OSE and OND officials described fiscal year 2008 as a very successful hiring year. FDA indicated that since the start of fiscal year 2008, OND increased its staff from 736 to 928 and OSE increased its staff from 114 to 193. The staff hired included OND medical reviewers who conduct premarket and postmarket reviews and OSE staff with postmarket drug safety responsibilities, such as epidemiologists and risk management experts. Agency officials attributed this success to specific hiring initiatives. For example, officials told us that both OSE and OND used a summer 2008 job fair and direct-hire authority to hire staff more quickly. While the agency has had direct-hire authority for medical reviewers since 2003, FDA indicated that it temporarily obtained direct-hire authority from April 2008 through September 2008 for epidemiologists. The OSE and OND Directors said that they hired candidates within weeks under the authority, rather than the 3 to 6 months it can typically take to announce positions, screen applications, conduct interviews, and hire individuals. The OSE Director told us that without the authority, interested candidates have sometimes accepted employment offers elsewhere before FDA could extend its own offer. In addition, an official said that CDER's ability to offer hiring bonuses, relocation reimbursement, and student loan repayment contributed to its hiring success during fiscal year 2008. Although OSE significantly increased its staff in fiscal year 2008, hiring and staffing challenges could make it difficult for the office to meet the workload generated by its new postmarket drug safety responsibilities. While the contractor estimated that OSE would need 453 full-time equivalent employees by 2011, the OSE Director did not know whether the agency planned to increase OSE's fiscal year 2009 staff ceiling of 211 in fiscal year 2010. However, officials said that recruiting the right people with the desired drug safety expertise is difficult. For example, an OSE official said that it is hard to find candidates who have experience with the specific epidemiologic activities conducted by FDA, and the agency therefore looks for candidates with epidemiologic skills who can then be trained by the agency once they are hired. Officials indicated that while the new hires can bring up-to-date skills, their lack of experience means that it can take up to 3 years before newly hired employees can work independently. In addition, an official said it is difficult for OSE to compete with drug companies, who can offer higher compensation, for the same pool of talent. Given the estimated workload increases identified in the FDA contractor's December 2008 review, OSE may be challenged to hire staff quickly enough to meet its increasing workload. FDA officials said that they lack adequate computational capacity and enough staff to make full use of external sources of data for drug safety studies, and FDA expects the number of such studies to grow. OSE has increased funding for acquiring external data, and a recent workload planning report prepared by an FDA contractor indicates that OSE intends to triple the number of epidemiologic studies it conducts using such data, from 13 in 2008 to 39 in 2011.
An OSE official told us that currently, most of the epidemiologic studies are conducted by contractors, but that OSE would like to conduct more studies internally. The official said that internal studies afford FDA more control over the analyses, as well as provide increased professional opportunities to OSE staff, which may lead to greater staff retention. However, the official said that conducting more internal studies would require greater computational capacity and more staff. OSE officials told us, for example, that the current technological infrastructure limits staff to running a single analysis at a time and that the computer servers in CDER “routinely crash” when dealing with large data sets. OSE officials also said that they lack programmers who are needed to extract data from databases and prepare data sets for analysis. OSE officials said that the office has faced difficulties hiring programmers because the position descriptions that it would use to hire these programmers are currently only available to the agency’s Office of Information Management, which has meant that such staff may not be hired by OSE. They indicated that, without enough programmers, this work is shifted to epidemiologists, who must then spend more time on each study and have less time to devote to developing and carrying out additional studies. CDER is developing a computational science center that is intended to address some of these challenges, but this center is in the early stages of development. FDA indicated that the center is intended to support both pre- and postmarket quantitative analyses of the safety, efficacy, and quality of drugs. FDA officials said it should address current problems by providing increased computational capacity and more staff, including programmers and data managers that can be utilized by OSE. However, they said that the center is currently in the developmental stages, and that there is no time frame for its completion. In the interim, OSE is using short-term fixes, such as increasing the memory capacity of existing servers. OSE officials noted that OSE may also contract out some programming work, although they described challenges associated with contracting out this type of work. The officials said that each drug safety study can take 1 to 2 years to complete and receiving programming support on a task-by-task basis requires OSE to spend time reeducating new programmers each time there is a new task. In contrast, an OSE official said that having programmers within CDER could allow them to gain expertise on the kind of work OSE does. FDA increasingly utilized external drug safety experts serving on DSaRM to participate in advisory committee meetings to discuss identified safety issues of specific products, but the agency faces challenges recruiting new members. From 2002 through 2006, DSaRM met 9 times in 5 years— 5 times on its own as a committee and 4 times as part of joint meetings with other advisory committees. DSaRM met more frequently from January 2007 through December 2008, meeting 9 times—once on its own and 8 times as part of joint meetings. Most DSaRM meetings, and all 9 of the meetings in 2007 and 2008, have been held to discuss drug-specific issues. In addition to attending joint advisory committee meetings, individual DSaRM members served temporarily to supplement expertise during 12 meetings of other CDER advisory committees that occurred from 2007 through 2008. 
While several DSaRM members acknowledged the important expertise in drug safety that they can bring to discussions with other advisory committees, some members told us that the small number of meetings involving only DSaRM has resulted in a lack of cohesion among committee members. In addition, some members noted that meeting as a single committee would allow them to discuss broad principles of drug safety, rather than specific drug products, and to examine lessons learned across meetings. One member noted that without meeting as a single group on broad safety issues, the committee is unable to take advantage of the cumulative learning that comes with a coherent process. An FDA official said that the agency recognizes that temporarily serving on other advisory committees has been a burden for DSaRM members. The official said that, therefore, the agency has been expanding a pool of consultants that can instead provide temporary drug safety expertise at these other advisory committee meetings. Despite the increased demand for DSaRM’s drug safety expertise, the agency has been challenged to fill all of the committee’s vacancies. For the past few years DSaRM has had between 6 and 9 of its 14 slots vacant. In contrast, from 2003 through 2006, DSaRM had no more than one vacancy. A few of the DSaRM members that we interviewed told us that additional members are needed to reduce the existing members’ workload. The OSE Director said that a more intensive effort to recruit members to the committee began in 2008, but it has been difficult to find qualified individuals who have no financial conflicts of interest. Recruiting new members will be especially important because 3 members’ terms expired on May 31, 2009. An official said that the agency appointed 3 new members to the committee on July 1, 2009. While this gives the committee a total of 8 members, 3 of these members’ terms expire on May 31, 2010. An official said the agency is reviewing approximately 43 candidates for potential conflicts of interest, with the goal of filling the DSaRM vacancies as soon as possible. The number of vacancies may present challenges to FDA’s implementation of new FDAAA requirements for seeking advice from DSaRM on risk management plans and the analysis of drug safety data. FDA indicated that it plans to convene DSaRM in accordance with the FDAAA requirements, although officials said that the agency has not yet done so and the requirements will result in FDA using DSaRM differently than in the past. An official said that the agency is therefore in the process of determining how to best involve the committee in these new activities. Some of the DSaRM members with whom we spoke noted that the FDAAA provisions appear to relate to broader drug safety issues than the committee has generally considered. One member noted that the committee would not be able to fulfill the new FDAAA requirements at product-specific meetings; rather, the complete committee would probably have to meet on its own. If the agency continues to have a large number of vacancies with DSaRM, it could be difficult for the committee to fulfill these additional duties while also participating in discussions of specific drug products. FDA’s oversight of postmarket drug safety has been a long-standing concern, with various groups reporting problems for more than 30 years. Our 2006 report on this topic cited the need for FDA to improve its decision-making process for postmarket drug safety. 
To enhance this process, FDA has recently begun to take steps that respond to our concerns, as well as those expressed by others. However, many of its initiatives are new and are in the early stages of development and implementation. For example, the agency's efforts to begin formalizing its decision-making process, hire more staff, and establish dedicated safety positions within OND are an encouraging start. As FDA has gone about planning to improve its postmarket oversight, it has also needed to respond to changes brought about by FDAAA, which resulted in increased responsibilities for postmarket drug safety. FDA employees have since cited several instances in which increases in their workload and competing premarket demands and other priorities have prevented them from fully carrying out their postmarket drug safety responsibilities. We recognize that with a growing workload come additional challenges. The agency's initiatives will require time and resources before they can make a significant impact on previously identified problems. While we view FDA's plans as positive, it is not yet clear if or when FDA's decision-making process will be substantially improved as a result of its efforts. As one of its efforts to enhance postmarket decision making, the agency plans to transfer additional authorities from OND to OSE. Transferring these authorities could help FDA better align decision-making responsibilities with the division of expertise between the two offices. However, the agency has set no time frames for their transfer and has stated that OSE needs increased experience and resources before the office is able to assume the new authorities. FDAAA provided the agency with greater flexibility to allocate funds to postmarket drug safety. Therefore, as FDA considers this transfer, it is important that it take advantage of this flexibility to align its resources in such a way that it strikes an appropriate balance between its competing premarket and postmarket priorities and ensures that postmarket safety receives sufficient attention. Establishing a time frame for this transfer and adequately preparing OSE to assume these authorities are important next steps to ensuring appropriate oversight of postmarket drug safety. To address weaknesses in FDA's oversight of postmarket drug safety, we recommend that the Commissioner of FDA develop a comprehensive plan for transferring the additional regulatory authorities from OND to OSE that includes time frames for the transfer and steps to ensure resources are properly aligned to allow OSE to assume these responsibilities. We provided a draft of this report to HHS for review. HHS provided comments from FDA, which agreed with our recommendation. FDA's comments are reprinted in appendix II. FDA also provided technical comments, which we incorporated as appropriate. Regarding our recommendation, FDA agreed that developing a comprehensive plan to prepare OSE for the transfer of additional regulatory authorities is desirable. However, it noted that the details of such a plan, including time lines, remain dependent upon available funding and the agency's ability to recruit and retain the necessary staff to assume additional responsibilities. While we agree that both funding and staff are important to the successful transfer of these regulatory authorities, we believe that FDA has the flexibility to align its resources in such a way as to ensure that postmarket drug safety receives appropriate attention.
Furthermore, we believe that the development of a comprehensive plan and time line is an important step towards ensuring that necessary funding levels and staffing needs are identified and secured. In addition to commenting on our recommendation, FDA addressed several other issues. First, it emphasized that, since our 2006 report was issued, it has undertaken a comprehensive set of activities to improve its postmarket drug safety program. We agree that FDA has begun to take some important steps to improve its decision-making process, but as we noted earlier, we believe that it is too early to judge the effectiveness of these steps. Second, FDA stressed that postmarket drug safety decisions are often complex and frequently require the involvement of staff from a number of scientific disciplines. The agency noted that for each of the many regulatory decisions that need to be made, a decision maker must have the delegated responsibility and authority to make these decisions. It indicated, for example, that in most cases OND has the broadest expertise to make decisions about postmarket drug safety. We understand that, while multiple areas of expertise are brought to bear in assessing safety issues, there may need to be a single office responsible for making final decisions. We added language in the report to clarify FDA's position on OND expertise in postmarket decision making. Third, FDA noted that we implied that OND and OSE are the only significant participants in drug safety decision making. We understand that, depending on the safety issue, a variety of FDA offices and scientific disciplines may be involved in decision making, and our draft report acknowledged this. However, our work appropriately focused on OND and OSE because of the key roles they play in postmarket decision making and because of the concerns that were raised about the relationship between these two offices in our 2006 report. Finally, FDA said that our report omitted the contribution its Drug Safety Oversight Board has made to postmarket decision making. We recognize that this board plays a role in postmarket safety, as discussed in our 2006 report. The focus of our current report was to describe new initiatives underway at FDA. However, we have added information about the board to our report in response to FDA's comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Commissioner of FDA and appropriate congressional committees. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In our 2006 report, we made recommendations to the Food and Drug Administration (FDA) that were intended to improve its oversight of the postmarket drug safety decision-making process. Specifically, we recommended that FDA: 1. revise and implement its draft policy on major postmarket drug safety decision making, 2. clarify the Office of Surveillance and Epidemiology's (OSE) role in FDA's scientific advisory committee meetings involving postmarket drug safety issues, 3.
improve the Center for Drug Evaluation and Research’s (CDER) dispute resolution process by revising the pilot program for resolving differing professional opinions (DPO) to increase its independence, and 4. establish a mechanism for systematically tracking OSE’s recommendations and subsequent safety actions. Regarding the draft policy on major postmarket drug safety decision making, according to FDA, the agency no longer plans to complete it. This policy was intended to ensure that all major postmarket safety recommendations be discussed by the relevant officials and present a process for making recommendations and resolving disagreements. An official said that, in light of the multidisciplinary approach it has established through the Safety First Initiative and principles of Equal Voice, FDA’s postmarket decision-making process has changed, and as a result, the process described in the draft policy was no longer relevant. The official said that the agency determined that it was not necessary to issue a separate policy on major postmarket drug safety decision making. Regarding the clarification of OSE’s role at scientific advisory committee meetings, an FDA official told us that instead of developing such a policy, the agency added language to the manual for the agency staff responsible for managing the advisory committees. The manual instructs these staff to ask the OND division coordinating an advisory committee meeting involving drug safety issues whether OSE should be involved in the meeting. This manual does not specifically address the role of presentations by OSE staff in those advisory committee meetings. However, an FDA official we spoke with was not aware of any recent instances in which OSE employees were excluded from presenting at an advisory committee meeting. Of the 30 OSE employees who completed our data collection instrument, 15 indicated that they had no opinion about the extent to which CDER has become more or less accepting of employees expressing dissenting views at advisory committee meetings. However, of the remaining 15 employees, 10 indicated that CDER has been more accepting of such presentations since 2006. Regarding CDER’s DPO program, FDA initiated an agencywide review of its dispute resolution process that instituted new requirements for each center to follow. CDER indicated that it is making few changes to its DPO policy, which an official told us already incorporated most of the new elements resulting from the agencywide review. However, according to a July 2009 draft of that policy, the planned changes do not address our recommendation to increase the program’s independence. A CDER official indicated that, under the revised policy, the Ombudsman would still consult with the CDER Director before deciding whether a dispute warrants formal review. In addition, the CDER Director is still the final decision maker regarding how the dispute should be resolved. Regarding the implementation of a mechanism for systematically tracking OSE’s recommendations and subsequent safety actions, FDA is in the process of implementing the Document Archiving, Reporting, and Regulatory Tracking System (DARRTS). In January 2007, in response to our 2006 recommendation, FDA began to incorporate a safety module within DARRTS to track the agency’s response to significant safety issues identified with the use of marketed drugs. 
For each significant safety issue, FDA creates a “tracked safety issue” within DARRTS that allows staff, among other things, to generate a workplan and assign responsibilities for managing these issues, as well as update their status. While the system contains documents describing specific recommendations and safety actions, an official told us that it does not, as we recommended, allow FDA to systematically track how issues were resolved and whether OSE’s recommendations were implemented. Marcia Crosse, (202) 512-7114, [email protected]. In addition to the contact named above, Geraldine Redican-Bigott, Assistant Director; William Hadley; Cathleen Hamann; Rebecca Hendrickson; Hannah Sypher Locke; Lisa Motley; Coy J. Nesbitt; and Suzanne Worth made key contributions to this report.
There have been long-standing concerns regarding the Food and Drug Administration's (FDA) oversight of postmarket drug safety. In 2006, GAO reported that FDA had not clearly defined the roles of two offices involved in making decisions about postmarket safety--the Office of New Drugs (OND) and the Office of Surveillance and Epidemiology (OSE). GAO and others reported additional concerns such as limitations in the data FDA relies on to identify postmarket drug safety issues and the systems it uses to track such issues. At that time, GAO made recommendations, including that FDA improve the independence of its program for resolving scientific disputes related to postmarket drug safety. In 2007, legislation further expanded FDA's postmarket responsibilities. This report examines the steps that FDA is taking to (1) enhance its processes for making decisions about the safety of marketed drugs, (2) improve access to data that help the agency identify drug safety issues, and (3) build its capacity to fulfill its postmarket drug safety workload. GAO reviewed FDA policies and planning documents, and interviewed FDA officials. FDA is beginning to address previously identified weaknesses in its oversight of postmarket drug safety issues, but challenges remain. The agency is changing its postmarket decision-making process as part of its Safety First Initiative, which includes formalizing interactions between OND and OSE and providing OSE with added responsibilities. The one authority FDA transferred from OND to OSE is a premarket review responsibility. FDA officials said the agency plans to transfer authority for two postmarket responsibilities for reviewing certain types of drug safety studies, but the agency does not have a time frame for their transfer. Officials said that OSE must still gain experience leading the one transferred responsibility and expand its staff before it can assume these additional responsibilities. While most of the OSE and OND employees GAO interviewed indicated that OSE's role in managing safety issues has increased since 2006, most OSE employees GAO interviewed said that OND's perspective still carries more weight in decision making. OND recently created safety management positions in each of its 17 divisions; OSE expanded its similar positions from 9 to 25, although an employee said turnover has made it difficult for the OSE managers to gain experience. FDA is also revising its program for resolving scientific disputes, but these changes have not increased its independence, as GAO recommended. FDA plans to implement new data systems and is increasing access to external data to assist with drug safety decisions. FDA plans to implement new systems in 2010 to improve the timeliness, quality, and analysis of reports of adverse events associated with human drug use. FDA has also increased funding for contracts with private companies and is in the early stages of forming partnerships with federal data holders to access external data. As mandated in the 2007 legislation, FDA is developing the Sentinel System, a network of external data providers intended to enhance drug safety surveillance, but the agency is in the early stages of developing it. FDA faces challenges meeting an expanding workload. The agency indicated that expanded responsibilities resulting from the 2007 legislation increased its workload, and both OND and OSE employees described difficulties meeting their responsibilities. 
FDA indicated that since fiscal year 2008, OND staff increased from 736 to 928 and OSE staff increased from 114 to 193. However, an agency review suggests that OSE may still need to more than double its staff of 193 by fiscal year 2011 to meet its new responsibilities. Although OSE has increased its staff, officials cited hiring challenges, such as competition from the private sector, that may make it difficult to hire staff quickly enough to meet the increasing workload. FDA also expects to complete a growing number of drug safety studies, but technological and staffing challenges limit its capacity to conduct these studies. To assist its decision making, FDA has increasingly sought advice from members of its external drug safety advisory committee. However, the agency has encountered difficulty filling several committee vacancies. An official said FDA is reviewing candidates with the goal of filling these vacancies as soon as possible.
Export credits are financing arrangements designed to mitigate risks to buyers and sellers associated with international transactions. Export credits generally take the form of direct loans, loan guarantees, and export credit insurance, and may be short-term (0-1 year), medium-term (1-7 years), or long-term (more than 7 years). (See textbox below.) Buyers and sellers in international transactions face unique risks, such as foreign exchange risk, difficulties in settling disputes when damages to shipments occur, or instability in the buyer's country. For these reasons, lenders may be reluctant to finance a buyer's purchase of foreign goods. Export credit products are meant to facilitate international transactions by mitigating these risks. Official ECAs are organizations that provide export credits with explicit government backing, where either the government, or the government-owned ECA, assumes the risk and is financially liable for reimbursing the exporter or the lending institution if the buyer fails to pay.

ECA Export Credit Products Defined

Export credit insurance: An insurance policy that protects the exporter from the risk of nonpayment by foreign buyers for commercial and political reasons.

Loan guarantee: An ECA guarantees a lender's financing to an international buyer of goods or services, promising to pay the lender if the buyer defaults.

Direct loan: The ECA makes a fixed-rate loan directly to an international buyer of goods and services.

Interest make-up: In lieu of making direct loans, an ECA pays a lender the difference between the OECD minimum interest rate and commercial interest rates.

Ex-Im, the official export credit agency of the United States, is an independent government agency operating under the Export-Import Bank Act of 1945, as amended. Ex-Im currently has about 400 employees. Its mission is to support U.S. exports and jobs by providing export financing on terms that are competitive with those of official export credit support offered by other governments. Since fiscal year 2008, Ex-Im has been "self-sustaining" for appropriations purposes, financing its operations from receipts collected from its borrowers. Ex-Im provides export credit insurance, direct loans, and loan guarantees in support of U.S. exports. In fiscal year 2011, Ex-Im authorized $32.7 billion: $7.0 billion in export credit insurance, $6.3 billion in direct loans, and $19.4 billion in loan guarantees. Ex-Im has a risk exposure limit of $100 billion, meaning that the total outstanding value of all loans, guarantees, and insurance contracts cannot exceed this amount; at the end of fiscal year 2011, Ex-Im had a total exposure of $89.2 billion.

The other G-7 countries, which include some of the largest exporters, all have at least one ECA. See figure 1. G-7 ECAs differ in the magnitude and types of their activities. All offer medium- and long-term officially supported export credits. According to Ex-Im, this financing is subject to the most intense international competition, where the support of an ECA can influence who wins overseas deals. Ex-Im's annual competitiveness reports compare ECAs on the basis of their medium- and long-term export credit support programs. ECAs also can provide other products and services in addition to these medium- and long-term officially supported export credits; some of the G-7 ECAs offer short-term export credits, market-based export credits (called "market windows"), and other non-export credit products such as investment insurance.
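As a simple illustration of two mechanics described above, the sketch below computes a hypothetical interest make-up payment and checks a new authorization against the risk exposure limit. The interest rates and the flat annual-interest formula are assumptions for illustration only; the $100 billion limit and the fiscal year 2011 exposure figure come from this report.

def interest_make_up(outstanding_balance, commercial_rate, oecd_minimum_rate):
    # The ECA pays the lender the gap between the commercial rate and
    # the OECD minimum (CIRR) rate on the outstanding balance.
    return outstanding_balance * max(commercial_rate - oecd_minimum_rate, 0.0)

def within_exposure_limit(current_exposure, new_authorization, limit=100e9):
    # Total outstanding loans, guarantees, and insurance contracts
    # cannot exceed the risk exposure limit ($100 billion for Ex-Im).
    return current_exposure + new_authorization <= limit

# Hypothetical $50 million balance, 5.0 percent commercial rate, and
# 3.2 percent OECD minimum rate: a $900,000 make-up payment per year.
print(round(interest_make_up(50e6, 0.050, 0.032)))  # 900000

# Fiscal year 2011: $89.2 billion of exposure against the $100 billion cap.
print(within_exposure_limit(89.2e9, 5e9))   # True
print(within_exposure_limit(89.2e9, 12e9))  # False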
These varied product offerings can complicate comparisons among institutions, as some ECAs offer products that are also offered by other types of institutions, such as development or finance institutions, in other countries. ECAs do not typically compete with one another in the area of short-term credits. Figure 2 shows each ECA's total new business in 2010, providing a comparison between medium- and long-term officially supported export credits and other new business in that same year. Germany was the largest provider of medium- and long-term export credits, followed by France and the United States. Japan's two ECAs, combined, had the largest amount of total new business in 2010, but only a very small portion was for officially supported medium- and long-term export credits; the remainder included other products such as overseas investment loans, untied loans, and overseas untied loan insurance. This was also true of Canada's ECA, EDC, which had the second highest volume of total new business; a large proportion of EDC's new business was attributable to short-term credit insurance. The G-7 ECAs have historically accounted for the majority of medium- and long-term officially supported export credits, according to Ex-Im. The share of national exports financed by official export credit agencies is not large; on average, medium- and long-term ECA financing as a share of total exports for each of the G-7 countries in 2008 was 0.6 percent. The United Kingdom's (UK) share was lowest at 0.1 percent, and Italy's was the highest at 1.2 percent; the remaining G-7 countries ranged between those values. However, ECAs do play a large role in certain sectors, such as aircraft. According to Ex-Im, at its peak in 2009, ECA financing represented about 40 percent of the total worldwide market for aircraft financing. In addition, the recent financial crisis increased the amount of ECA support for exports; most G-7 ECAs saw notable increases in the volume of their medium- and long-term officially supported export credits starting in 2008 or 2009 because private sector lenders and insurers were either unwilling or unable to support transactions on their own. See figure 3 for estimates of the volume of medium- and long-term officially supported export credits provided by each G-7 country over the past 5 years.

The OECD Arrangement on Officially Supported Export Credits is a set of nonbinding rules among some OECD countries concluded in 1978 amid increases in ECAs' provision of officially supported export credits. The purpose of the Arrangement is to provide a framework for the use of officially supported export credits; to promote a level playing field, where competition is based on the price and quality of the exported goods and not the financial terms provided; and to provide transparency over programs and transactions. Participants to the Arrangement are Australia, Canada, the European Union (EU), Japan, South Korea, New Zealand, Norway, Switzerland, and the United States. Other countries may join following invitation; OECD membership is not required. In addition, countries may belong to one or more of the Arrangement's sector agreements—for example, Brazil is a member of the Aircraft Sector Understanding—without being a full-fledged participant. The Arrangement applies to officially supported export credits with repayment terms of 2 years or more.
It places limitations on the terms and conditions, such as interest rates, length of repayment terms, and risk fees, of export credits that benefit from official support, and it also contains a variety of reporting requirements to ensure transparency. Another requirement is a down payment of 15 percent of an export contract's value. The Arrangement and its various sector agreements are negotiated among participants and updated on an as-needed basis.

In addition to the OECD Arrangement, another export credit committee at the OECD, the Working Party on Export Credits and Credit Guarantees (Export Credit Group), was set up in 1963. Its general objectives are to evaluate export credit policies, identify and resolve problems by multilateral discussion, work out common guiding principles, and improve cooperation between countries. To date, the Export Credit Group has been the venue for important export credit agreements on antibribery, environmental screening, and sustainable lending. All OECD countries except Chile and Iceland are members.

The U.S. Department of the Treasury's Office of Trade Finance has lead responsibility within the U.S. government for the development, implementation, and enforcement of international trade and aid finance policy, and its primary goal is to create and maintain a market-based, competitive environment in which governments' financing of national exports contains minimal subsidies. The Office of Trade Finance leads the U.S. delegation to the OECD Arrangement. Other members of the delegation include Ex-Im, the Departments of Commerce and State, the Office of the U.S. Trade Representative, the U.S. Agency for International Development, the U.S. Trade and Development Agency, and other agencies whose programs or roles might be affected by the negotiations.

Ex-Im is different from other G-7 ECAs in several significant ways, including its mission, which is explicitly focused on creating domestic jobs through exports. Like four other G-7 ECAs, Ex-Im acts as a lender/insurer of last resort, while Canada's and Italy's ECAs have commercial market orientations and are not restricted from competing with the private market. As an independent government agency, Ex-Im also differs in governance and organization type from other ECAs, which range from government departments to private companies contracted by governments. Ex-Im and the other G-7 ECAs offer different mixes of export credit and other financing products, which provide them with different tools to help exporters. Ex-Im has an advantage over some of the other ECAs because it offers direct loans, which were useful during the financial crisis.

Ex-Im's mission strongly emphasizes supporting domestic jobs through exports, which is unique among the G-7 ECAs (see table 1). This aim underlies certain Ex-Im policies, such as its economic impact analysis requirement and its domestic content policy. Other ECA missions range from promoting and supporting domestic exports to securing natural resources. Closely related to an ECA's mission is its "market orientation"—whether it supplements or competes with private markets for export credit support. Most G-7 ECAs are directed to supplement the private market; that is, they play a role as a "lender or insurer of last resort," providing financing, guarantees, or insurance for transactions that are too risky or are undesirable for commercial support.
In addition, according to G-7 ECA officials, European ECAs must abide by EU law prohibiting them from supporting short-term export credits to other EU member states and most OECD countries—transactions that the private market is willing to support. Ex-Im’s role as a lender of last resort is emphasized in its charter. It must report the purpose for each transaction it supports, either to provide financing where private sector financing is unavailable or to meet foreign competition. Canada’s ECA, in contrast, has a commercial market orientation and is not restricted from competing with the private sector. Italy’s ECA, while having to abide by EU law, also has a commercial market orientation, according to Italian officials. G-7 export credit agencies range from government agencies to private companies contracted by governments, with different organization types, governing structures, and processes for approving transactions (see table 1). The ECAs that are managed by private companies, such as those in France and Germany, experience more direct political oversight, as their governments take a more direct role in approving transactions and can take policy considerations into account on an individual transaction basis. G-7 ECAs each offer a different mix of export credit and other financial products. In general, a greater mix of products allows an ECA more flexibility in responding to its customers’ needs, particularly during an economic crisis. Most ECAs offer standard export credit products such as export credit insurance and loan guarantees. However, ECAs may offer additional export credit products, such as direct loans (United States, Japan, and Canada) and interest make-up programs, where the ECA pays the difference between commercial lending rates and fixed OECD minimum rates (Italy, France). ECAs also may offer products that are not technically “export-related,” but that, according to Ex-Im, could possibly be used in lieu of or in addition to standard export credits, such as investment insurance and untied lending. Figure 4 shows a comparison of selected export credit and other financial products offered by G-7 ECAs. Ex-Im’s provision of direct loans proved to be useful during the financial crisis, when commercial financing was expensive or unavailable. While its direct lending program was little used in the early 2000s, Ex-Im experienced a surge in demand for direct loans over recent years, from $350 million in fiscal year 2008 (3 percent of its total authorizations) to $6.3 billion in fiscal year 2011 (19 percent of its total authorizations). See figure 5. The Japanese and Canadian direct loan programs also experienced increases. The G-7 European ECAs, which do not have direct loan programs, sought alternative solutions to mitigate the lack of such financing. Several U.S. officials and a G-7 official stated that Ex-Im’s direct loan program gave it an advantage in responding to the needs of exporters and their customers during the crisis. Ex-Im receives specific mandates from Congress and generally operates under more policy requirements than other G-7 ECAs. Ex-Im’s mandates include specific targets from Congress for small business and environmentally beneficial exports. Other G-7 ECAs may have broad directives from their governments or ministries to focus on these areas. Ex-Im also faces additional mandates and legal requirements that other ECAs generally do not. 
For example, Ex-Im is statutorily required to perform an economic impact analysis to assess whether a project will negatively affect U.S. industries. Ex-Im receives mandates from Congress that include specific targets in the areas of small business and environmentally beneficial exports, whereas some other G-7 ECAs have been given broad directives to focus on these areas by their governments or ministries. Specifically, four G-7 ECAs have received external directives to encourage small business exporters, and two ECAs have received external directives to support environmentally beneficial exports (fig. 6). According to OECD officials, Ex-Im is unique in that Congress gives it explicit policy goals to pursue in addition to its general mandate to support domestic exports. By contrast, other ECAs generally receive limited specific policy guidance from their respective legislatures and oversight ministries.

Since the 1980s, Congress has required that Ex-Im make available a certain percentage of its export financing for small business. In 2002, Congress established several new requirements for Ex-Im relating to small business, including increasing the small business financing requirement from 10 to 20 percent. Related congressional directives have included requirements to create a small business division and to define standards to measure the bank's success in financing small businesses. In fiscal years 2002-2005, Ex-Im did not reach the goal, with its small business financing share ranging from 16.9 percent to 19.7 percent. From fiscal years 2006 to 2010, Ex-Im met the 20 percent small business financing target, but did not in 2011. Ex-Im's small business financing share ranged from about 27 percent in fiscal year 2007 to 18.4 percent in fiscal year 2011. Because of large increases in Ex-Im's total financing in 2009 and 2010, meeting the 20 percent target has meant large annual increases in small business financing. For example, Ex-Im small business financing increased by about 58 percent between 2008 and 2010. According to Ex-Im officials, the bank allocates significant resources to meeting its small business mandate. About 27 staff work exclusively on small business marketing, primarily in regional offices, according to Ex-Im officials. Ex-Im has also worked with private insurance companies to offer insurance to small business exporters that have had difficulty obtaining short-term export credit since the financial crisis.

Congress also mandates that Ex-Im support environmentally beneficial exports and provides specific targets for such exports. However, specific targets in this area have greatly exceeded Ex-Im's financing. In fiscal year 2008, Congress directed Ex-Im to allocate 10 percent of its annual financing to renewable energy and environmentally beneficial products and services. For fiscal years 2009 and 2010, Congress directed Ex-Im to allocate 10 percent to a subset of those exports—renewable energy and energy efficient end-use technologies. We previously reported that Ex-Im had not come close to meeting this 10 percent target when applied to all of its environmentally beneficial financing. Ex-Im has reported significant increases in its renewable energy financing, from $101 million in 2009 to $332 million in 2010. In 2011, Ex-Im authorized about $889 million in environmentally beneficial exports, of which about $721 million was for renewable energy.
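The percentage-of-financing targets described above reduce to simple share arithmetic, sketched below. This is illustrative only: the fiscal year 2011 figures come from this report, and the check simplifies the environmental target, whose precise coverage varied by year as described above.

def share_of_financing(category_authorizations, total_authorizations):
    # Share of total financing going to a mandated category.
    return category_authorizations / total_authorizations

def meets_target(share, target):
    return share >= target

TOTAL_FY2011 = 32.7e9  # total authorizations reported earlier

# Small business: the 18.4 percent fiscal year 2011 share missed the
# 20 percent target.
print(meets_target(0.184, 0.20))  # False

# Environmentally beneficial exports: about $889 million in 2011,
# far short of the 10 percent target.
env_share = share_of_financing(889e6, TOTAL_FY2011)
print(f"{env_share:.1%}", meets_target(env_share, 0.10))  # 2.7% False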
In contrast to Ex-Im’s mandates, some ECAs are broadly directed by their governments or ministries to support small business exports and environmentally beneficial exports with, generally, no specific targets to meet, according to G-7 officials. Specifically: The British government asks the Export Credits Guarantee Department to promote small and medium-sized exporters, but the guidance ECGD receives is suggestive, rather than a specific directive, according to officials at the ECA. The French Ministry of Finance provided a target for its export credit agency, Coface, to support 10,000 small and medium-sized exporters by 2012, according to Coface officials. However, officials stated that this target only applies to one product—market survey insurance— and there are many other products that Coface uses to promote exports of small businesses that are not associated with specific external targets. Japan’s Parliament also asks JBIC to support small and medium- sized businesses, but provided a general directive to support such exporters and did not include specific targets, according to JBIC officials. Italy’s ECA, SACE, is directed by an interministerial decree to designate renewable energy exports as a strategic sector, but no specific targets are provided, according to SACE officials. Some ECAs that do not have external directives from their governments to support small businesses and environmentally beneficial exports have developed internal initiatives to support such exports. For example, in January 2011, Germany’s ECA introduced a new product that provides a fast-tracked application process allowing exporters to receive export credit coverage for transactions of up to 5 million euros in 4 days. Additionally, Canada’s ECA has developed internal initiatives to support environmentally beneficial exports. For example, a team within Export Development Canada has been tasked by EDC executives to come up with a clean technology strategy, and the team is in the early stages of putting a strategy in place, according to EDC officials. Ex-Im has additional mandates and legal requirements that other ECAs generally do not. These include (1) promotion of exports to sub-Saharan Africa, (2) requirements to ship certain exports on U.S. flag carriers, (3) carbon policy, (4) economic impact analysis, and (5) congressional notification. All of these requirements, except for the carbon policy, are the result of congressional mandates. Although most ECAs do not have similar requirements (see fig. 7), as noted above, ECAs have different organizational and governance structures. These differences can affect how a government exercises policy considerations through its ECA. For example, Germany has an interministerial board that approves individual transactions and takes into account policy considerations on a case-by- case basis; in contrast, Ex-Im is an independent agency and Congress exercises policy considerations through programmatic mandates, according to a Treasury official. Specifically, Ex-Im must consider the following mandates and legal requirements when financing transactions: Promoting exports to sub-Saharan Africa. Congress mandates that Ex-Im promote the expansion of its financial commitments in sub- Saharan Africa under Ex-Im’s loan, guarantee, and insurance programs. No other G-7 ECAs have specific external requirements to support exports to sub-Saharan Africa. In 2010, Ex-Im financed 132 transactions totaling $812 million in 20 sub-Saharan African countries. 
Ex-Im dedicates two full-time employees to promoting exports to sub-Saharan Africa; others work part-time on the issue.

Requirement to ship certain exports on U.S.-flagged carriers. Certain exports financed by Ex-Im must be transported on U.S.-flagged vessels, a requirement the other G-7 ECAs generally do not have (see fig. 7).

Carbon policy. In 2002, Ex-Im's energy financing, specifically its financing for fossil fuel projects, was the subject of a lawsuit, Friends of the Earth, Inc., et al. v. Spinelli, et al. (Civ. No. 02-4106, N.D. Cal.), brought against the bank and the Overseas Private Investment Corporation (OPIC) by environmental nongovernmental organizations and four U.S. cities. The lawsuit asserted that Ex-Im and OPIC provided assistance for fossil fuel projects that caused greenhouse gas emissions without complying with provisions of the National Environmental Policy Act requiring assessments of their projects' impacts on the U.S. environment resulting from their emissions. The lawsuit was settled in 2009 with Ex-Im agreeing to develop and implement a carbon policy for Ex-Im's financing; provide the Board of Directors with additional information about carbon dioxide emissions associated with potential fossil fuel transactions; and take a leadership role in consideration of climate change issues, promoting emissions mitigation measures within the Organisation for Economic Cooperation and Development and among export credit agencies. One transaction, valued at $887 million, has been subject to the enhanced due diligence review provided for under the carbon policy.

Economic impact analysis. Congress requires Ex-Im to perform an economic impact analysis to assess whether a project will negatively affect U.S. industries either by reducing demand for goods produced in the United States or by increasing imports to the United States. Other G-7 ECAs do not have similar requirements, according to G-7 ECA officials. As we have previously reported, Ex-Im uses a screening process to identify projects with the most potential to have an adverse economic impact, and then subjects the identified projects to a detailed analysis. Of medium- and long-term transactions Ex-Im authorized in 2010, 82 transactions, valued at $2.8 billion, were subject to Ex-Im's economic impact analysis, with a small percentage of those subject to detailed analysis.

Congressional notification. Congress requires Ex-Im to submit a detailed statement describing and explaining a transaction to Congress prior to the Board of Directors' final approval if the transaction is (1) in an amount equal to or greater than $100,000,000 or (2) related to nuclear power or heavy water production facilities. According to Hermes officials, Germany also sends a notification to the German parliament's Committee on Budgets for transactions exceeding 1 billion euros. According to Ex-Im, 38 transactions valued at about $16 billion in 2010 were sent to Congress before the Board of Directors' final approval of the transactions.
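The congressional notification requirement amounts to a simple decision test, sketched below; the example amounts are hypothetical except the $887 million transaction mentioned above.

NOTIFICATION_THRESHOLD = 100_000_000

def requires_congressional_notification(amount, nuclear_or_heavy_water=False):
    # A detailed statement must go to Congress before final Board
    # approval if the amount is $100,000,000 or more, or if the
    # transaction relates to nuclear power or heavy water production.
    return amount >= NOTIFICATION_THRESHOLD or nuclear_or_heavy_water

print(requires_congressional_notification(887_000_000))       # True
print(requires_congressional_notification(50_000_000))        # False
print(requires_congressional_notification(50_000_000, True))  # True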
Ex-Im's domestic content requirements are generally higher and less flexible than those of other ECAs. To fully finance a medium- or long-term transaction, Ex-Im requires that 85 percent of the value of the transaction be supplied domestically. Other G-7 ECAs generally require between zero and 51 percent domestic content. Additionally, key elements of Ex-Im's domestic content policy have remained relatively unchanged over two decades; at the same time, manufacturing patterns have evolved toward greater integration in production, and data show that the domestic content of exports has decreased. Several ECAs have modified their policies in recent years, often citing the increasingly global content of industrial production as a primary reason for the change.

Ex-Im's domestic content policy places limits on the amount of foreign goods and services making up the exports it finances. Domestic content refers to the portion of an exported good or service that is sourced domestically. Ex-Im's policy is not the result of a statutory requirement; according to Ex-Im, the policy reflects an attempt to balance the interests of multiple stakeholders and Ex-Im's mission to support U.S. jobs through export financing. Ex-Im's domestic content policy for medium- and long-term transactions limits its level of support to the lesser of (1) 85 percent of the total value of all eligible goods and services in the U.S. export transaction, or (2) 100 percent of the U.S. content in all eligible goods and services in the U.S. export transaction. In effect, Ex-Im requires 85 percent domestic content to receive full financing for medium- and long-term transactions but does not require a minimum amount of domestic content to receive a portion of financing. Two examples of how the domestic content policy affects the level of support Ex-Im can provide appear in the sketch below.

Ex-Im has separate domestic content requirements for short-term transactions—the percentage required to receive maximum coverage is lower than for medium- and long-term transactions and the calculation method differs. The short-term policy is generally more lenient for small businesses than for other exporters. For example, small businesses can satisfy the short-term domestic content requirement based on aggregating all of the products in an export contract, while non-small businesses must meet the minimum domestic content threshold on a product-by-product basis. In addition, small businesses include indirect costs in the calculation of domestic content. According to Ex-Im, the difference reflects Ex-Im's directive to consider the unique business requirements of small businesses.
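The sketch below works through the lesser-of rule for medium- and long-term transactions with two hypothetical contracts; the dollar figures are invented for illustration.

def max_exim_support(contract_value, us_content_value):
    # Support is the lesser of 85 percent of the contract value or
    # 100 percent of the U.S. content in the transaction.
    return min(0.85 * contract_value, us_content_value)

# Example 1: a $10 million contract with 95 percent U.S. content
# ($9.5 million) receives the full 85 percent, or $8.5 million.
print(max_exim_support(10e6, 9.5e6))  # 8500000.0

# Example 2: a $10 million contract with 70 percent U.S. content
# ($7 million) is capped at the U.S. content, or $7 million: partial
# support rather than none, since no minimum content is required.
print(max_exim_support(10e6, 7e6))    # 7000000.0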
Other G-7 ECAs have lower domestic content requirements than Ex-Im, generally requiring between zero and 51 percent domestic content (see table 2). Some ECAs with domestic content policies have more flexibility in implementing their policies by allowing for exceptions to their minimum domestic content requirements on a transaction-by-transaction basis. For example, according to Japanese officials, Japan's ECAs require a minimum of 30 percent domestic content, but the institutions can make exceptions for projects deemed to be of strategic importance. Ex-Im makes no exceptions to its content policy for specific transactions, except for those involving tied aid or raw materials. According to Canadian and Italian officials, Canada and Italy do not require a certain level of domestic content; rather, both consider domestic content in the context of a broad range of factors to determine whether supporting a transaction benefits national interest. EDC's Canadian Benefits policy considers the research and development spending by the company and the potential for increased access to global markets, among other factors, when deciding to finance a transaction (see textbox). According to Canadian officials, the Canadian Benefits model is designed to capture all benefits that accrue from Canadian companies' involvement in international trade.

In the early 2000s, Export Development Canada implemented a national benefits policy rather than a domestic content requirement, referred to as the Canadian Benefits model. With this model, EDC measures its contribution to Canada's economy through the economic benefits generated by the exports and investments it supports. EDC takes the following steps under the Canadian Benefits model:

1. Calculate economic benefits. The economic benefits are based on the amount of gross domestic product (GDP) in the exports it finances by determining the amount of Canadian content in the export. The Canadian content is provided by Statistics Canada's Input/Output Model, which tracks the production chain of Canadian industries, and identifies and measures inputs and outputs.

2. Calculate base grade. EDC then calculates a base grade by dividing the level of GDP generated by the transaction by the amount of EDC support that was requested. EDC assigns letter grades A-F according to the resulting percentages.

3. Identify upgrades. Where a transaction generates a base grade of less than B, additional benefits are considered in order to upgrade the transaction. Each applicable secondary benefit boosts the base grade by one letter grade. (An F rating cannot receive upgrades.) Reasons for upgrades include the following:

Above average research and development spending by the Canadian company.
The transaction allows increased access to global markets.
The transaction has an above average employment impact.
The Canadian exporter is a small or medium-sized business.
The transaction supports an environmentally beneficial product.

Although transactions rarely receive low final grades, low grades do not prevent EDC from financing the transaction, according to EDC officials.
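The grading steps in the textbox can be sketched as follows. The letter-grade cutoffs are hypothetical, since the report does not give EDC's actual percentage thresholds, and the upgrade handling is simplified.

# Hypothetical cutoffs for the ratio of GDP generated to EDC support
# requested; EDC's actual percentages are not given in this report.
HYPOTHETICAL_CUTOFFS = [(0.80, "A"), (0.60, "B"), (0.40, "C"), (0.20, "D"), (0.05, "E")]

def base_grade(gdp_generated, edc_support_requested):
    # Steps 1 and 2: grade the ratio of economic benefits to support.
    ratio = gdp_generated / edc_support_requested
    for cutoff, grade in HYPOTHETICAL_CUTOFFS:
        if ratio >= cutoff:
            return grade
    return "F"

def apply_upgrades(grade, applicable_benefits):
    # Step 3: each applicable secondary benefit lifts the grade one
    # letter; an F cannot be upgraded, and grades of B or better are
    # not considered for upgrades.
    order = ["F", "E", "D", "C", "B", "A"]
    if grade == "F" or order.index(grade) >= order.index("B"):
        return grade
    return order[min(order.index(grade) + applicable_benefits, len(order) - 1)]

# A transaction generating $30 million of GDP against $100 million of
# requested support grades D under these cutoffs; two secondary
# benefits (for example, above average research and development
# spending and a small business exporter) lift it to B.
grade = base_grade(30e6, 100e6)
print(grade, "->", apply_upgrades(grade, 2))  # D -> B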
In addition to the domestic content policies presented in table 2, ECAs' policies for supporting local costs can also affect the level of support they can provide related to goods and services that are not sourced domestically. Local costs are for goods and services manufactured or originated in the buyer's country, such as on-site construction costs. Ex-Im's policy allows it to support up to 30 percent of the value of the export contract in local costs, in addition to 15 percent foreign content. In contrast, according to Ex-Im, the other G-7 ECAs generally include local costs in their calculation of foreign content, and this can reduce the gap between the level of foreign content that Ex-Im can support and that of its foreign counterparts. Ex-Im reported that, in 2010, 21 percent of its non-aircraft medium- and long-term transactions contained some local cost support.

The degree to which countries rely on domestic components in producing their exports differs. U.S. exports generally have higher domestic content than those of other G-7 countries. OECD data show that domestic content accounts for less than 75 percent of manufacturing exports for five of the seven G-7 countries, but accounts for over 80 percent of manufacturing exports for the United States, as well as for Japan.

While Ex-Im has modified its method for calculating the amount of domestic content in a transaction, its minimum threshold for receiving full financing for medium- and long-term transactions has not changed since 1987. Before 1987, Ex-Im financed only the domestic portion of medium- and long-term transactions. If less than 100 percent of an export's content was domestic, the foreign part would be carved out and Ex-Im would finance 85 percent of the domestic portion. In 1987, Ex-Im adopted its current policy to allow transactions with up to 15 percent foreign content to receive 85 percent of the total contract value. Ex-Im's rationale for allowing up to 15 percent foreign content was that the 15 percent down payment required by the OECD Arrangement would cover the portion of foreign content, according to Ex-Im officials. In 2001, Ex-Im modified its method of calculating the domestic content of exports in medium- and long-term transactions. Previously, Ex-Im required exporters to report the domestic content of individual items in a contract, line by line. In 2001, Ex-Im moved to a whole contract value calculation where exporters report the domestic content of the contract's entire value, rather than item by item. This allowed Ex-Im to finance contracts that may have individual items that contain less than 85 percent domestic content as long as the total amount in the contract has 85 percent or more domestic content. There have not been subsequent changes to the policy for medium- and long-term transactions; the sketch below contrasts the two calculation methods.
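The sketch below contrasts the two calculation methods with a hypothetical two-item contract; the line-by-line function is a simplified reading of the pre-2001 approach.

def eligible_whole_contract(items, threshold=0.85):
    # Current method: aggregate domestic content across the whole
    # contract; items are (value, domestic content share) pairs.
    total = sum(value for value, _ in items)
    domestic = sum(value * share for value, share in items)
    return domestic / total >= threshold

def eligible_line_by_line(items, threshold=0.85):
    # Simplified reading of the pre-2001 method: each line item is
    # judged against the threshold on its own.
    return all(share >= threshold for _, share in items)

# An $8 million item at 95 percent U.S. content plus a $2 million item
# at 60 percent works out to 88 percent domestic content overall.
items = [(8e6, 0.95), (2e6, 0.60)]
print(eligible_whole_contract(items))  # True: 88 percent clears 85 percent
print(eligible_line_by_line(items))    # False: one item falls below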
Production patterns have changed in the past few decades as global manufacturing has become more integrated. Companies increasingly rely on parts sourced from other countries, and as a result, the domestic content in exported goods has declined. OECD data show that from 1995 to 2005 the percentage of domestic content of manufactured U.S. exports declined from 87.4 percent to 82.4 percent, a 5 percentage point decline in 10 years (see fig. 9). Ex-Im does not differentiate among sectors in its domestic content policy, although domestic content in U.S. exports varies by sector. Among manufactured exports, medical, precision and optical instruments showed a greater decline in domestic content, almost 6 percentage points, than food products, beverages, and tobacco, which experienced a 3.6 percentage point decrease. As figure 9 shows, as of 2005, the domestic content of U.S. exports of motor vehicles, trailers, and semitrailers was around 72 percent, and other transportation equipment, which includes aircraft, was 83.5 percent. Given varying levels of domestic content by product and industry, Ex-Im may be unable to provide full financing for exports in certain industries if trends continue.

Domestic content in Ex-Im transactions fluctuated from 1997 to 2010, showing an overall downward trend. The average domestic content for medium- and long-term transactions containing foreign content was 91 percent in 1997 and 86 percent in 2010 (see figure 10). This value is near the 85 percent minimum domestic content required for a transaction to receive full Ex-Im financing.

Exporters and lenders have expressed concerns about Ex-Im's policy, although obtaining clear evidence about the policy's impact is difficult. In cases where domestic content falls below 85 percent, Ex-Im's policy could potentially have a negative impact on U.S. competitiveness by deterring exporters from using Ex-Im's products. According to respondents of Ex-Im's most recent survey concerning its competitiveness, Ex-Im's content policy is the bank's most significant impediment to competitiveness. Exporters have urged Ex-Im to expand its criteria for support beyond domestic content and to consider support based on national interests. Exporters and lenders have also suggested that Ex-Im should explore extending exceptions to its content policy to support priority sectors, such as environmentally beneficial projects. According to Ex-Im officials, Ex-Im does not track information on exporters that have not applied for Ex-Im financing because of its domestic content policy or on deals that have been lost as a result of incomplete financing.

Co-financing is a tool that some exporters can use to address financing challenges posed by domestic content requirements, but it is not available for all transactions. Co-financing arrangements allow an exporter to offer a single ECA support package to a buyer interested in procuring products from two or more countries. The G-7 ECAs have multiple framework agreements that govern co-financing among themselves. Ex-Im officials stated that co-financing is increasingly used in situations where foreign content exceeds 15 percent and there is a gap in Ex-Im's financing coverage. In 2010, Ex-Im co-financed more than $6.5 billion in transactions, with the vast majority of transactions involving aircraft. However, co-financing is not an option for all U.S. transactions, because it requires meeting the financing requirement of another country's ECA, particularly the production of a product or service that would qualify as an export from that country.

Some ECAs have revised their domestic content policies to reflect changes in global production patterns. For example, following an evaluation 10 years ago, EDC and the Canadian government determined that EDC's 50 percent domestic content rule had become onerous, and that the global marketplace had changed, with more production involving foreign content, according to Canadian officials. EDC adopted an "integrative trade model" to reflect multiple benefits brought to Canada from international transactions. As a result, EDC moved to the Canadian Benefits policy discussed above, where exporters using little or no domestic content are eligible for support, as long as their export is determined to benefit Canada. According to UK officials, ECGD also substantially changed its policy in 2007, determining that its domestic content requirement of 60 percent was an artificial barrier and unnecessary restriction, in light of declines in the size of the UK manufacturing base, increased globalization, and multisourcing of goods under UK export contracts. It lowered the requirement to 20 percent. In 2008, Germany moved from a 90 percent domestic content requirement to its current three-tier policy that attaches various limits to differing levels of domestic content. Germany's federal government made these changes in response to the repeated appeals of German exporters, who increasingly viewed the previous system as overly restrictive in light of international competition, according to Hermes documents and officials.

There are differing views on the ultimate impact of different domestic content requirements, and limited analytical evidence is available on which to base decisions. While lowering a domestic content requirement can increase the number and type of transactions that an ECA can support, it could lessen the incentive for some companies seeking ECA support to source goods and services domestically. The potential impact on U.S. employment of any changes in the policy would depend on the balance of job gains that might accrue from supporting additional transactions against any job losses from reduced domestic content. While other ECAs have loosened their domestic content policies, Ex-Im's policy has remained relatively unchanged. Congress directs Ex-Im to provide financing that is fully competitive with the financing of its competitors.
According to Ex-Im officials, Ex-Im reviews its domestic content policy on a regular basis to identify ways to increase flexibilities for exporters. However, Ex-Im has not conducted a systematic review of its policy in recent years to assess to what extent the overall impact of the policy is consistent with Ex-Im's mission of supporting U.S. jobs.

While the scope of the OECD Arrangement has expanded to cover additional aspects of officially supported export credit terms among member ECAs, the increasing activities of nonmembers, particularly China, threaten the future ability of the agreement to provide a level playing field for exporters. Several agreements establish guidelines for pricing and reporting on export credit support. However, these agreements apply only to officially supported activities of participant ECAs. Several countries, including Brazil, China, and India, have growing ECA financing activity but are not part of the Arrangement. Officials from several G-7 ECAs and other institutions identify engagement with these countries, to increase transparency and promote broader discussion of export credit issues, as a major challenge that must be addressed if the Arrangement is to remain effective.

The scope of the OECD Arrangement has expanded over time to regulate additional aspects of participating countries' use of officially supported export credits, decreasing export subsidies in the process. Since the Arrangement was formed in 1978, there have been several important agreements among member countries that have regulated pricing or other aspects of export credit support. These agreements include the following:

Minimum interest rates. Arrangement members adopted a system of minimum interest rates, which has reduced the interest rate subsidy component in ECA support. These rates, called Commercial Interest Reference Rates (CIRR), are adjusted on a monthly basis to reflect commercial lending rates for borrowers in the domestic market of the relevant currency.

Minimum premium rates (risk fees). Agreements on minimum premium rates, or risk fees, are designed to encourage convergence in pricing, further decreasing opportunities for subsidies among ECAs. The first agreement on risk-based premium rates, in 1997, established a set of minimum premium rates to reflect country credit risk. Countries were free to charge higher rates than these minimums. A new agreement, effective as of September 2011, expanded on this earlier agreement by addressing buyer credit (commercial) risk as well as country risk. This agreement reduces ECAs' flexibility in pricing commercial transactions, thus further narrowing differences in ECA financing terms.

Tied aid. Two agreements have restricted the use of tied aid, that is, aid conditioned on the purchase of goods and services from the donor country. In 1987, Arrangement members agreed to raise the minimum concessionality level for tied aid permitted under the Arrangement to 35 percent. In 1991, a further agreement prohibited tied and partially untied aid to richer developing countries, as well as for projects that were considered commercially viable.

Sector agreements. Sector agreements have been reached for civil aircraft, nuclear power plants, renewable energies and water projects, and ships. Some of these agreements have different rules for minimum interest and premium rates and maximum repayment terms than those that apply to standard transactions through the Arrangement.
The Aircraft Sector Understanding is especially significant because it regulates aircraft support terms for ECAs of major aircraft-exporting countries, including the United States, Brazil, Canada, France, Germany, and the United Kingdom. A large share of some countries’ ECA support is in the aircraft sector. The Arrangement also has a variety of reporting requirements in conjunction with its overall and sector agreements that provide transparency about ECA activities to Arrangement members. ECAs must report all of their long-term officially supported export credit transactions to the OECD as they occur and, twice a year, report the amount of outstanding officially supported export credits. Further, separate reporting requirements apply with respect to minimum premium rates as well as the aircraft sector agreement. OECD officials said they are hoping to streamline these reporting requirements and are in the process of approving a new data-reporting directive. However, certain export credit transactions of member ECAs fall outside the Arrangement and its reporting requirements, which lessens the transparency of ECA activities. These include “market windows,” or support that an ECA provides on market terms. Canada’s ECA currently provides this type of support. The use of market windows has historically been an issue of concern for the United States, because of limited transparency and the potential for unfair advantage stemming from an ECA’s government connection. A second type of transaction outside the scope of the Arrangement is non-export credit financing activities, such as untied lending and investment finance. A majority of G-7 ECAs offer untied lending, which takes the form of loans extended to other countries for strategic reasons. While these loans are not directly linked to the purchase of exports from the lending country, the terms can take whatever form the two countries agree upon. For instance, Japan provided an untied loan to a commercial bank in Malaysia in order to provide long-term financing to Japanese companies located there, as well as local companies within their supply chain. Ex-Im officials have expressed concern about the growing use of this financing tool because of its potential linkage to exports and uncertainty about how its utilization could affect Ex-Im. Official export credits from emerging economies such as China, India, and Brazil have experienced rapid growth. As nonparticipants in the OECD Arrangement, these countries can offer terms more favorable than terms under the Arrangement. More favorable terms to buyers do not necessarily constitute subsidies—the terms may be market-based and compliant with World Trade Organization requirements—but can be more generous than those allowed by the Arrangement, according to Ex-Im. However, since these countries are exempt from the Arrangement’s requirement to report each transaction, there is uncertainty regarding the terms that they offer. As total exports from emerging economies have increased, so have their officially supported export credits. From 2006 to 2010, total exports from China, India, and Brazil increased over 60 percent, while medium- and long-term official export credits for China and Brazil are estimated to have more than doubled—and for India nearly doubled—during the same time period (see fig. 11). China is now estimated to be the largest supplier of medium- and long-term export credits. 
Ex-Im estimated that China offered $45 billion in official medium- and long-term export credits in 2010, twice as much as Germany, the largest provider among G-7 ECAs. According to its annual reports, China Ex-Im Bank provided more than $36 billion in total export credit support in 2010, more than five times the $4 billion provided in 2000. India's Ex-Im Bank experienced similar growth, increasing its activities from about $500 million in 2000 to $11 billion in 2010. Over the same time period, U.S. Ex-Im's financing increased from about $13 billion to $24.5 billion. Figure 12 compares the activities and relative growth of the Export-Import Banks of China, India, and the United States. SINOSURE, China's other ECA, also has experienced sharp growth. SINOSURE stated in its 2010 annual report that it underwrote an aggregate amount of $196.4 billion for that year, an increase of 68.5 percent over 2009. This followed growth of 85.8 percent from 2008 to 2009.

Given the increase in China's officially supported export credits, officials from some G-7 countries have expressed interest in additional information about the terms and volume of China's activities but reported that obtaining such information is difficult. Some information on total volume of export credits is provided through annual reports, but in limited detail. As discussed above, China's position outside the OECD Arrangement limits its reporting requirements relative to G-7 ECAs. Thus, it is difficult to determine the nature of China's activities and the extent to which financing terms (as opposed to other factors, such as production costs) are the key reason Chinese companies secure deals. An expert on China reported having obtained information on China's export financing activities from recipient countries rather than from China. In addition, officials from several G-7 countries told us that they obtain anecdotal information on China's activities from their exporters, who may be facing Chinese competition. One OECD official expressed the view that China will become more competitive with the G-7 ECAs over the next 10 years as the technology differential between Chinese and G-7 exports decreases.

According to officials from the OECD and several G-7 ECAs, engagement with emerging economies, especially China, on practices related to export credit financing is increasingly important and presents challenges for the OECD Arrangement and its participants. A senior OECD official stated that the rise of this export financing competition threatens the Arrangement's ability to maintain a level playing field among exporting nations. Various ECAs, governments, and the OECD have made efforts to engage China on export credit issues, including encouraging participation in various forums, but have generally reported limited success. For example, Canadian officials reported encouraging their Chinese counterparts to join multilateral forums. Japanese officials said they reach out to Chinese officials on a regular basis, including at meetings among Asian ECAs. U.S. Treasury officials noted that export credits were mentioned at the U.S.-China Strategic and Economic Dialogue, a high-level forum between U.S. and Chinese government officials. They also reported that OECD and country officials have made attempts to invite China to export credit-related meetings.
However, several ECA, government, and OECD officials reported that China is often unwilling to attend or sends lower-level representatives to these meetings, such as a recent G-20 meeting in Paris. In some cases, an ECA in an emerging economy sees an incentive to join international agreements or institutions. In 2004, Brazil participated in the negotiations on the Aircraft Sector Understanding and in 2007 joined the actual agreement. One U.S. official points to Brazil's interest in obtaining information on Canada, its primary competitor, and a desire to help shape the rules, as strong incentives that brought Brazil to the negotiating table.

Another institution, the Berne Union, which is an association of export credit and investment insurance providers, has a broad base of membership, including some of the ECAs from China, India, and Brazil. Through membership, these ECAs have agreed to follow certain principles, including a pledge not to subsidize exports. This institution may provide an additional venue by which these emerging economies can be engaged in discussions concerning export support and related issues. However, some ECA and other officials point to China's current lack of incentive to engage. OECD and other officials have stressed to China one benefit of joining the Arrangement now: the opportunity to shape the rules by which its competitors must abide.

Established with a mission to support U.S. jobs and an explicit charge to provide export financing competitive with that of other governments, Ex-Im is expected to play a key role in increasing U.S. exports, be self-sustaining in terms of its budget, and fulfill a number of policy directives beyond those of other G-7 ECAs. In terms of its volume of export credit support, Ex-Im's performance in recent years has been quite strong; the bank's total authorizations have increased steadily as demand for its services has been high during a period of global financial turmoil. Whether Ex-Im will face increasing tension between its mission and its requirements remains to be seen, but there is some evidence of that now, as the bank's small business financing share for fiscal year 2011 was below its 20 percent target for the first time in 5 years. Although small business financing grew in 2011, it grew less than Ex-Im's overall financing.

Ex-Im's domestic content requirement for receiving full medium- and long-term financing, which Ex-Im itself determines, is generally higher than that of other ECAs and less flexible. While other ECAs have loosened their domestic content policies in recent years, key elements of Ex-Im's policy remain relatively unchanged. Ex-Im officials state that Ex-Im's policy reflects its attempt to balance the interests of multiple stakeholders and its mission to support U.S. jobs. However, to what extent Ex-Im's current policy affects its support of U.S. jobs is not clear-cut. It may provide an incentive for certain exporters to buy from U.S. suppliers. On the other hand, to the degree that the requirement limits the ability of a larger number of exporters to obtain full Ex-Im financing, it may deter foreign buyers from sourcing from U.S. firms. Given these factors, and trends toward increasingly global production, a better understanding of how Ex-Im's policy may affect U.S. exporters and jobs is needed. Strong increases in export financing by several emerging countries present competitive challenges that Ex-Im alone cannot readily address.
The OECD Arrangement has made important strides toward decreasing subsidies in export credits and leveling the playing field for exporters. However, emerging economies with rapidly growing export credit support levels that are outside the Arrangement are exempt from its reporting requirements and rules and can offer terms that are more generous than parties to the Arrangement can. Member countries have taken some steps within the OECD and beyond it to engage countries including Brazil, China, and India on export credit issues. However, some acknowledge that China is not currently motivated to join any type of agreement. In particular, there is concern that the rise of China's export financing threatens the Arrangement's ability to support a level playing field among exporting nations.

To maintain Ex-Im's competitiveness and enhance its ability to support U.S. exports, we recommend that the Ex-Im Bank conduct a systematic review of its domestic content policy in the context of changing production patterns to ensure this policy effectively serves the objective of creating U.S. jobs while also providing financing that is competitive with that of other ECAs.

To preserve and enhance the competitiveness of U.S. exports and to promote transparency, we recommend that the Secretary of the Treasury, in conjunction with Ex-Im and working with international counterparts, develop strategies to further encourage and increase engagement of emerging economy countries in discussions and agreements on export credit support.

Ex-Im and Treasury provided comments on a draft of this report. In its written comments, which are reproduced in appendix II, Ex-Im stated that GAO's findings are generally consistent with Ex-Im's findings in its 2010 Competitiveness Report and that the lack of transparency from non-OECD ECAs is the major challenge to a level playing field globally. Ex-Im did not directly address GAO's recommendation that it conduct a systematic review of its domestic content policy and its impacts but stated that it disagreed with GAO's characterization of how Ex-Im has addressed the issue of domestic content. Treasury provided a statement concerning its full support of engaging emerging market economy countries on export credit issues, but did not state whether it agreed or disagreed with our recommendation.

With respect to its domestic content policy, Ex-Im stated that the policy should be considered in light of Ex-Im's specific mandate to focus on jobs. Our report emphasizes that Ex-Im's explicit mandate to support U.S. jobs is unique among G-7 ECAs. In addition, the report states that Ex-Im's policy is the result of an attempt to balance the interests of multiple stakeholders with its mission to support U.S. jobs through export financing. Ex-Im also stated that its co-financing and local cost policies are important in evaluating the competitiveness of its domestic content policies. In discussing foreign content policy, our report acknowledges the role of co-financing as a tool for some exporters, and explicitly notes that Ex-Im provided more than $6.5 billion for co-financing in 2010. With respect to local cost, we agree that the treatment of local cost financing is relevant to the discussion of foreign content, and we have added related information to the report. Ex-Im stated it disagrees with the report's characterization of how Ex-Im has addressed the issue of content, stating that it has regularly reviewed the policy as part of its annual competitiveness report, and has made changes.
We do not believe that Ex-Im's competitiveness reports constitute the systematic review of the content policy recommended in our report, and we maintain that a more comprehensive review, including of the policy's impact on U.S. jobs, is needed. Ex-Im's competitiveness reports have consistently identified its content policy as a major competitive barrier, with Ex-Im stating in its latest report, published in June 2011, that "Ex-Im Bank's content requirements and implementation of those requirements are significantly more restrictive than those of its G-7 counterparts" and that "in cases where foreign content exceeds 15 percent Ex-Im Bank's policy and practice can have a negative impact on U.S. competitiveness because it may deter exporters from using Ex-Im's products." Ex-Im reported that its exporters and lenders identified foreign content as their "most significant impediment to competitiveness." In terms of changes made to Ex-Im's policy over time, our report states that Ex-Im last changed its level of minimum domestic content required for receiving full financing for medium- and long-term transactions (85 percent) in 1987 and in 2001 changed its method for calculating the percentage of domestic content in a transaction. The report also clearly lays out Ex-Im's content policy for short-term financing, including specific provisions for small businesses. We have clarified summary language regarding what aspects of the policy have not changed since 1987. Ex-Im also provided technical comments, which we incorporated as appropriate.

Treasury provided the following response: "Treasury fully supports and encourages emerging market economy countries with major medium/long-term export credit programs to join in discussions and agreements on export credit support, and is actively engaged in that endeavor." We describe in the report that member countries, including the United States, have taken some steps within the OECD and beyond it to engage emerging market economy countries on export credit issues, and that the issue of export credits was raised at the U.S.-China Strategic and Economic Dialogue, a high-level forum between U.S. and Chinese government officials, including the Treasury Secretary. However, we believe it is important that Treasury take further steps to encourage and increase engagement of these countries on export credit issues. We slightly modified the wording of the recommendation to reflect this.

We will send copies of this report to the appropriate congressional committees as well as the Chairman of the Export-Import Bank and the Secretaries of State and Treasury. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4347 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

The objectives of this report were to examine (1) Ex-Im's mission, organization, market orientation, and product offerings compared with those of other Group of Seven (G-7) export credit agencies (ECAs), (2) Ex-Im's policy requirements compared with those of other G-7 ECAs, (3) Ex-Im's domestic content policy compared with those of the other G-7 ECAs, and (4) the role of the Organisation for Economic Cooperation and Development (OECD) Arrangement in governing ECA activities.
To assess how Ex-Im's mission, organization, market orientation, and product offerings compared with those of other G-7 ECAs, we reviewed relevant documents, including ECA annual reports and other publications, such as Ex-Im's annual Competitiveness Reports, OECD reports, and legislation authorizing various ECAs. We also reviewed ECAs' websites for additional information regarding product offerings. We interviewed officials from each of the G-7 ECAs and government organizations that have oversight over the ECAs. These organizations included Ex-Im in the United States; Export Development Canada (EDC), the Department of Foreign Affairs and International Trade, and the Department of Finance in Canada; Coface and the Ministry of Economy, Finance and Industry in France; Euler Hermes, PricewaterhouseCoopers, and the Interministerial Council, represented by the Federal Ministry of Economics and Technology, in Germany; Servizi Assicurativi del Commercio Estero (SACE) and the Ministry of Economic Development in Italy; Japan Bank for International Cooperation (JBIC) and Nippon Export and Investment Insurance (NEXI) (via telephone) in Japan; and the Export Credits Guarantee Department (ECGD) in the United Kingdom. We also interviewed officials from the U.S. Departments of Treasury and State, as well as the OECD and the Berne Union. In addition, we spoke with several institutions that work in conjunction with official ECAs: Societa Italiana per le Imprese all'Estero (SIMEST) in Italy and KfW IPEX-Bank in Germany.

To assess how Ex-Im's policy requirements compared with those of the other G-7 ECAs, we first examined Ex-Im's policy requirements by reviewing Ex-Im annual reports; Ex-Im Competitiveness Reports; Ex-Im's 2010-2015 Strategic Plan; GAO reports on Ex-Im's small business mandate, environmentally beneficial mandate, and economic impact analysis requirement; Congressional Research Service (CRS) reports; testimony from congressional hearings; and academic articles. We also interviewed Ex-Im officials to discuss the various policy requirements, and we interviewed officials from the Small Business Administration (SBA) to discuss Ex-Im's small business mandate. To examine the other G-7 ECAs' policy requirements and how they compared with those of Ex-Im, we interviewed officials from the G-7 ECAs, as well as any government organizations that play an oversight role for these ECAs. We asked them directly whether they shared any of Ex-Im's policy requirements and, more generally, whether they had other policy requirements, such as requirements to focus on promoting certain types of exports, export destinations, or exporters, and whether these resulted from external directives or internal decisions. We also asked about the nature of their relationships with oversight organizations and the extent to which they received external policy guidance from these organizations or their legislatures. In cases where ECA officials told us that there was legislation that authorized or otherwise governed their activities, and English versions were available, we reviewed this legislation. We also reviewed ECA annual reports. We sent follow-up questions to all of the ECAs to confirm the information they had given us during interviews regarding their policy requirements. We also provided each ECA the opportunity to provide technical comments on the portions of the report that contain information pertaining to it.
To examine how Ex-Im's domestic content policy compares with those of other G-7 ECAs, we first collected information on Ex-Im's policy from its competitiveness reports and website. We reviewed testimony transcripts from congressional hearings and literature on the domestic content of exports and global manufacturing production patterns. We also interviewed Ex-Im officials responsible for administering the policy, as well as officials at the Treasury Department and the Coalition for Employment through Exports, an advocacy organization on matters affecting U.S. government export finance. To obtain information on other G-7 ECAs' domestic content policies, we reviewed their annual reports. We interviewed G-7 ECA officials, who explained their policies and provided additional documentation. We analyzed global manufacturing production trends using OECD data on the foreign content of the United States' and other countries' exports by sector, from the mid-1990s to the mid-2000s. To assess the reliability of the OECD data, we reviewed the data documentation, tested for internal consistency of the data, and compared the trends with other sources. We found that the data were sufficiently reliable for the purposes of presenting global manufacturing trends and demonstrating variances across countries and sectors. We collected data from Ex-Im on the percentage of foreign content in the exports it finances. Ex-Im reports these data annually in its Competitiveness Report. We found that the data were sufficiently reliable for the purposes of presenting the amount of foreign versus domestic content in the exports Ex-Im finances.

To analyze the role of the OECD Arrangement in governing ECA activities, we reviewed the text of the OECD Arrangement as well as a variety of OECD and other reports on the Arrangement, the Export Credit Group, and export credit activities. We interviewed OECD officials, as well as G-7 ECA officials, to discuss the history and evolving role of the Arrangement as well as current challenges. We conducted a literature search and reviewed academic literature on the Arrangement and ECAs. To obtain information on China's export credit activities, we met with U.S. Treasury and State Department officials in Washington and Beijing and interviewed experts, including academic experts at American University and the Brookings Institution. We also discussed China's activities with G-7 ECA and other officials. To obtain data on China's, India's, and Brazil's ECA activities, we reviewed information from the International Monetary Fund, ECA annual reports, and Ex-Im's 2010 Competitiveness Report. We also used data published in the annual reports of China's and India's Ex-Im Banks to compare the growth of export financing from China and India with that of the U.S. Ex-Im Bank (a simple illustration of such a growth comparison follows at the end of this section). We found the data were sufficiently reliable for the purpose of comparing levels of growth.

We conducted this performance audit from February 2011 to February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
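As a rough illustration of the kind of growth comparison described above, the following sketch computes compound annual growth rates from two reported financing volumes. The figures are hypothetical placeholders, not values drawn from the ECAs' annual reports.

```python
# Sketch of comparing export financing growth across ECAs. All volumes
# below are hypothetical; a real comparison would use figures from the
# ECAs' annual reports.

def compound_annual_growth(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two reported volumes."""
    return (end / start) ** (1 / years) - 1

# Hypothetical export financing volumes ($ billions) at the start and
# end of a 5-year window.
volumes = {
    "China Ex-Im": (20.0, 60.0),
    "India Ex-Im": (3.0, 7.0),
    "U.S. Ex-Im": (10.0, 13.0),
}
for eca, (start, end) in volumes.items():
    rate = compound_annual_growth(start, end, years=5)
    print(f"{eca}: {rate:.1%} per year")
```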
In addition to the person named above, José Alfredo Gómez (Acting Director), Celia Thomas (Assistant Director), Jennifer Young, Vaughn Baltzly, Ming Chen, Laura Erion, and Arthur Lord made key contributions to this report. Also contributing to this report were Lynn Cothern, David Dornisch, and Ernie Jackson.
The U.S. Export-Import Bank (Ex-Im), the United States' official export credit agency (ECA), helps U.S. firms export goods and services by providing a range of financial products. Ex-Im, whose primary mission is to support jobs through exports, has a range of policy requirements, including support of small business. The Organisation for Economic Cooperation and Development (OECD) Arrangement governs aspects of U.S. and some foreign countries' ECAs. GAO examined (1) Ex-Im's mission and organization compared with ECAs from other Group of Seven (G-7) countries (major industrialized countries that consult on economic issues), (2) Ex-Im's policy requirements compared with other G-7 ECAs, (3) Ex-Im's domestic content policy compared with other G-7 ECAs, and (4) the OECD Arrangement's role in governing ECA activities.

The United States and other G-7 countries have ECAs that support domestic exports, but Ex-Im differs from other ECAs in several important ways, including its explicit mission to promote domestic employment. The G-7 ECAs range from government agencies to private companies contracted by governments. Most of these ECAs, including Ex-Im, are expected to supplement, not compete with, the private market. Unlike the European ECAs, Ex-Im offers direct loans, which were increasingly utilized during the recent financial crisis.

Ex-Im has specific mandates in areas where other G-7 ECAs have broad directives. For example, Ex-Im has specific mandates to support small business and environmentally beneficial exports, while other ECAs are broadly directed to support such exports. In addition, Ex-Im has other mandates and legal requirements, such as shipping certain exports on U.S.-flagged carriers and conducting economic impact assessments for large transactions, which other G-7 ECAs do not.

Ex-Im's requirements for the level of domestic content in the exports it fully finances are higher and generally less flexible than those of other G-7 ECAs. Ex-Im requires 85 percent domestic content for medium- and long-term transactions to receive full financing, while other ECAs' domestic content requirements generally range between zero and 51 percent. Ex-Im's policy on supporting local costs can result in more foreign content support in some transactions. While Ex-Im has modified its method for calculating domestic content, its threshold for receiving full financing for medium- and long-term transactions has not changed since 1987, and the policy and its overall impact on jobs have not been studied systematically. Other ECAs have modified their policies in recent years, citing increasing global content of industrial production. In its charter, Ex-Im is directed to provide financing competitive with that of other ECAs, as well as to support U.S. jobs.

The OECD Arrangement has expanded to regulate additional aspects of officially supported export credits, but increasing activity of nonmembers threatens its ability to provide a level playing field for exporters. Several agreements have been made that decrease subsidies and increase transparency among ECAs. However, these agreements apply only to participant ECAs, and important emerging countries, including China, are not part of the Arrangement. Officials from several G-7 ECAs and other institutions identified effective engagement with these countries on export credit issues as being increasingly important and presenting challenges for the OECD Arrangement and its participants.
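The content thresholds described above can be made concrete with a brief sketch. This is an illustration only: the 85 percent Ex-Im threshold and the zero-to-51-percent range for other G-7 ECAs come from this report, while the transaction shares are hypothetical, and actual ECA financing rules involve provisions (such as local cost and co-financing treatment) not modeled here.

```python
# Illustrative check of whether an export meets an ECA's minimum
# domestic content share for full financing. Thresholds reflect figures
# cited in this report; the 0.80 domestic share below is hypothetical.

ECA_FULL_FINANCING_THRESHOLDS = {
    "Ex-Im (U.S., medium- and long-term)": 0.85,
    "Hypothetical G-7 ECA (upper end of 0-51 percent range)": 0.51,
}

def qualifies_for_full_financing(domestic_share: float, threshold: float) -> bool:
    """True if the export's domestic content share meets the ECA minimum."""
    return domestic_share >= threshold

domestic_share = 0.80  # hypothetical: 80 percent U.S. content, 20 percent foreign
for eca, threshold in ECA_FULL_FINANCING_THRESHOLDS.items():
    print(f"{eca}: full financing = {qualifies_for_full_financing(domestic_share, threshold)}")
# Ex-Im: False (foreign content exceeds 15 percent)
# Hypothetical G-7 ECA: True
```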
GAO recommends (1) that Ex-Im conduct a systematic review to assess how well its domestic content policy continues to support Ex-Im's mission, and (2) that the Department of the Treasury, with Ex-Im and international counterparts, develop strategies for further engagement on export credit issues with emerging economy countries. Ex-Im stated that it considers content policy in its annual competitiveness assessments but did not comment directly on the recommendation. Treasury stated that it supports encouraging emerging market economies' participation in discussions concerning export credit issues and is engaged in that activity, but it did not state whether it agreed with the recommendation.
SSA administers the Old Age, Survivors, and Disability Insurance programs under Title II of the Social Security Act. About 96 percent of the nation's work force is in social security-covered employment and pays tax on its annual earnings. When workers pay social security taxes, they earn coverage credits, and 40 credits—equal to at least 10 years of work—entitle them to social security benefits when they reach retirement age.

In 1977, the Congress authorized the President to enter into totalization agreements with other countries. These bilateral agreements are intended to accomplish three purposes. First, they eliminate the dual social security coverage and taxes that multinational employers and employees encounter when a corporation operates, and its workers temporarily reside and work (usually for no more than 5 years), in a foreign country with its own social security program. Under the agreements, U.S. employers and their workers sent temporarily abroad would benefit by paying only U.S. social security taxes, and foreign businesses and their workers would benefit by paying only social security taxes to their home country. Second, the agreements provide benefit protection to workers who have divided their careers between the United States and a foreign country but lack enough coverage under either social security system to qualify for benefits, despite paying taxes into both systems. Totalization agreements allow such workers to combine (totalize) work credits earned in both countries to meet minimum benefit qualification requirements. Third, most totalization agreements improve the portability of social security benefits by removing rules that suspend benefits to noncitizens who live outside the benefit-paying country. By law, proposed agreements are sent to the Congress, which has 60 legislative days to review them. The agreements become effective unless either House of the Congress adopts a resolution of disapproval. Table 1 shows agreements in effect and the years they became effective.

To qualify for totalized U.S. social security benefits, a worker must have at least 6 but no more than 39 U.S. coverage credits. Benefit amounts are based on the portion of time a foreign citizen worked in the United States and thus are almost always lower than full social security benefits. The average monthly totalized social security benefit at the end of 2001 was $162, compared with the average nontotalized monthly social security benefit of $825. In 2001, SSA paid about $173 million under totalization agreements to about 89,000 persons, including their dependents. (Appendix I compares the amount of U.S. totalized benefits for different coverage credits and earnings levels with a minimum benefit that would be paid to a worker with 40 credits.)

Under U.S. law, immigrants may not work in the United States unless specifically authorized. Nevertheless, immigrants often do work without authorization and pay social security taxes. Under the Social Security Act, all earnings from covered employment in the United States count towards earning social security benefits, regardless of the lawful presence of the worker, his or her citizenship status, or country of residence. Immigrants become entitled to benefits from unauthorized work if they can prove that the earnings and related contributions belong to them. However, they cannot collect such benefits unless they are either legally present in the United States or living in a country where SSA is authorized to pay them their benefits.
Mexico is such a country.

A lack of transparency in SSA's processes, and the limited nature of its review of Mexico's program, cause us to question the extent to which SSA will be positioned to respond to potential program risks should a totalization agreement with Mexico take place. SSA officials told us that the process used to develop the proposed totalization agreement with Mexico was the same as for prior agreements with other countries. The process—which is not specified by law or outlined in written policies and procedures—is informal, and the steps SSA takes when entering into agreements are neither transparent nor well-documented.

Current law does not prescribe how SSA should select potential agreement countries. According to SSA, interest in a Mexican agreement dates back more than 20 years. SSA officials noted that increased business interaction between the two countries due to the North American Free Trade Agreement (NAFTA) was a factor in the renewed negotiations. In addition, because there is a totalization agreement with Canada, our other NAFTA partner, SSA believed that equity concerns required consideration of an agreement with Mexico. In February 2002, SSA sought clearance from the Department of State to begin such negotiations.

The law also does not specify which elements of other countries' social security systems must be evaluated during totalization agreement negotiations. SSA officials met with Mexican officials to exchange narrative information on their respective programs. Senior SSA officials also visited Mexico for 2 days in August 2002. These officials told us that, during their visit, they toured social security facilities, observed how Mexico's automated social security systems functioned, and identified the type of data maintained on Mexican workers. SSA took no technical staff on this visit to assess system controls or data integrity processes. In effect, SSA only briefly observed the operations of the Mexican social security program. Moreover, SSA did not document its efforts or perform any additional analyses then, or at a later time, to assess the integrity of Mexico's social security data and the controls over those data. In particular, SSA officials provided no evidence that they examined key elements of Mexico's program, such as its controls over the posting of earnings and its processes for obtaining key birth and death information for Mexican citizens. Nor did SSA evaluate how access to Mexican data and records is controlled and monitored to prevent unauthorized use or whether internal and external audit functions exist to evaluate operations.

Because all totalization agreements represent a financial commitment with implications for social security tax revenues and benefit outlays, a reasonable level of due diligence and analysis is necessary to help federal managers identify issues that could affect benefit payment accuracy or expose the nation's system to undue risk. Our Internal Control Management and Evaluation Tool provides a risk assessment framework to help federal managers mitigate fraud, waste, abuse, and mismanagement in public programs, such as social security. A key component of this framework is the identification of internal and external risks that could impede the achievement of objectives at both the entity and program levels. Identified risks should then be analyzed for their potential effect and an approach devised to mitigate them.
SSA did not conduct these types of analyses in previous agreements or in the case of the proposed Mexican agreement, despite documented concerns among Mexican government officials and others regarding the integrity of Mexico's records, such as those for birth, death, and marriage, as well as its controls over assigning unique identification numbers to workers for benefit purposes. Such information will likely play a role in SSA's ability to accurately determine Mexican workers' initial and continuing eligibility for benefits under a totalization agreement.

A totalization agreement with Mexico will increase the number of Mexican citizens who will be paid U.S. social security benefits in two ways. First, the agreement will make it easier for Mexican workers to qualify for benefits. Second, it will remove some nonpayment restrictions that affect benefit payments to non-U.S. citizens' family members residing in another country, thus providing U.S. social security benefits to more survivors and dependents of entitled Mexican workers.

Under current law, a worker must earn sufficient coverage credits to qualify for benefits under the U.S. Social Security program. For example, a worker who was born in 1929 or later generally needs 40 coverage credits to be insured for retirement benefits. Credits are based on a worker's annual earnings in social security-covered employment. At most, 4 credits can be earned per year, so it takes at least 10 years of covered earnings in the United States for a worker to accumulate the necessary 40 credits and become insured for retirement benefits. Currently, social security credits are earned by anyone who has worked in covered employment in the United States. This is true even if the person was unauthorized to work when he or she earned coverage credits. For example, noncitizens, including Mexicans, who are at least 62 years old and lawfully present in the United States will receive retirement benefits today as long as they meet the coverage credit threshold. Even Mexican citizens who are not lawfully present in this country can receive social security benefits earned through unauthorized employment if they later return to live in Mexico. Similarly, under current law, noncitizen dependents and survivors can also receive social security benefits under some circumstances.

Totalization agreements generally expand benefits to both authorized and unauthorized workers and create new groups of beneficiaries. This would be the case for a totalization agreement with Mexico if it follows the same pattern as all prior totalization agreements. Mexican citizens with fewer than 40 coverage credits will be permitted to combine their annual earnings under their home country's social security program with their annual earnings under the U.S. Social Security program to meet the 40-credit requirement. In addition, more family members of covered workers will qualify for dependent and survivor benefits. Totalization agreements generally override Social Security Act provisions that prohibit benefit payments to noncitizens' dependents and survivors who reside outside the United States for more than 6 months, unless they can prove that they lived in the United States for 5 years in a close family relationship with the covered worker. If a totalization agreement with Mexico is structured like others already in force, the 5-year rule for dependents and survivors will be waived.
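The qualification rules described above lend themselves to a short illustration. The sketch below treats totalization as combining credits directly, which is a simplification (in practice, annual earnings records are combined), and the worker profile is hypothetical.

```python
# Illustrative sketch of the coverage-credit rules described in this
# report: 40 credits for regular insured status, at most 4 credits per
# year, and a 6-to-39-credit window for totalized benefits. Combining
# credits directly is a simplification of how totalization works.

US_INSURED_THRESHOLD = 40
MAX_CREDITS_PER_YEAR = 4

def benefit_status(us_credits: int, foreign_credits: int) -> str:
    """Classify a hypothetical worker under the rules sketched above."""
    if us_credits >= US_INSURED_THRESHOLD:
        return "regular U.S. benefits (no agreement needed)"
    if 6 <= us_credits <= 39 and us_credits + foreign_credits >= US_INSURED_THRESHOLD:
        return "totalized (partial) U.S. benefits under an agreement"
    return "no U.S. benefits"

# Hypothetical worker: 6 years of covered U.S. work, 5 years abroad.
us_credits = 6 * MAX_CREDITS_PER_YEAR       # 24 credits
foreign_credits = 5 * MAX_CREDITS_PER_YEAR  # 20 credits
print(benefit_status(us_credits, foreign_credits))
# -> totalized (partial) U.S. benefits under an agreement
```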
However, it is important to understand that not all unauthorized Mexican citizens who have worked in the United States will receive totalization benefits. Some will have earned at least 40 coverage credits and can receive social security benefits without a totalization agreement. Still others may have worked under false identities and may not be able to prove that they have the necessary coverage credits to be entitled to benefits. Others still may not accumulate sufficient credits under the Mexican social security system to totalize with their U.S. social security coverage.

The cost of a totalization agreement with Mexico is highly uncertain. In March 2003, the Office of the Chief Actuary estimated that the cost of the Mexican agreement would be $78 million in the first year and would grow to $650 million (in constant 2002 dollars) in 2050. SSA's actuarial cost estimate assumes the initial number of newly eligible Mexican beneficiaries is equivalent to the 50,000 beneficiaries living in Mexico today and would grow sixfold over time. However, this proxy figure is not directly related to the estimated millions of current and former unauthorized workers and their family members from Mexico and appears small in comparison to those estimates. Furthermore, even if the baseline estimate is used, a sensitivity analysis performed by OCACT shows that an increase of more than 25 percent—or 13,000 new beneficiaries—would produce a measurable impact on the long-range actuarial balance of the trust funds. Our review of cost estimates for prior totalization agreements shows that the actual number of beneficiaries has frequently been underestimated and has far exceeded the original actuarial estimates.

OCACT develops estimates of the expected costs of totalization agreements by analyzing pertinent data from prior agreements, work visas issued, foreign corporations operating in the United States, and U.S. Census data. Because of extensive unauthorized immigration from Mexico, OCACT concluded that U.S. Census data, which would typically be used to estimate the number of new beneficiaries under an agreement, were not reliable. Instead, OCACT used the number of fully insured beneficiaries—U.S. citizens and others living in Mexico—currently receiving U.S. social security benefits as a proxy for the number of Mexican citizens who would initially receive totalized benefits. The principal basis for this assumption was a 1997 study of Mexican immigration patterns conducted by a private nonprofit organization. This study indicated that the percentage of Mexican immigrants who returned to Mexico after more than 10 years and, therefore, could qualify for benefits is roughly equal to the percentage that returned after staying 2 to 9 years and would not have the required credits. Thus, OCACT assumed that the potential initial new totalized beneficiaries would be equivalent to the 50,000 persons currently receiving benefits in Mexico.

For the proposed Mexican agreement, both a short-term (covering the first 8 years of the agreement) and a long-term (covering 75 years) cost estimate were developed. The estimated cost to the Social Security Trust Funds would be about $78 million in the first year of the agreement. For the long-term cost estimate, OCACT projected that the number of beneficiaries would ultimately increase sixfold to 300,000 over a 45-year period after the agreement took effect, with costs reaching about $650 million (in constant 2002 dollars) in 2050.
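The arithmetic behind these figures can be restated in a few lines. All inputs below are taken from the estimates just described; the sketch simply reproduces the stated relationships.

```python
# Restating the actuarial figures discussed above (all from this report).

initial_beneficiaries = 50_000                      # proxy: beneficiaries in Mexico today
longrun_beneficiaries = 6 * initial_beneficiaries   # sixfold growth by 2050
assert longrun_beneficiaries == 300_000

first_year_cost = 78e6   # dollars, first year of the agreement
cost_2050 = 650e6        # constant 2002 dollars
print(f"Cost grows from ${first_year_cost/1e6:.0f} million to "
      f"${cost_2050/1e6:.0f} million (2002 dollars) by 2050")

# Sensitivity: a measurable impact on the long-range actuarial balance
# occurs if the baseline is understated by more than 25 percent, which
# the report rounds to 13,000 additional beneficiaries.
threshold_increase = 0.25 * initial_beneficiaries
print(threshold_increase)  # 12500.0, reported as roughly 13,000
```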
However, the actuarial analysis notes that the methodology was indirect and involved considerable uncertainty. As a rough check on the reasonableness of using current beneficiaries in Mexico for its cost estimate, OCACT analyzed totalized beneficiary data for Canadian citizens because Canada, like Mexico, is a NAFTA trading partner and shares a large contiguous border. After determining the ratio of Canadians receiving totalized versus fully insured benefits, OCACT applied this ratio to the number of Mexican-born U.S. social security beneficiaries and found that about 37,000 beneficiaries would be expected under the agreement initially, if the Canadian experience proves predictive of the Mexican outcome. According to OCACT, this comparison increased its confidence that the assumed 50,000 new beneficiaries under the agreement was within a reasonable range.

Estimated Cost of Mexican Agreement Is Highly Uncertain

Limited data about unauthorized workers make any estimate of the expected costs of a Mexican totalization agreement highly uncertain. A significant variable of any totalization agreement cost estimate is the identification of the number of potential beneficiaries. Estimates of the number of unauthorized Mexican immigrants living in the United States vary. The federal government's estimate, published in January 2003, comes from the former Immigration and Naturalization Service (INS). INS estimated that, as of January 2000, about 5 million, or 69 percent, of all unauthorized immigrants in the United States were from Mexico. INS's estimate also indicated that this figure was expected to increase by about 240,000 persons annually. The INS estimate, however, does not include unauthorized Mexican workers and family members who no longer live in the United States and could also conceivably benefit from a totalization agreement.

Economic disparity between the United States and Mexico has fostered longstanding immigration from Mexico to the United States dating back many decades. Various studies also show that fewer than a third of Mexican immigrants stay more than 10 years in the United States, the minimum amount of time needed to qualify for social security retirement benefits. For cost analysis purposes, little is known about the population of former immigrants who have returned to Mexico in terms of their age, work history, dependents, and social security coverage. These factors increase the inherent uncertainty of any long-range forecasts with regard to Mexico. It was against this backdrop that OCACT set about developing an estimate of the costs of the potential totalization agreement.

We have several concerns about OCACT's estimate of the number of expected beneficiaries and the cost of an agreement with Mexico. First, the use of the 50,000 fully insured beneficiaries receiving benefits in Mexico as a proxy for individuals who might initially benefit from an agreement does not directly consider the estimated millions of unauthorized Mexican immigrants in the United States and Mexico who are not fully insured and might receive totalized benefits. Furthermore, despite the availability of key data about earnings, work histories, years of employment, and dependents for the 50,000 fully insured beneficiaries, OCACT did not analyze this population to determine whether they represented a good proxy for individuals likely to qualify for totalized benefits.
The cost estimate also inherently assumes that the behavior of Mexican citizens would not change after a totalization agreement goes into effect. Under totalization, unauthorized workers would have an additional incentive to enter the United States to work and to maintain the documentation necessary to claim their earnings under a false identity. Thus, a large number of Mexican citizens have likely earned social security coverage credits, through both authorized and unauthorized work, that count toward the 40-credit threshold requirement, and these workers are not directly accounted for in SSA's estimate.

Second, SSA's reasonableness check using Canadian data faces similar questions. While Mexico and Canada are NAFTA partners and share a common border with the United States, there is a dramatic difference in the extent of unauthorized immigration from these two countries and, in our view, the Canadian experience is not a good predictor of experience under an agreement with Mexico. Recent INS data show that Mexican citizens account for about 69 percent of unauthorized U.S. immigrants, whereas Canadian citizens account for less than 1 percent, and all other totalization agreement countries combined account for less than 3 percent. It is this population of unauthorized immigrants that makes estimating the cost of a totalization agreement with Mexico particularly problematic.

Finally, even though SSA's actuarial analysis increases the number of beneficiaries sixfold over time, the expected 300,000 beneficiaries in 2050 represent only about 6 percent of the estimated number of unauthorized Mexicans in the United States today, and thus appear relatively low. Although it would be unreasonable to expect all unauthorized Mexicans in the United States to qualify for totalized benefits, the very large difference between estimated and potential beneficiaries underscores the uncertainty of the estimate, and the potential costs of an agreement could be higher than OCACT projects. Indeed, it would take only a relatively small increase in new beneficiaries above the original actuarial assumption of 50,000 initial new beneficiaries to have a measurable impact on the long-range actuarial balance of the trust funds. OCACT has estimated that the agreement would not generate a measurable impact on the long-range actuarial balance. However, a subsequent sensitivity analysis performed at our request shows that a measurable impact on the long-range actuarial balance of the trust funds will occur if the baseline figure is underestimated by more than 25 percent—just 13,000 additional beneficiaries above the estimated 50,000 new beneficiaries.

Our analysis of past actuarial estimates of expected beneficiaries under totalization agreements shows that exceeding the 25 percent threshold has not been unusual, even in agreements where uncertainty about the number of unauthorized workers is substantially less. Our review of prior estimates shows that OCACT frequently either overestimated or underestimated the number of expected beneficiaries, usually by more than 25 percent (see table 2). In fact, where underestimates occurred, the differences were huge, involving several orders of magnitude. However, it is important to note that the number of estimated beneficiaries for prior agreements is substantially smaller than for the proposed Mexican agreement. Therefore, differences between actual and estimated beneficiaries have a higher proportional impact.
Furthermore, OCACT has not underestimated the number of expected beneficiaries for the agreements we analyzed since the 1991 agreement with Austria. Nevertheless, the numerous uncertainties and data gaps associated with the Mexican agreement elevate the risks associated with any cost estimate.

Totalization agreements between the United States and other countries often foster enhanced diplomatic relations and provide mutually beneficial business, tax, and other incentives to employers and employees affected by these agreements. At the same time, the agreements impose a financial cost on both countries' social security programs. SSA's processes for entering into these agreements have been informal and have not included specific steps to assess and mitigate potential risks. Regardless of the country under consideration, sound management practices dictate that SSA managers have a risk management process in place to ensure that the interests of the United States and the Social Security Trust Funds are protected.

Most totalization agreements have been with countries that are geographically distant from the United States, have developed economies, and represent only a fraction of the estimated unauthorized immigrants in the United States. Still, all agreements include some level of uncertainty and require due diligence on SSA's part to alleviate those uncertainties. An agreement with Mexico, however, presents unique and difficult challenges for SSA because so little is known about the size, work history, earnings, and dependents of the unauthorized Mexican population. Furthermore, a common border and economic disparity between the United States and Mexico have fostered significant and longstanding unauthorized immigration into the United States, making an agreement with Mexico potentially far more costly than any other. Thus, for the Mexican agreement, additional analyses to assess risks and costs may be called for.

A revised approach for entering into totalization agreements with all countries would enhance the quality of information provided to the Congress, which is tasked with reviewing these vital long-term commitments. A more thorough prospective analysis will also provide a better basis for determining whether agreements under consideration meet the mutual economic and business needs of all parties. Finally, current solvency issues require the Congress to think carefully about future trust fund commitments resulting from totalization agreements. Having more timely and complete information on the benefits, costs, and risks associated with each agreement can only serve to better inform its decisions.

In light of the potential impact of totalization agreements on the Social Security Trust Funds, we recommend that the Commissioner of Social Security (1) establish a formal process to identify and assess the major risks associated with entering into agreements with other countries, including mechanisms to assess the integrity of a country's retirement data and records, as well as a means for documenting the range of analyses conducted by SSA; and (2) enhance future reports to the Congress for proposed totalization agreements with other countries by making them more consistent and informative.
Such reports should include consistent time periods for estimating both the short- and long-term effects on the trust funds and, as appropriate, data on how alternative assumptions or sensitivity analyses could affect costs and potential beneficiaries. We also recommend that the Commissioner (3) work with the Office of the Chief Actuary to establish a regular process that examines originally projected costs and beneficiaries versus what actually transpired over time, and use this information, as appropriate, to adjust future estimating methods for totalization agreements.

We obtained written comments on a draft of this report from the Commissioner of SSA, as well as OCACT. The full texts of these comments are reproduced in appendix II. We made limited changes to the report as appropriate. The State Department was also provided a copy of the draft report for review and advised us that it had no comments.

SSA said that the report did not sufficiently discuss the benefits of totalization agreements to U.S. workers and employers and disagreed with our recommendation that the agency establish a formal process to identify and assess the major risks associated with entering into agreements with other countries. The agency noted that its current informal process for evaluating whether to enter into negotiations for totalization agreements was sufficient to identify and assess risks. Regarding the potential benefits of totalization agreements, our report specifically notes that such agreements foster international commerce, protect benefits for persons who have worked in foreign countries, and eliminate dual social security taxes for multinational employers and employees. Our concluding remarks also note that totalization agreements often foster enhanced diplomatic relations between participating countries. However, these agreements also have costs to the U.S. social security system, and we continue to believe that SSA should take steps to assess and mitigate risk during the negotiation process rather than after an agreement is signed.

SSA also noted that it has specific criteria it follows when deciding whether to enter into totalization agreements with other countries and that the agency received detailed information on Mexico's social security system during its 2-day visit to Mexico City. In reviewing SSA's criteria, we could find no specific reference to data reliability and program integrity as factors in negotiations. Further, our review of the activities surrounding SSA's visit to Mexico and the limited documentation SSA received from Mexican social security officials shows that data integrity issues and systems controls were not sufficiently examined. In its comments, SSA notes that it is currently in the process of scheduling additional visits to Mexican facilities outside of Mexico City and will utilize SSA technical staff to further examine Mexico's social security system. We are hopeful that—prior to submitting a proposed agreement with Mexico—SSA will take additional steps to assess the key data it will rely on to determine Mexican workers' initial and continuing eligibility for U.S. totalized benefits and that it will sufficiently document its efforts. Enhancing its due diligence efforts and formalizing this process to include all future totalization agreements would further improve SSA's risk assessment efforts.
OCACT generally agreed with our recommendations that cost estimates for future totalization agreements should be more consistent and informative and that such agreements should be regularly analyzed to examine the differences between original projections and actual experience as an aid to making better estimates. OCACT noted that, consistent with the U.S./Mexican totalization agreement, all future potential agreements would include both long-range (75-year) and short-range (10-year) cost projections. OCACT also noted that regularly examining the differences between original projections and actual experience for future totalization agreements made sense and was consistent with current practice. Although we could find no evidence during our review that such analyses had occurred on a systematic basis, we are pleased to hear that such analyses are now being done and are hopeful that OCACT will both complete them in the future and document and make available the results.

Both SSA and OCACT disagreed with our analysis and conclusions regarding the estimates of the potential cost of a totalization agreement with Mexico, as well as our statement that any difference between estimated and actual costs will be on the high side. OCACT noted that, given the relative uncertainty of the data, this outcome is possible, but that our statement inaccurately implied that there was evidence that OCACT estimates are more likely to be understated than overstated. OCACT went on to note that a number of factors suggest that OCACT's estimate of 50,000 new beneficiaries, which will increase sixfold to 300,000 by 2050, could indeed be too high. Our intent was not to imply that OCACT's estimate was biased. Thus, we have revised our report to state that, given the large disparity between the estimated beneficiaries and the large number of undocumented Mexican workers, the potential cost of an agreement could be higher than OCACT projects.

However, we continue to believe that a totalization agreement with Mexico is both qualitatively and quantitatively different from any other agreement signed to date, especially regarding estimating the potential impact of millions of unauthorized workers and their families. Thus, in assessing the risks of a totalization agreement with Mexico, we believe it is important to discuss the potentially significant impact that any underestimate of beneficiaries could have on the Social Security Trust Funds. As table 2 shows, error rates associated with SSA's estimates of potential beneficiaries under prior agreements have often been substantial, even in cases where uncertainties about the number of unauthorized workers were less prevalent. OCACT's comment that "taken as a whole" its estimate of initial beneficiaries differs from actual initial beneficiaries by only 3 percent is misleading because it nets overestimates against underestimates. OCACT prepares estimates of initial beneficiaries for each proposed agreement with an individual country. Thus, any comparison of estimated to actual initial beneficiaries should be made on a country-by-country basis, rather than by aggregating the error rates for all agreements.

Finally, in response to our concern that OCACT's original baseline estimate of 50,000 first-year totalization beneficiaries did not directly consider millions of current and former unauthorized Mexican workers, OCACT said that this estimate was based on the best available data.
OCACT's comments also included excerpted text from the original estimate in order to illustrate the analyses and assumptions that supported using the 50,000 individuals already receiving Old-Age, Survivors, and Disability Insurance benefits in Mexico as a proxy for potential totalization beneficiaries. We acknowledge the data limitations facing OCACT, as well as its good faith effort to reasonably estimate the costs of a totalization agreement with Mexico. However, based on our audit work—which involved a thorough review of the full text of the actuarial estimate, numerous in-depth interviews with OCACT officials to discuss issues of concern, and regular consultation with our own Chief Actuary—it seems reasonable to examine all sources of data and address the estimates of unauthorized Mexican immigrants directly to provide a more complete picture of possible outcomes from an agreement with Mexico. We continue to believe that, given the magnitude of the proposed Mexican agreement relative to other totalization agreements, it is not unreasonable to expect that OCACT should develop and use a variety of approaches to estimate potential costs and perhaps develop a range of cost estimates based on those data sources and alternative assumptions. Such efforts would better serve the information needs of the Congress in the event that an agreement is ultimately submitted for its review.

We are sending copies of this report to the House and Senate committees with oversight responsibilities for the Social Security Administration. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your offices have any questions concerning this report, please call me or Daniel Bertoni, Assistant Director, at (202) 512-7215. Other major contributors to this report are Patrick Dibattista, Gerard Grant, Daniel Schwimer, William Staab, and Paul Wright.
Totalization agreements foster international commerce, protect benefits for persons who have worked in foreign countries, and eliminate dual social security taxes that employers and their employees pay when they operate and reside in countries with parallel social security systems. Because Mexicans are believed to represent a large share of the millions of unauthorized workers present in the United States, a totalization agreement with Mexico has raised concerns that such workers would become newly eligible for social security benefits. To shed light on the possible impacts, GAO was asked to (1) describe the Social Security Administration's (SSA) processes for developing the agreement with Mexico, (2) explain how the agreement might affect the payment of benefits to Mexican citizens, and (3) assess the cost estimate for such an agreement.

SSA has no written policies or procedures it follows when entering into totalization agreements, and the actions it took to assess the integrity and compatibility of Mexico's social security system were limited and neither transparent nor well-documented. SSA followed the same procedures for the proposed Mexican agreement that it used in all prior agreements. SSA officials told GAO that they briefly toured Mexican facilities, observed how Mexico's automated systems functioned, and identified the type of data maintained on Mexican workers. However, SSA provided no information showing that it assessed the reliability of Mexican earnings data and the internal controls used to ensure the integrity of information that SSA will rely on to pay social security benefits.

The proposed agreement will likely increase the number of unauthorized Mexican workers and family members eligible for social security benefits. Mexican workers who ordinarily could not receive social security retirement benefits because they lack the required 40 coverage credits for U.S. earnings could qualify for partial social security benefits with as few as 6 coverage credits. In addition, under the proposed agreement, more family members of covered Mexican workers would become newly entitled because such agreements usually waive rules that prevent payments to noncitizens' dependents and survivors living outside the United States.

The cost of such an agreement is highly uncertain. In March 2003, the Office of the Chief Actuary estimated that the cost of the Mexican agreement would be $78 million in the first year and would grow to $650 million (in constant 2002 dollars) in 2050. The actuarial cost estimate assumes the initial number of newly eligible Mexican beneficiaries is equivalent to the 50,000 beneficiaries living in Mexico today and would grow sixfold over time. However, this proxy figure does not directly consider the estimated millions of current and former unauthorized workers and family members from Mexico and appears small in comparison with those estimates. The estimate also inherently assumes that the behavior of Mexican citizens would not change and does not recognize that an agreement would create an additional incentive for unauthorized workers to enter the United States to work and maintain documentation to claim their earnings under a false identity.
Although the actuarial estimate indicates that the agreement would not generate a measurable long-term impact on the actuarial balance of the trust funds, a subsequent sensitivity analysis performed at GAO's request shows that a measurable impact would occur with an increase of more than 25 percent in the estimate of initial new beneficiaries. For prior agreements, error rates associated with estimating the expected number of new beneficiaries have frequently exceeded 25 percent, even in cases where uncertainties about the number of unauthorized workers were less prevalent. Because of the significant number of unauthorized Mexican workers in the United States, the estimated cost of the proposed totalization agreement is even more uncertain than for prior agreements.
This section details the objectives of the project, the structure and interrelationships of the various teams and stakeholders, the time frames for undertaking the projects, and the measurements that are required to monitor and evaluate the projects. The project has three objectives: to implement reforms in the job management process that reduce the average cycle time to produce a "product" by 100 days; to quantify the expected staff time (and cost) savings resulting from the process improvements; and to identify additional opportunities that will dramatically reduce cycle time and the staff-days required.

To enable the core project team to maximize its chances of success, the following structure has been established. Central to the successful rollout of the identified initiatives is the Job Process Reengineering Team. This Team is responsible for coordinating all the various initiatives identified by the Job Management Process Owners (JMPO) and approved by the Quality Council. Section 3 provides more details on the roles and the responsibilities of the teams. Detailed below are high-level descriptions of their roles.

Executive Committee - Made up of the organization's most senior people, the committee will be responsible for supporting the Team and helping the Team overcome organizational barriers. This committee will meet biweekly, resolve any coordination matters, and make decisions necessary to keep the project moving toward the goals.

Job Process Reengineering Team - The Team will have primary accountability for implementing the process reforms identified by the JMPO and approved by the Quality Council. It is ultimately responsible for meeting the objectives for reductions in cycle time and staff-years. Central to this will be the integration of the initiatives. The Team will monitor and compare the pilots and the rollout with the targets. It will also be responsible for allocating resources among the project teams and for the day-to-day coordination of those teams. The members of the Job Process Reengineering Team will also participate in the project teams.

Process Champions - For any change initiatives to be successfully implemented in the divisions, they will require the full backing of the respective Assistant Comptrollers General (ACGs). Therefore, each initiative will be championed by an ACG. It is likely that initiatives in each key area (such as Job Design) will be grouped together and championed by one individual. This does not negate the role of the Planning and Reporting (P&R) Directors or others, who will still be heavily involved in integrating all the change initiatives.

Process Owners - The P&R Directors, as the JMPO, will continue to play an active role in the project. They will be advised on the development of the initiatives as they are taken from concept to pilot and ultimately to rollout. In addition, they will support the ACGs in championing the initiatives. They will also be responsible for following up on the pilots ongoing in their divisions and providing feedback on the success and the effectiveness of the pilots.

Project Teams - The project teams will be responsible for the successful piloting and implementation of the initiatives. They will support the Job Process Reengineering Team; their members will come from all areas of GAO, including the Job Process Reengineering Team, and thereby introduce diverse views and build ownership.
Each team will be responsible for a particular subprocess, such as Job Acceptance and Staffing, Job Design, or Data Collection and Analysis, which corresponds with the structure set up by the JMPO. Initially, the teams will design, implement, and integrate pilots and projects based on concepts developed by the JMPO and approved by the Quality Council.

The Team's objective is to ensure that all current initiatives have entered the full-scale rollout phase by January 1996. To do this, each initiative will be rolled out as soon as it has been successfully piloted. Recently, the JMPO agreed to the title and the principle and minimum essential requirements for the following initiatives: Job Acceptance and Staffing, Job Design, and Product Development and Division Review. The Confirmation Letter (Terms of Reference) and the Expedited Agency Comments initiatives are approved, and the pilots have begun. The Risk Assessment/Job Acceptance and Staffing team has begun to develop the concept details for the pilot. The charts below lay out the potential time frame for rollout of the projects. As the data collection and analysis subprocess was not in the original scope of the JMPO efforts, the full-scale rollout of any initiatives in this area will not occur until mid-1996. (Time frames are not final for the Benchmarking, Staffing, and Product Output initiatives.)

The teams must quantify the time that each project is expected to save. The savings will be quantified in terms of both calendar days and staff days saved through more efficient processes. Performance measures provide direction and help focus efforts on attainment of goals. In addition, they allow the organization to benchmark its performance against that of other organizations. Effective performance measures are simple to understand and use, are few in number, and are aligned with organizational strategies and focused on customer wants. The performance measurement system will support continuous improvement, monitor the critical steps in the process, help anticipate and prevent problems, and change as the organization's strategy changes. Once the initiatives have been rolled out, these measures/targets provide the basis against which further improvements may be evaluated. The diagram below illustrates the circle of continuous improvement.

This section details the proposed roles and responsibilities of the key players in the project. Where appropriate, the approach and the methodology that could be adopted by the teams are also discussed. It is not possible to lay out an implementation plan and methodology that would fit all situations. Much of the implementation plan will depend on what the "end state" process design looks like. The plan then will identify the steps necessary to bridge the gap and move from the "current state" to the desired "end state."

Made up of the organization's most senior people, the Executive Committee will be responsible for providing support to the Job Process Reengineering Team and helping it overcome organizational barriers. This committee will meet biweekly, resolve any coordination matters, and make decisions necessary to keep the project moving toward the goals. The culture must change. Having an Executive Committee to which the Job Process Reengineering Team reports sends a clear message GAO-wide as to the magnitude and the importance of the reforms required. It will lend credibility to the Team, foster participation by a wider cross section of the organization, and enable the Team to cut through organizational barriers.
In addition, it will allow for the rapid resolution of any conflicts, thereby enabling the initiatives to be implemented in the time frames established. The Executive Committee is composed of the Comptroller General, the Special Assistant to the Comptroller General, the ACG for Planning and Reporting, the ACG for Quality Management, the ACG for Information Management and Communications, the ACG for Policy, the ACG for Operations, and the Deputy ACG for Human Resources.

The Team has primary accountability for implementing the process reforms identified by the JMPO and approved by the Quality Council. It is ultimately responsible for meeting the objectives for reductions in cycle time and staff-years. Central to this will be integrating the initiatives. The Team will monitor and compare the pilots and the rollout with the targets. It will also be responsible for allocating resources among the project teams and the day-to-day coordination of those teams. The members of the Team will also participate in the project teams.

Any changes in the job management process will have implications for the training curriculum. Therefore, the Training Institute must be involved throughout the project, both on the Job Process Reengineering Team and on project teams, where appropriate. In addition, because the proposed changes will have policy implications, it will be necessary to get representatives from the Office of Policy involved early in the process and at all levels.

While the project teams will be responsible for individual subprocesses or initiatives, certain activities are not confined to project boundaries. The Job Process Reengineering Team should create the appropriate infrastructure necessary to support the project teams. This support may include providing skilled facilitators, advice on appropriate methodologies, automated tools, and assistance in planning and documenting the various efforts. In addition, the Team will be responsible for identifying the barriers to change and developing a change management plan; developing and executing a communication plan; executing an activity analysis to determine how people currently spend their time; developing the transition plan, including establishing performance measures; and identifying and prioritizing further opportunities for process reengineering.

The objective in developing the change management plan is to anticipate, avoid, and solve implementation problems that stem from people's feelings about change. To develop an effective change management plan, one must first clearly identify the sources of resistance. To do this, one may consider the following questions: Why have previous initiatives failed? What will be the employees' likely reactions? How do the performance measures/appraisal systems support or detract from the initiatives? What barriers to change exist within the current culture?

A well-planned communication strategy is required to inform the rest of the organization of the project's objectives. An effective communication plan will accomplish the communication goals of an organization by providing accurate, useful, and timely information. Specifically, an effective communication plan will exploit existing communication channels in the organization; convey accurate and useful information to targeted receivers; present information in a timely fashion relative to certain events; provide feedback channels; replace the rumors that can hinder acceptance of change; and foster commitment to the changes.
An activity analysis is a tool designed to identify how people spend their time. It will enable the project teams to determine whether the demands being placed on staff are reasonable and therefore whether the workloads expected as a result of any process reforms are acceptable. The analysis process can be a powerful and effective way to understand key processes and interfaces. It identifies key activities, attributes them to functions or business processes, and identifies specific activity drivers. The results of the activity analysis will also provide the basis for quantifying the potential time savings, as illustrated in the sketch at the end of this discussion.

The objective of the transition plan is to develop an “end state road map.” It is a translation of the high-level concept, supplied by the JMPO, into the operating requirements for the process. Michael Hammer (Reengineering the Corporation) uses the Business System Diamond to describe the changes necessary.

Business Process (or Subprocess)

The business process should be focused on meeting or exceeding the expectations of the customer. To do this, the team should first identify and describe the customer. After doing this, the team should address the following questions when planning the integration of the initiatives: What are the major subprocesses and tasks? Where is the work done? What are the process triggers and outcomes? How does this business process link to other processes? What are the primary customer impacts? In answering these questions, the Team can plan for the implementation.

Jobs and Structures

Once the redesigned business process has been established, the Team should then examine how the work will be done. Any changes to the process will affect the nature of the staff’s jobs and, potentially, how staff are grouped. The following questions should be addressed: What new jobs are created? What current positions are affected? What are our preliminary training requirements and capabilities? What are the fundamental structural implications? Will we have individual or team-oriented work? How will we reward people?

Management and Measurement Systems (Stakeholder Impact)

Changes to the business process will have implications for jobs and structures, which, in turn, may affect the management systems. Integrated processes typically give rise to multidimensional jobs. This could require that staff be organized more in teams, which could affect recruitment, evaluation, and compensation. Among the management systems that may be affected are recruitment, communication, compensation and benefits, education and training, and career development.

The management systems (how people are recruited, evaluated, and rewarded) will shape employees’ values and beliefs. These are the issues and concerns that GAO staff think are important and to which they pay significant attention. To bring about process changes, individual behaviors must change. Therefore, both current and desired values and beliefs must be identified. In particular, it is important to surface staff’s values and beliefs regarding the customer because new processes need to be focused on that customer. In understanding these issues, the Team can identify the steps required to move people from the current to the desired value system.

Some of the initiatives to date, such as the Team Ag efforts and the proposed risk assessment, can be considered reengineering. To the extent that the Team identifies further opportunities for reengineering, a more substantial methodology may be applied.
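The rollup at the heart of the activity analysis described earlier can be sketched in a few lines of code. The following Python fragment is a minimal illustration only, not part of the proposed methodology: the staff names, activities, subprocess labels, and hours are hypothetical, and an 8-hour staff day is assumed. It simply shows how logged time might be attributed to subprocesses and totaled.

    from collections import defaultdict

    # Hypothetical activity log entries: (staff member, activity, subprocess, hours).
    # In a real activity analysis, these would come from staff surveys or
    # time-reporting data, not hard-coded values.
    activity_log = [
        ("analyst_1", "draft workpapers",   "Data Collection and Analysis", 22.0),
        ("analyst_1", "await comments",     "Agency Comments",              14.0),
        ("analyst_2", "staffing paperwork", "Job Acceptance and Staffing",   9.5),
        ("analyst_2", "draft workpapers",   "Data Collection and Analysis", 18.0),
    ]

    # Attribute hours to each subprocess (the activity-driver rollup).
    hours_by_subprocess = defaultdict(float)
    for _, _, subproc, hours in activity_log:
        hours_by_subprocess[subproc] += hours

    # Convert hours to staff days (assuming an 8-hour staff day) and report,
    # largest consumer of staff time first.
    HOURS_PER_STAFF_DAY = 8.0
    for subproc, hours in sorted(hours_by_subprocess.items(), key=lambda kv: -kv[1]):
        print(f"{subproc}: {hours / HOURS_PER_STAFF_DAY:.1f} staff days")

Totals of this kind would give the project teams a baseline against which the calendar-day and staff-day savings claimed for each initiative can be checked.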
For any change initiatives to be successfully implemented in the divisions, they will require the full backing of the division ACGs. Therefore, each initiative must be championed by an ACG. It is likely that initiatives in each key area (such as job design) will be grouped together and championed by one ACG. This does not negate the role of the P&R Directors or others, who will still be heavily involved in integrating all the change initiatives. The champions will be involved in the design, the pilot, and the implementation of initiatives in their selected areas. They will also act as advocates, communicating the benefits of the reforms to others. They will be supported in this by their P&R Directors. The Process Champions, it is also hoped, will communicate informally with one another.

The P&R Directors, as the JMPO, will continue to play an active role in the project. They will be advised on the development of the initiatives as they are taken from concept to pilot and ultimately to rollout. In addition, they will support the ACGs in championing the initiatives. They will also follow up on the pilots running in their divisions and provide feedback on the success and the effectiveness of the various efforts.

The Process Owners will help assure that the process reengineering results can be implemented with a high level of confidence. They can act as the teams’ critics, spokespersons, monitors, and liaisons. In a process-oriented organization, process, not function or geography, forms the basis of organizational structure. Every process continues to need an owner to attend to its performance, especially once the process reengineering is complete.

The project teams will be responsible for successful piloting and implementation of the initiatives. They will support the core Team; their members will come from all areas of GAO, including the Job Process Reengineering Team, and thereby introduce diverse views and build ownership. Each team will be responsible for a particular subprocess, such as Job Acceptance and Staffing, Job Design, or Data Collection and Analysis. The teams initially will focus on taking the concepts developed by the JMPO and approved by the Quality Council and then designing, implementing, and integrating the pilots and projects. Teams will vary in size (3-10 members) and membership depending on the complexity of the change being identified. They will consist of both full-time and part-time members. Part-time members may include representatives from the Training Institute; issue areas involved in similar ongoing pilots; and specialty staff, such as congressional relations staff.

For those initiatives already identified by the P&R Directors, the following steps should be completed:

1. Form project team
   - Make sure everyone has the same objectives
   - Include “insiders” and “outsiders”

2. Understand work to date
   - Review “one-pagers” from JMPO
   - Review work relating to initiatives
   - Obtain briefings from key parties

3. Define attributes of the proposed initiatives
   - Objectives
   - Scope: defined boundaries (inside and outside)
   - Process triggers
   - Process outcomes
   - Process flow: graphic representation of scope from beginning to end
   - Process structure: graphic picture of scope relating processes to one another
   - Process rules: governing statements
   - Process performance targets: desired measurable outcomes

4. Design the pilot
   - Develop detailed design
   - Test and validate design with key stakeholder groups
5. Pilot the process
   - Operate the design in a pilot area
   - Measure results and refine the pilot
   - Plan for rollout
   - Roll out the new process design

The project teams should also develop detailed work plans. This is best done by first setting goals and target completion dates and then determining how to meet the goals by those dates. Teams should pay particular regard to critical program requirements, such as information technology and human resources, as well as change management. It should be noted, however, that whatever is in the plan will change!
GAO described its strategy to implement job process reforms and the roles and responsibilities of the project's key personnel. GAO noted that the objectives of the project are to: (1) implement reforms in the job management process that result in a reduction in average production time; (2) quantify the expected staff time and cost savings resulting from the process improvements; and (3) identify additional opportunities that will dramatically reduce cycle time and reduce the staff-days required to produce a product. GAO also noted that the Job Process Reengineering Team is responsible for: (1) implementing the process reforms identified by the Job Process Management Owners; (2) identifying the barriers to change and developing a change management plan; (3) developing and executing a communications plan; (4) executing an activity analysis to determine how people currently spend their time; (5) developing the transition plan and establishing performance measures; and (6) identifying and prioritizing further opportunities for process reengineering.
When it was enacted in 1998, WIA created a new, comprehensive workforce investment system designed to change the way employment and training services are delivered. Under WIA, each state designates local workforce investment areas across the state. Each local area is governed by a local workforce investment board that makes decisions about the number and location of one-stop career centers, where partner programs make their services and activities available. Local boards are required to promote employers’ participation in the workforce investment system and assist them in meeting hiring needs. Training services provided must be directly linked to occupations in demand in the local area. WIA requires states and localities to track the performance of WIA-funded activities and Labor to hold states accountable for their performance in the areas of job placement, employment retention, and earnings change. The Employment and Training Administration (ETA) oversees the High Growth, Community Based, and WIRED grant initiatives. The vast majority of these grants are awarded under a provision of WIA, which provides authority for demonstration, pilot, multiservice, research, and multistate projects, and a provision of the American Competitiveness and Workforce Improvement Act (ACWIA), which provides authority for job training grants funded by the H-1B visa program. Labor is required to conduct impact evaluations of its programs and activities carried out under WIA, including pilot and demonstration projects. While impact evaluations make it possible to isolate a program’s effect on participants’ outcomes, there are several ways to conduct them, including experimental and quasi-experimental methods. In 2004 and 2007, GAO recommended that Labor comply with WIA requirements to conduct an impact evaluation of WIA services to determine what services are most effective for improving employment-related outcomes. Labor agreed with our recommendation. In December 2007, the agency announced it had begun a quasi-experimental evaluation—an impact evaluation that does not use a control group—of the WIA Adult and Dislocated Worker programs, with a final report expected in November 2008.

Federal law recommends, but does not require, that all grants be awarded through competition. The Federal Grant and Cooperative Agreement Act encourages competition in grant programs, where appropriate, to ensure that the best possible projects are funded. In addition, Labor’s own guidance governing procurement and grant operations—the Department of Labor Manual Series—states that competition is recommended unless one or more of eight exceptions apply. Further, a guide on improving grant accountability developed by the Domestic Working Group Grant Accountability Project recommends that grants be awarded competitively because competition facilitates accountability, promotes fairness and openness, and increases assurance that grantees have the systems in place to efficiently and effectively use funds to meet grant goals.

Effective monitoring is also a critical component of grant management. The Domestic Working Group’s suggested grant practices state that financial and performance monitoring is important to ensure accountability and the attainment of performance goals. Labor monitors most grants through a risk-based strategy based on its Core Monitoring Guide. A key goal is to determine compliance with specific program requirements.
In addition, entities receiving Labor grants are subject to the provisions of the Single Audit Act if certain conditions are met. A single audit is an organization-wide audit that covers, among other things, the recipient’s internal controls and its compliance with applicable provisions of laws, regulations, contracts, and grants.

According to Labor officials, the grant initiatives are designed to change the focus of the public workforce system to emphasize the employment and training needs of high-growth, high-demand industries, but Labor will be challenged in assessing their impact. For the three grant initiatives, Labor awarded 349 grants totaling almost $900 million that were intended to bring about this change by identifying the workforce and training needs of growing, high-demand industries; engaging workforce, industry, and educational partners to develop innovative solutions to workforce challenges, such as worker shortages; leveraging a wide array of resources to fund the solutions; and integrating workforce and economic development to transform regional economies by creating good jobs. However, 7 years after awarding the first grant, Labor will be challenged to evaluate the effect of the grants. We recommended that Labor take steps to ensure that it could do so, but its response to our recommendation suggests that conditions remain much as they were when we did our audit work.

According to Labor officials, the High Growth, Community Based, and WIRED initiatives are designed to collectively change the focus of the workforce investment system by giving greater emphasis to the employment and training needs of high-growth, high-demand industries. They characterized High Growth as a systematic change initiative designed to make the system more demand-driven (i.e., focused on the needs of growing and high-demand industries) and to make the system’s approach to workforce development more strategic by engaging business, industry, and education partners to identify workforce challenges and solutions. As a related effort, the Community Based grants were designed to build the training capacity of community colleges for high-growth, high-demand occupations. The goal of the third grant initiative, WIRED, was to “catalyze” the creation of high-skill and high-wage opportunities for workers within the context of regional economies, to test models for integrating workforce and economic development, and to demonstrate that workforce development is a key driver in transforming regional economies. From 2001 through 2007, Labor awarded 349 grants totaling almost $900 million for these initiatives (see table 1).

Labor officials said a number of indicators show that the initiatives are changing the system. According to Labor officials, they have seen a “system-shift” in the approach to implementing workforce solutions through an increase in demand-driven topics at conferences since the rollout of the initiatives. Labor said this shift has been driven by partnerships between the workforce investment system, business, industry, and educators using the High Growth framework. Labor also said it is seeing demand-driven strategies in state and local strategic plans and in states using their own money to fund High Growth-like projects. Labor pointed out that the system has evolved to the point where high-performing local workforce boards with demand-driven practices are mentoring lower-performing ones. Lastly, Labor said the content on its Web site, Workforce3 One, was also evidence of change.
For example, Labor held an interactive seminar broadcast on this site to train participants to use an online tool to share curricula developed through the initiatives. However, experts identified a number of challenges states face in pursuing demand-driven practices. These included insufficient funding, limited flexibility in how funds can be used, statutory requirements to target services to certain groups of workers, and the need to respond to local economic conditions. Commenting on workforce boards’ ability to form strategic partnerships, one expert noted that there are no funds to support such endeavors and no performance standards to measure them. With regard to regional economic development, experts said that boards are structured around local areas, not regions; that regional economies are highly variable; that regional governance structures can make achieving buy-in difficult; and that rural areas can be particularly challenged in pursuing regional approaches.

Despite the money invested and emphasis placed on these initiatives, Labor did not fully integrate them into its strategic plan or ETA’s research plan from the start. The Government Performance and Results Act states that strategic plans shall contain strategic goals and objectives, including outcome-related, or performance, goals and objectives for an agency’s major functions and operations. However, the strategic plan includes performance goals only for the Community Based initiative. High Growth and WIRED—the two initiatives on which Labor spent the most money—are mentioned in the strategic plan but not specifically linked to a performance goal; therefore, it is unclear what criteria Labor will use to evaluate their effectiveness. Moreover, the data needed to assess the performance of these initiatives are not specified. Labor officials said the strategic plan did not address the initiatives because it focuses on budget issues. Just as the initiatives are not fully integrated into the strategic plan, neither are they fully integrated into ETA’s research plan, which cites plans for future evaluations but does not specify an assessment of their impact. In responding to recommendations made in our May 2008 report, Labor said only that it would consider inclusion of the initiatives in its next 5-year research agenda, due for revision in 2009.

Not fully incorporating the initiatives into its strategic or research plans may have limited Labor’s ability to collect consistent outcome data. Labor said that, prior to 2005, it consistently collected data from grantees on the number of participants enrolled in and completing training funded under High Growth—the only one of the three grant initiatives operating at that time. However, it did not collect performance outcomes similar to those being collected for its other training and employment services. Labor will face challenges in obtaining the data necessary to make meaningful comparisons. In 2005, Labor instituted what were called common measures to assess the effectiveness of one-stop programs and services. The common measures include participant employment outcomes, earnings, and job retention after receiving services. At the time, Labor could not require High Growth and Community Based grantees to provide data on the common measures because it did not have Office of Management and Budget (OMB) approval.
In anticipation of OMB approval, starting in 2006, Labor included information on the common measures in all new solicitations for High Growth and Community Based grants, notified grantees of its goal for standardizing performance reporting, and provided technical assistance to help grantees prepare for it. Labor also encouraged grantees to work with local workforce system partners to leverage their experience in tracking and reporting performance outcomes. According to Labor, it has an OMB-approved reporting format in place and expects data collection to begin in early program year 2008. However, because some of the first grantees have already completed their projects, obtaining information about workers who have left the program may prove difficult and costly. According to Labor, it can collect common measures for WIRED grantees, but it has not yet done so. As a result, Labor may not have consistent data for individuals participating in the programs funded under the grant initiatives. In addition, it may lack data that will allow it to compare outcomes for individuals served by grant-funded programs with those served by employment and training programs offered through the one-stop system. Having comparable outcome data is important because the goal of an impact evaluation is to determine if outcomes are attributable to a program or can be explained by other factors.

Labor has some plans underway to evaluate the initiatives but may face challenges drawing strong conclusions from them. Labor has conducted an evaluation of the implementation and sustainability of 20 early High Growth grantees. It is now evaluating the impact of the training provided by High Growth grantees. Labor anticipated the final report in December 2008 but now expects it in spring 2009. Labor experienced a number of challenges in evaluating the initiatives. These include having to limit its evaluation to only 6 of 166 grantees because only 6 had sufficient participants to ensure statistically significant results. They also include problems gaining access to workers’ earnings data and inconsistent outcome data from grantees. Labor officials said they plan to conduct a comprehensive evaluation of the Community Based initiative. The first phase of the evaluation will examine the extent to which the Community Based grants addressed the stated workforce objectives and challenges that funded projects were intended to address, as well as document the role of business and the workforce investment system in the overall success of the grants, according to Labor. This phase will also include an examination of the feasibility of performing an impact evaluation and will be completed in late 2008. Depending on the results of this phase, Labor officials said an impact evaluation will begin in 2009. For its evaluation of the WIRED initiative, Labor says it is examining the implementation and cumulative effects of WIRED strategies, including change in the number and size of companies in targeted high-growth industries and whether new training led to job placement in the targeted industries. It contracted with Berkeley Policy Associates to conduct the evaluation for the first 13 grantees, and a final report is expected by June 2010. It also contracted with Public Policy Associates to similarly evaluate the 28 remaining WIRED grantees. Labor officials said these initiatives are not included in the agency’s broader WIA impact study.
According to Labor, none of the three initiatives is considered to be a research project or designed to compare participant outcomes with the participant outcomes achieved under WIA. Labor said it does not plan to include them in the assessment of the impact of WIA services because the initiatives have their own independent evaluations.

While Labor now awards grants under all three grant initiatives competitively, initially almost all High Growth grants were awarded without competition. Labor also did not document the criteria for selecting noncompetitive High Growth grants or whether they met Labor’s internal requirements or the requirements of the laws under which the grants are authorized. In response to recommendations we made in our May 2008 report, Labor said it had modified its noncompetitive process so it now includes documentation of statutory program requirements. We have not evaluated the sufficiency of the modified forms for ensuring statutory compliance. Another issue with the process was that meetings Labor held to identify workforce solutions did not include most of the state and local workforce investment boards.

The Community Based and the WIRED grants have always been awarded through a competitive process but, until 2005, Labor did not award High Growth grants competitively even though federal law and Labor’s internal procedures recommend competition. While Labor had discretion in awarding High Growth grants without competition, the extent to which it did so raises questions about how Labor used this method of awarding grants. Competition facilitates accountability, promotes fairness and openness, and increases assurance that grantees have systems in place to meet grant goals. Yet Labor chose to award 83 percent of the High Growth grants, which represented almost 90 percent of the funds, without competition between fiscal years 2001 and 2007 (see table 2). Congress required that High Growth grants funded by H-1B fees be awarded competitively for fiscal years 2007 and 2008. Prior to that time, there were no provisions requiring Labor to award High Growth grants competitively. Labor officials said that they used a noncompetitive process to promote innovation. They also said that they awarded grants without competition to save the time it would have taken to solicit grants competitively. In hindsight, they said they could have offered the High Growth grants competitively earlier because they recognized that the number of noncompetitive awards created a perception that the process was unfair. They said, however, that they always intended to award later grants competitively.

In contrast to the High Growth grants, the Community Based and WIRED initiatives have always been awarded through competition. These funding opportunities were announced to potential applicants through solicitations for grant applications that listed the information an application must include to compete for funding. These applications were then reviewed and scored by a knowledgeable technical panel. The solicitations were also reviewed by Labor attorneys for compliance with procurement and statutory program requirements for awarding grants, according to officials. Because the initial High Growth process was noncompetitive, documenting the decision steps was all the more important to ensure transparency. However, Labor was unable to provide documentation of the initial criteria for selecting grantees.
As a result, it did not meet federal internal control standards, which state that all transactions and other significant events need to be clearly documented and that the documentation should be readily available for examination. In addition, it was unable to document that it met the statutory requirements of the laws authorizing the grants. Finally, according to Labor’s Inspector General, it did not adequately document that it had followed its own procedures for awarding grants without competition.

Labor did not document the criteria used to select the early noncompetitive High Growth projects. Labor officials told us there were no official published guidelines specific to High Growth grants, only draft guidelines, which were no longer available. In addition, Labor officials told us that generally they were looking for grantees that pursued partnerships and leveraged resources, but that the attributes they sought changed over time. Labor published general requirements for noncompetitive grants in 2005 and updated them in 2007. Officials said these were not requirements, only guidelines for the kinds of information Labor would find valuable in evaluating proposals. In addition, while Labor said that it had discretion to award High Growth grants noncompetitively under the WIA provision authorizing demonstrations and pilot projects and under ACWIA before 2007, it could not document that the grants fully complied with the requirements of these provisions. For example, the WIA requirements include providing direct services to individuals, including an evaluative component, and being awarded to private entities with recognized expertise or to state and local entities with expertise in operating or overseeing workforce investment programs. Officials said that they were certain they had ensured that the projects met all statutory requirements but acknowledged they did not document that the requirements were met.

Labor’s Inspector General found the agency did not always document that it followed its own procedures or always obtain required review and approval before awarding grants noncompetitively. Labor officials said most of the noncompetitive grant proposals were presented to Labor’s Procurement Review Board for review and approval, as allowed under exceptions for proposals that were unique or innovative, highly cost-effective, or available from only one source. However, in 2007, Labor’s Inspector General reviewed a sample of the noncompetitive High Growth grants awarded between July 2001 and March 2007 and found that 6 of the 26 grants that should have undergone review were awarded without prior approval from the review board. Furthermore, the Inspector General found that Labor could not demonstrate that proper procedures were followed in awarding the High Growth grants without competition. Although they were unable to provide documentation, Labor officials said they used considerable rigor in selecting grant recipients under the noncompetitive process. As in a competitive process, the noncompetitive grant proposals were highly scrutinized and reviewed to ensure they made the best use of scarce resources. They said that, in most cases, staff created abstracts to highlight strengths and weaknesses, and multiple staff and managers participated in reviews and decision-making. In addition, Labor officials strongly disagreed with the majority of the Inspector General’s findings. They said they followed established procurement practices as required but agreed that additional documentation would be valuable.
In response to the Inspector General’s report, Labor took steps to strengthen the noncompetitive process. These included developing procedures to review noncompetitive grant proposals against criteria that include support of at least one of ETA’s strategic goals and investment priorities. The procedures also require ETA to document that required procedures are followed and that required review and approval are obtained before grants are awarded noncompetitively. However, the newly developed procedures did not explicitly identify the statutory program requirements for which compliance should be documented. In response to a recommendation in our May 2008 report, Labor provided modified forms used in the noncompetitive process to include statutory program requirements and said that grant officers and program officials must confirm that the proposed grant is in compliance with these requirements. We have not evaluated the sufficiency of the modified forms for ensuring statutory compliance or reviewed how grant and program officers confirm compliance using the forms.

The vast majority of workforce boards—which oversee the workforce investment system—were not included in the meetings that served as incubators for grant proposals. After identifying 13 high-growth/high-demand sectors, Labor held a series of meetings between 2002 and 2005 with industry executives and other stakeholders to identify workforce challenges and to develop solutions to them. According to Labor, it first held meetings with industry executives—executive forums—for the 13 sectors to hear directly from industry leaders about the growth potential for their industries and to understand the workforce challenges they faced. Second, it hosted a series of workforce solutions forums for 11 of the sectors, which brought together industry executives (often those engaged in human resources and training activities) with representatives from education, state and local workforce boards, or other workforce-related agencies. However, a review of Labor’s rosters for the solutions forums shows that while there were more than 800 participants, only 26 of the almost 650 local workforce boards nationwide were represented, and these came from 15 states. (See fig. 1.) Further, only 20 of the 50 states had their state workforce investment board or other agency represented (see table 3).

Labor officials said they went to great lengths to include workforce system participants in solutions forums. Officials said they asked state workforce agencies to identify a state coordinator to interface with Labor, work collaboratively with industry partners, and identify potential attendees for executive and solutions forums. Further, the state coordinators were to help Labor communicate with the workforce system about High Growth activities and were kept updated through routine conference calls and periodic in-person meetings, according to Labor. Labor officials also said the Assistant Secretary and other senior officials traveled frequently, speaking to workforce system partners at conferences to gather information about innovative practices. Labor officials said, even with these efforts, they found only a few workforce boards operating unique or innovative demand-driven programs. However, most workforce board officials we spoke to in our site visits reported becoming aware of the meetings and the grant opportunities after the fact, even though they were pursuing the kinds of innovative practices the meetings were supposed to promote.
Some state board officials said that they were often unaware that grants had been awarded, and at least one local workforce board said it became aware of a grant only when the community college grantee approached it for assistance in getting enough students for its program. In addition, officials in states we visited said they had been developing and using the types of practices that Labor was seeking to promote at the meetings. Being present at the meetings could have been beneficial to workforce boards. Labor officials acknowledged that when meeting participants suggested a solution to an employment challenge that officials deemed innovative and meritorious, officials encouraged the participants to submit a proposal for a grant to model the solution. In addition, officials said that, in some cases, they provided applicants additional assistance to increase the chances that the proposal would be funded.

For all three grant initiatives, Labor has a process to resolve findings from single audits, collects quarterly performance information, and provides technical assistance as part of its monitoring. In addition, it has a risk-based monitoring approach for High Growth and Community Based grants. When we conducted our audit work, there was no risk-based monitoring approach for WIRED. In response to our recommendation, Labor has documented steps it has taken to put a monitoring approach in place for WIRED grants.

Labor said it has a process to work with grantees, including High Growth, Community Based, and WIRED grantees, to resolve findings in single audits. However, Labor’s Inspector General reported that Labor does not have procedures in place for grant officers to follow up with grantees with past-due audit reports to ensure timely submission and thus proper oversight and correction of audit findings. The Inspector General recommended that Labor implement such procedures, and Labor has done so, but the finding remains open because Labor’s Inspector General has not yet determined if the procedures adequately address the recommendation.

As part of its monitoring, Labor requires High Growth, Community Based, and WIRED grantees to submit quarterly financial and performance reports. Financial reports contain information such as the total amount of grant funds spent and the amount of matching funds provided by the grantee. Performance reports focus on activities leading to performance goals, such as grantee accomplishments and challenges to meeting grant goals. Labor officials said they review these reports and follow up with grantees if there are questions. Labor officials acknowledge, however, that they are still working to ensure the consistency of performance reports provided by High Growth and Community Based grantees and are working with OMB to establish consistent reporting requirements. In addition, while the finding was not specific to these three grant initiatives, Labor’s Inspector General cited high error rates in grantee performance data as a management challenge. Labor is taking steps to improve grant accountability, such as providing grantee and grant officer training.

All grantees receive technical assistance from Labor on how to comply with laws and regulations, program guidance, and grant conditions. For example, Labor issued guides for High Growth and Community Based grantees that include information on allowable costs and reporting requirements.
In addition, Labor officials said they trained national and regional office staff to address grantees’ questions and help High Growth and Community Based grantees obtain assistance from experts at Labor and other grantees. Labor officials said they hold national and regional orientation sessions for new High Growth and Community Based grantees; present technical assistance webinars and training sessions focused on specific high-growth industries; assist grantees with disseminating grant results and products, such as curricula; and set up virtual networking groups of High Growth grantees to encourage collaboration. Labor officials told us they have teams that provide technical assistance to each WIRED grantee, including weekly contact. During these sessions, Labor staff work with WIRED grantees on grant management issues, such as costs that are allowed using grant funds. Labor staff provide additional assistance through conference calls, site visits, and documentation reviews. In addition, Labor officials said they have held five webinars on allowable costs and provided grantees with a paper on allowable costs in July 2006, which was updated in July 2007. Finally, Labor officials explained that they made annual site visits to the first 13 WIRED grantees in spring and summer of 2007 to discuss implementation plans and progress toward plan goals. In addition, Labor staff said they have reviewed the implementation of the remaining WIRED grants to ensure that planned activities comply with requirements of the law. However, none of these reviews resulted in written reports with findings and corrective action plans.

Labor has spent $16 million on contracts to provide technical assistance; improve grant management, administration, and monitoring; and assist Labor with tasks such as holding grantee training conferences. The largest of these contracts focus on providing technical assistance to WIRED grantees. For example, one contract valued at over $2 million provides WIRED grantees assistance with assessing regional strengths and weaknesses and developing regional economic strategies and implementation plans. Another contract, valued at almost $4 million, provides a database and geographic information system that WIRED grantees can use to facilitate data analysis and reporting, among other things. While these monitoring and technical assistance efforts are useful in helping grantees manage their grants, they do not provide a risk-based monitoring process to identify and resolve problems, such as compliance issues, in a consistent and timely manner.

Labor uses a risk-based strategy to monitor the High Growth and Community Based grant initiatives. For these initiatives, it selects grantees to monitor based on indications of problems that may affect grant performance. Labor’s risk-based approach to monitoring most grants reflects suggested grant practices, which recognize that it is important to identify, prioritize, and manage potentially at-risk grant recipients for monitoring, given the large number of grants awarded by federal agencies. Through this process, Labor staff determine whether grantee administration and program delivery systems are functioning, whether the grantee is in compliance with program requirements, and whether the information reported is accurate. Labor’s risk-based monitoring strategy involves conducting site visits based on grantees’ assessed risk levels and the availability of resources, among other things.
These site visits include written assessments of grantees’ management and performance, compliance findings, and requirements for corrective action. For example, Labor’s site visit guide includes questions about financial and performance data reporting systems, such as how well the grantee maintains files on program participants. Labor has monitored about half of the High Growth grants and over one-quarter of the Community Based grants. Labor officials said these monitoring efforts have resulted in a number of significant findings, which have generally been resolved in a timely manner. (See table 4.) For example, during a November 2006 site visit of a Community Based grantee, Labor identified three findings: incomplete participant files, failure to follow internal procurement procedures, and missing grant partnership agreements. Similarly, during a site visit in spring 2006 to a High Growth grantee, Labor found that the grantee did not accurately track participant information and reported incorrect information on expenditures, among other things. As of September 2007, Labor said these findings had been resolved (see table 4).

As another part of Labor’s risk-based monitoring strategy, Labor’s internal requirements specify that Labor staff are to make site visits to all new grantees, including High Growth, Community Based, and WIRED, within 12 months of beginning grant activity and to new grantees rated as “at risk” within 3 months. Labor officials said they consider “new grantee” site visits to be orientation visits and had not made visits to most new grantees. They said they broadly interpret this requirement to include a variety of methods of contact and generally use teleconference and video conference training sessions rather than site visits, based on the availability of resources. For example, Labor calls each new Community Based grantee to schedule new grantee training. Labor is taking steps to update its internal requirements to better reflect the purpose of the new grantee monitoring.

According to Labor, in response to a recommendation we made in our May 2008 report, it has initiated the process for monitoring the financial and administrative requirements of the WIRED grants. Labor says it developed a WIRED Supplement to the Core Monitoring Guide, which it is using to conduct reviews of WIRED grants. Labor also stated that it is developing a schedule of reviews that will provide for the monitoring of the initial WIRED grants prior to September 30, 2008, to be followed by reviews of the remaining WIRED grants. Labor said the monitoring reviews are being conducted by four teams of ETA staff, consisting of experienced Regional Office financial staff, National Office staff, and the Federal Project Officers assigned to the grant. All of the teams have been provided training to maximize the results of the initial review. ETA will use standard procedures for the issuance and resolution of any monitoring report issues. While Labor has said it has taken steps to implement our recommendation on documentation and monitoring, we have not assessed the sufficiency of those efforts. Labor has said it is taking steps to ensure that it can evaluate the impact of the initiatives, and this is an area that warrants continued oversight.

Madam Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information regarding this testimony, please contact me at (202) 512-7215.
Individuals making key contributions to this testimony include Patrick di Battista, Julianne Hartman Cutts, Karen A. Brown, and Nancy Purvine, Senior Analysts, and Stephanie Toby, Analyst. Jean McSween provided methodological assistance, and Jessica Botsford provided legal assistance. The team also benefited from key technical assistance from Susan Aschoff, Pat L. Bohan, Paul Caban, Jessica Orr, Michael Springer, and Charles Willson.

Workforce Investment Act: Additional Actions Would Improve the Workforce System. GAO-07-1061T. Washington, D.C.: June 28, 2007.

Veterans’ Employment and Training Service: Labor Could Improve Information on Reemployment Services, Outcomes, and Program Impact. GAO-07-594. Washington, D.C.: May 24, 2007.

Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006.

National Emergency Grants: Labor Has Improved Its Grant Award Timeliness and Data Collection, but Further Steps Can Improve Process. GAO-06-870. Washington, D.C.: September 5, 2006.

Discretionary Grants: Further Tightening of Education’s Procedures for Making Awards Could Improve Transparency and Accountability. GAO-06-268. Washington, D.C.: February 21, 2006.

Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005.

Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005.

Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005.

Workforce Investment Act: Employers Are Aware of, Using, and Satisfied with One-Stop Services, but More Data Could Help Labor Better Address Employers’ Needs. GAO-05-259. Washington, D.C.: February 18, 2005.

Public Community Colleges and Technical Schools: Most Schools Use Both Credit and Noncredit Programs for Workforce Development. GAO-05-4. Washington, D.C.: October 18, 2004.

Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.

National Emergency Grants: Labor Is Instituting Changes to Assess Performance, but Labor Could Do More to Help. GAO-04-496. Washington, D.C.: April 16, 2004.

Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004.

Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003.

Workforce Training: Employed Worker Programs Focus on Business Needs, but Revised Performance Measures Could Improve Access for Some Workers. GAO-03-353. Washington, D.C.: February 14, 2003.
Since 2001, Labor has spent nearly $900 million on the High Growth Job Training Initiative (High Growth), the Community-Based Job Training Initiative (Community Based), and the Workforce Innovation in Regional Economic Development (WIRED) initiative. This testimony addresses (1) the intent of the grant initiatives and the extent to which Labor will be able to assess their effects; (2) the extent to which the process used competition, was adequately documented, and included key players; and (3) what Labor is doing to monitor individual grantee compliance with grant requirements. This testimony is based on GAO's May 2008 report (GAO-08-486) and additional information provided by the agency in response to the report's recommendations. For that report, GAO reviewed Labor's strategic plan, documents related to evaluations of the initiatives, internal procedures for awarding grants, relevant laws, and monitoring procedures, and conducted interviews.

According to Labor officials, the grant initiatives were designed to shift the focus of the public workforce system toward the training and employment needs of high-growth, in-demand industries, but Labor will be challenged to assess their impact. Under the initiatives, Labor awarded 349 grants totaling almost $900 million to foster this change. However, the grant initiatives were not fully integrated into Labor's strategic plan or overall research agenda, so it is unclear what criteria Labor will use to evaluate their effectiveness. Labor lacks data that will allow it to compare outcomes for grant-funded services with those of other federally funded employment and training services. GAO recommended that Labor take steps to ensure that it could evaluate the initiatives' impact, but Labor's response to the recommendation suggests that conditions remain much as they were when GAO did its audit work.

While grants under all three initiatives are now awarded competitively, the initial noncompetitive process for High Growth grants was not adequately documented. Community Based and WIRED grants have always been awarded competitively, but more than 80 percent of High Growth grants were awarded without competition. Labor could not document the criteria used to select the noncompetitive High Growth grants or whether these grants met internal or statutory requirements. In response to the report recommendation, Labor modified review forms used in its noncompetitive process to include documentation of statutory requirements; however, GAO has not evaluated the sufficiency of these changes. Another issue related to the process was that meetings Labor held to identify solutions for industry workforce challenges did not include the vast majority of local workforce investment boards.

Labor provides some monitoring for grantees under all three initiatives and uses a risk-based monitoring approach for the High Growth and Community Based grants. However, when GAO conducted its audit work there was no risk-based monitoring approach for WIRED, and GAO therefore recommended that Labor establish one. In response to the report recommendation, Labor documented steps it has taken to put a monitoring approach in place for WIRED grants. GAO has not reviewed the sufficiency of these steps.
There are three main types of space launches—national security, civil, and commercial. National security launches are conducted by DOD for defense purposes, and civil launches are conducted by NASA for scientific and exploratory purposes. Many commercial launches are internationally competed and carry payloads, such as satellites, that generate revenue. In 1984, the Commercial Space Launch Act required the Secretary of Transportation to “encourage, facilitate, and promote commercial space launches by the private sector.” At that time, the U.S. government was the sole entity launching civil and commercial payloads into orbit from the United States. However, as a result of the Space Shuttle Challenger accident in January 1986, the U.S. government transferred responsibilities for commercial payload launches to the private sector. Space launches by private sector companies grew as U.S. commercial launch companies responded to the increase in global demand for commercial satellite launch services in the mid-1990s. Nonetheless, the demand for commercial space launches has generally declined since a downturn at the beginning of this century in the business of the industry’s primary commercial customer, the telecommunications services industry. As shown in figure 1, the total number of U.S. and worldwide commercial orbital launches has declined from a peak of 41 launches in 1998.

The U.S. commercial space launch industry, comprising a few launch companies, has historically used federal sites to launch satellites using expendable vehicles, which are designed to be launched once. According to FAA, the launch vehicle manufacturing and services sector of the commercial space industry had $1.7 billion in economic impact on the U.S. economy for 2004, with the greatest economic impacts to enabling industry sectors such as satellite manufacturing and services. (See app. II for more information on the industry and its economic impact.)

The commercial space launch industry is changing with the emergence of suborbital reusable launch vehicles that enable space tourism from state-sponsored or private launch sites, known as spaceports. (See fig. 2 for examples of expendable and reusable launch vehicles.) The prospect of commercial space tourism materialized in 2004 when SpaceShipOne, developed by Scaled Composites, flew to space twice, achieving a peak altitude of about 70 miles to win the Ansari X Prize. Several entrepreneurial launch companies are planning to start taking paying passengers, also known as space flight participants, on suborbital flights within the next few years. Virgin Galactic intends to enter commercial suborbital space flight service around 2009, launching from a spaceport in New Mexico, and, according to the company, plans to carry 3,000 passengers over the subsequent 5 years, with 100 individuals having already paid the full fare of $200,000. In addition, 4 individuals have already paid an estimated $20 million each for space flights to the International Space Station on a Russian vehicle that launches from Kazakhstan. According to a Futron Corporation market study on space tourism, the orbital and suborbital space tourism market could attract up to 15,000 passengers and generate revenues in excess of $1 billion per year by 2021, with suborbital space tourism likely generating the most demand. Several other companies in the United States and elsewhere, including former Ansari X Prize competitors, continue to develop their vehicles for space tourism.
For example, Russia is developing a reusable launch vehicle, Cosmopolis 21, for space tourism flights. Spaceports are being developed to accommodate anticipated commercial space tourism flights and are expanding the nation’s launch capacity. As of August 2006, the United States had five federal launch sites and six spaceports with an FAA launch site operator’s license, and an additional eight spaceports had been proposed (see fig. 3). Although their individual capabilities and level of infrastructure development vary, these facilities may house launch pads and runways as well as the buildings, equipment, and fuels needed to prepare vehicles and payloads before launch. The spaceports are operated by state or local governments and authorities and by private entities. These spaceports also face competition from abroad. Space Adventures, a U.S.-based space tourism broker, in partnership with other investors, plans to develop a $115 million spaceport near Changi Airport in Singapore and a $265 million spaceport in Ras Al-Khaimah near Dubai in the United Arab Emirates.

Several federal agencies regulate and support the commercial space launch industry. FAA oversees the safety of all commercial launches—both expendable and reusable launch vehicles from federal launch sites and spaceports—through its licensing, compliance monitoring, and safety inspection activities. FAA licenses launches to ensure the health and safety of the public and the safety of property. The agency licenses all commercial launches that take place in the United States. In addition, it licenses all overseas launches by U.S. citizens or companies. FAA generally does not license launches by the U.S. government, nor does it license the operation of federal launch sites. In issuing launch and launch-site operator licenses, FAA does not certify the launch vehicle as safe; in contrast, FAA’s Office of Aviation Safety provides initial certification of aircraft and periodically inspects an aircraft and certifies it as safe to fly. FAA can also issue experimental permits for launches of reusable vehicles conducted for research and development, for demonstrations of compliance with licensing requirements, or for crew training before obtaining a license. During commercial launches, FAA aerospace engineers are on-site to monitor licensees’ compliance with license and permit requirements. For the 179 commercial launches conducted between March 1989 and August 2006, FAA issued licenses covering 63 launch vehicles and six spaceports. In addition, FAA has issued one experimental permit for a suborbital reusable vehicle. (See app. III for more information about FAA’s launch licensing process.) Furthermore, FAA is responsible for promoting the industry, which the agency said it accomplishes by sponsoring an annual industry forecast conference, publishing industry studies, and conducting outreach to potential launch companies. FAA also consults with industry through its advisory committee, the Commercial Space Transportation Advisory Committee, which provides advice and recommendations to the FAA Administrator. This advisory committee has working groups comprising industry representatives who consult on reusable launch vehicle development and launch operations and support, among other commercial space subjects.

Other federal agencies support the commercial space launch industry to varying degrees. DOD provides guidance and safety oversight for government and commercial launches at federal launch sites.
The Air Force also operates the government’s two primary commercial launch sites—Cape Canaveral Air Force Station in Florida and Vandenberg Air Force Base in California—and provides infrastructure and operations support. In addition, the Department of the Army operates launch sites at the White Sands Missile Range in New Mexico and at the Ronald Reagan Ballistic Missile Test Site in the Marshall Islands. Commercial launches at federal launch sites occur when “excess capacity” is available and the launch company reimburses DOD for the direct use of government services. Three U.S. companies—Boeing, Lockheed Martin, and Orbital Sciences—have been the primary commercial users of DOD launch facilities. In support of its mission to have assured access to space, DOD also has supported the industry through investments in the design and development of small, medium, and heavy lift launch vehicles, which have been used for both government and commercial launches.

Under the Technology Administration Act of 1998, Commerce is to serve as an advocate for the commercial space industry. Its Office of Space Commercialization, established in 1988 within the Office of the Secretary of Commerce and now located within the National Oceanic and Atmospheric Administration (NOAA), is responsible for promoting commercial investment in the industry by, among other activities, collecting and disseminating information on space markets; conducting workshops on commercial space opportunities; promoting space-related exports; and seeking the removal of legal, policy, and institutional impediments to space activities. Commerce’s International Trade Administration also promotes the commercial space industry in matters concerning international trade through such activities as trade events, advocacy programs, and the development of policies to further U.S. industry competitiveness. In addition, Commerce regulates the export of space technologies that are considered dual-use items—that is, items with military and civilian uses—and is a customer of satellite launches.

NASA’s support for the commercial space launch industry includes (1) providing infrastructure and range support from its Wallops Flight Facility in Virginia and radar support for commercial launches from DOD launch sites and (2) encouraging private sector investment in NASA launches and other activities. Since the 1985 National Aeronautics and Space Administration Authorization Act, Congress has required NASA to “seek and encourage, to the maximum extent possible, the fullest commercial use of space.” In January 2004, the President announced the Vision for U.S. Space Exploration, which directed NASA to pursue commercial opportunities for providing transportation and other services supporting the International Space Station and exploration missions beyond low-Earth orbit. Congress supported this direction in the NASA Authorization Act of 2005 by requiring NASA to develop a commercialization plan that (1) identifies opportunities for the private sector to participate in NASA missions and activities in space and (2) emphasizes the use of advancements made by the private sector in developing launch vehicles. One such opportunity is NASA’s Commercial Orbital Transportation Services demonstration program, for which NASA solicited proposals from private industry in March 2006 to demonstrate cargo and crew space transportation to low-Earth orbit and awarded two contracts in August 2006.

Other federal agencies support commercial launches in various ways.
DHS’s Transportation Security Administration (TSA) is responsible for security policy, compliance, and related issues for commercial space transportation. TSA also is responsible for establishing national standards for transportation and infrastructure security for commercial space transportation. The Department of State ensures that domestic space policies support U.S. foreign policy objectives and international commitments. State also regulates the export of space technology and represents the United States on the United Nations Committee on the Peaceful Uses of Outer Space. The White House Office of Science and Technology Policy (OSTP) and the National Security Council (NSC) develop and manage commercial space launch policymaking by mediating among federal agencies and reporting to the President on space policy issues, among other duties. The Office of the U.S. Trade Representative (USTR) negotiates and monitors commercial space launch industry trade agreements as needed. (See fig. 4 for a summary of federal agencies’ roles and responsibilities.) FAA has met its safety performance goal of no fatalities, serious injuries, or significant property damage to the public; however, the Air Force’s oversight of its launch sites has contributed to this achievement. FAA’s oversight of launches includes the use of a system safety process in its licensing and monitoring process and incorporation of management controls, which we have reported to be effective means of providing safety oversight and program management. FAA has met its annual performance goal to have no fatalities, serious injuries, or significant property damage to the public during licensed space launches and reentries since establishing this goal in 2003. Moreover, according to FAA, none of the 179 commercial launches that occurred between March 1989 and August 2006 resulted in casualties or substantial property damage. Of these 179 launches, FAA had joint oversight responsibility with other federal agencies for 152 (about 85 percent) and sole responsibility for 27 (about 15 percent) that included sea launches and the launches of SpaceShipOne from Mojave Spaceport. FAA shared responsibility with the Air Force for 132 launches at Air Force launch sites and with NASA, the Army, or foreign governments for 20 launches at NASA’s Wallops Flight Facility in Virginia, the Army’s White Sands Missile Range in New Mexico, and other facilities. Thus, the majority of commercial space launches during this period took place at Air Force launch sites where the Air Force had primary responsibility for safety oversight. We discuss later in this report the challenges that FAA faces in the future in assuming sole responsibility for launch safety oversight at spaceports. FAA incorporates a system safety process in its oversight of commercial launches by requiring the launch company to use system safety in the development and operation of its vehicle and in applying system safety methodologies to calculate the risk posed by a launch. As we have reported, a system safety process is an effective evaluative method of identifying and mitigating risks. Specifically, system safety relies on the application of technical and managerial skills to identify, analyze, and control hazards and risks. An objective of a system safety process is to identify hazard trends to spot and correct problems at their root cause before an incident occurs. 
During the licensing process, the launch company is responsible for system safety by demonstrating that it has assessed all hazards and risks posed by its launch operations and has proposed how to mitigate them. The assessment focuses on safety-critical systems, such as a vehicle’s main structure, propulsion system, and flight safety systems, whose performance or reliability can affect public safety and the safety of property. Through the development of a system safety program plan, the launch company applicant demonstrates that the proposed vehicle design and operations satisfy regulatory requirements and that the system is capable of performing safely during all flight phases, including launch and reentry. The plan describes the strategy by which recognized and accepted safety standards and requirements, including organizational responsibilities, resources, methods of accomplishment, milestones, and levels of effort, are to be tailored and integrated with other system engineering functions. FAA consults with an applicant early in its launch vehicle development to help the applicant understand what must be included in the system safety program plan. In addition, FAA reviews the final system safety program plan as part of its safety review of the license application.

Another way in which FAA incorporates system safety in its oversight of commercial launches is by conducting a risk analysis for each launch. FAA calculates, for each launch, the expected average number of casualties (deaths or serious injuries) to the public from debris hazards in the proposed flight path. The acceptable risk level—no more than 30 per million for the public and no more than 1 per million for an individual—is consistent with the launch standards used at federal launch sites. According to FAA, the risk to the public from commercial launches should not exceed “normal background risk”—that is, no greater risk than is voluntarily accepted in the course of normal day-to-day activities. For licensing launch-site operations, FAA performs a similar safety review that includes a risk analysis, which considers the site’s proximity to populated areas, and a review of the security planned at the facility. The risk analysis is both site-specific and vehicle-specific, and FAA reviews the results on a case-by-case basis because of differences between launch sites and vehicle designs.

An expert on system safety confirmed our assessment that FAA has appropriately applied a system safety process to its launch license activities. In particular, the expert said that FAA has identified all of the safety systems that are critical to commercial space launches, made the proper assumptions of risk, and used proper validation methodologies. He also noted that FAA has used a higher factor of safety for commercial space launches than is commonly used in other industries, which is appropriate given that the space launch industry has a high-risk profile. However, the expert said that FAA should update its system safety handbook as the space tourism sector matures to incorporate different launch methods, such as launches from land, sea, and air, which may have different safety implications.

FAA is also applying relevant management controls in its licensing process. The management controls that we reviewed include the documentation of the review and approval of licenses, compliance with timely review requirements, communication with other federal agencies, and the reliability and verification of data.
According to our review of the 19 applications for launch and launch-site licenses that were active as of January 2006, FAA is applying these management controls. FAA accurately documented the review and approval process and completed its reviews of license applications within 180 days, as required by the Commercial Space Launch Act; however, FAA starts counting the 180 days only after deciding that an application is sufficiently complete. Our analysis showed that FAA communicated and consulted on an as-needed basis with other federal agencies that are members of an interagency advisory group on expendable launch vehicles, as required by an executive order that designates DOT as the lead agency within the federal government for commercial space launches. This communication includes coordinating with other federal agencies during the licensing process. Representatives from the majority of agencies serving on the group told us that FAA had periodically contacted them during its review of license applications. For example, agency representatives told us that FAA had checked with DOD on whether certain launches would negatively affect national security and with State on whether launches were consistent with international treaties. In addition, our analysis showed that FAA verified the information in the applications for accuracy.

In response to changes in the commercial space launch industry, including the anticipated growth in space tourism, FAA issued regulations in 2000 for the licensing of the launch and reentry of reusable launch vehicles. The regulations for reusable launch vehicles require launch operators to obtain a safety approval from FAA in order to receive a license. In August 2006, FAA issued regulations that include safety requirements that applicants must meet to obtain a license for operations of expendable launch vehicles. The 2006 regulations cover license requirements for any launch of a commercial expendable launch vehicle from any launch site, whether a federal launch site or a spaceport. However, some industry experts raised questions about the appropriateness of the regulations for operations at spaceports and also expressed concern about both existing and potential future safety requirements for reusable launch vehicles, which can vary widely in design and operation. In addition, FAA has developed training for its aerospace engineers to help prepare them to assume safety oversight responsibility at spaceports.

FAA’s regulations for operations of launch vehicles and launch sites are based on common safety standards, which were developed jointly by FAA and the Air Force to harmonize the respective agencies’ safety practices. (See app. IV for a timeline and list of FAA’s commercial space launch rulemaking and guidance.) These safety standards cover vehicle design and operations and criteria for acceptable risks for launch and launch-site operations, such as the siting of hazardous materials. The regulations build on those common standards with the goal of promoting consistent, streamlined safety reviews of launch operations. However, concerns have been raised regarding the suitability of the 2006 regulations, which are based on the experience of expendable launch vehicles at federal launch sites, for launches at spaceports. Some industry experts that we interviewed noted that differences among the spaceports from which vehicles are or will be launched raise questions about the appropriateness of the regulations.
Additionally, while the 2006 regulations apply only to expendable launch vehicles, industry experts expressed concern about the safety regulation of reusable launch vehicles, given the differences in the design of the vehicles and the methods for launching them. FAA stated that it addresses these concerns by (1) making license determinations on a case-by-case basis using common performance standards and (2) providing waivers in special circumstances. Performance standards require launch companies to meet certain performance thresholds—a risk level, calculated by an expected casualty analysis, of no more than 30 per million for the public and 1 per million for an individual—while allowing these companies to develop their own specific launch vehicle designs. FAA said that it uses performance standards to encourage innovation in vehicle design, rather than prescribing how the vehicle should be designed. FAA said that its ability to issue waivers of license requirements for special circumstances allows it to assess the unique characteristics of a launch and their impact on safety. For example, FAA granted a waiver at Mojave Spaceport that allowed the storage and handling of liquid propellants—which would be used for a horizontal launch, such as that of SpaceShipOne—closer to the runway than would have been allowed for a vertical launch. Later in this report, we discuss the challenges that FAA faces in ensuring that its regulations are suitable for the emerging space tourism sector.

In addition, industry officials and one expert with whom we spoke raised concerns about the costs that expendable launch vehicle companies would incur to comply with the proposed regulations, because they believe that FAA’s safety requirements at federal launch sites will be imposed in addition to the Air Force’s requirements. However, according to FAA, it has minimized these companies’ costs by ensuring that its safety standards are the same as the Air Force’s and that waivers issued by FAA or the Air Force are accepted by both agencies. FAA officials also noted that they have the authority to implement an option that they said could potentially reduce costs for both launch companies and the agency—namely, to issue a safety approval that is separate from a licensing determination. For example, they said that FAA could approve a component of a vehicle, such as a flight termination system, which could then be used for multiple licenses. This approval could reduce uncertainty and costs for the vehicle manufacturer and save FAA the cost of evaluating the component for each license. As of August 2006, FAA had not made any such approvals.

FAA has developed training for its aerospace engineers that focuses on oversight duties and technical areas. According to FAA, the oversight training addresses evaluations of license and permit applications; safety inspections of launch, reentry, and site operations; and mishap investigations of launch and reentry vehicles. Technical training addresses system safety, flight safety analyses, and flight safety systems. The training is either provided in-house or obtained from commercially available sources and other government agencies. In addition, to help its aerospace engineers develop expertise that will be applicable to reusable launch vehicles, employees from FAA’s Office of Commercial Space Transportation who are pilots and are familiar with aircraft certification systems share their expertise with other staff.
FAA also has sent its aerospace engineers to NASA and Air Force courses on launch and space flight operations, which include procedures for the launch and recovery of vehicles; FAA courses on avionics and aircraft operations, which are relevant because reusable launch vehicles have aircraft characteristics; and National Transportation Safety Board courses on aviation accident investigation, which include procedures that would be useful in the event of a launch incident. While this training will help FAA respond to current emerging issues, it will be important for FAA to keep abreast of industry changes and train its aerospace engineers accordingly.

FAA faces multiple challenges in responding to the emergence of the space tourism sector. Those challenges include obtaining the expertise and resources needed to provide safety oversight of the sector, ensuring that its various regulations are suitable for the different launches and launch sites it licenses, determining the circumstances under which it would regulate passengers and crew, and ensuring that its industry promotion responsibilities do not conflict with its safety oversight responsibilities.

If the space tourism industry develops as rapidly as some industry representatives suggest, FAA’s responsibility for licensing reusable launch vehicles will greatly expand. However, FAA’s experience in this area is limited because its launch safety oversight has focused primarily on unmanned orbital launches. From 1989 to 2005, FAA issued two mission-specific reusable launch vehicle licenses and conducted compliance monitoring and safety inspections for five reusable launch vehicle missions. Although FAA gained some experience and expertise from these missions, some industry representatives and experts with whom we spoke questioned whether FAA is prepared for its expanded role and raised concerns about whether FAA has sufficient experience and expertise. Experts also indicated that FAA must stay ahead of the development of the reusable launch vehicle industry to fulfill its safety oversight responsibilities, because many companies are developing space hardware for the first time and are producing different designs that have not been tested. For example, a safety incident occurred during a SpaceShipOne flight when the vehicle deviated from its launch trajectory and flew over a populated area. FAA evaluated this incident and required Scaled Composites, the developer of SpaceShipOne, to take corrective measures in order to continue its licensed flights. During its next flight, SpaceShipOne unexpectedly rolled 29 times, which FAA did not classify as an incident.

In addition, retaining staff expertise may be a challenge, given federal funding constraints and competition within the industry for qualified aerospace engineers. We have reported on the challenges the aerospace industry faces in attracting, training, and retaining new workers with the engineering, science, and technical capabilities it needs, given the projected decline in the supply of such workers. In FAA’s case, two of the five aerospace engineers who worked on the licensing and monitoring of SpaceShipOne flights are no longer with the agency; however, FAA said that it has since filled these positions. To help evaluate the safety of reusable vehicle launches, FAA’s Office of Commercial Space Transportation has obtained expertise from outside firms and other FAA offices.
For example, FAA contracted with a consulting firm to verify the expected casualty analysis for SpaceShipOne’s flights. In addition, because of certain similarities between reusable launch vehicles and aircraft, the Office of Commercial Space Transportation has consulted with FAA’s Office of Aviation Safety. Both of these FAA offices, for example, worked on SpaceShipOne’s license application and had a documented agreement that described how the offices would work together. However, according to Scaled Composites, confusion existed during the licensing process regarding the respective authorities of the FAA offices. For instance, Scaled Composites was required to have two authorizations—one for its vehicle and one for its launch operations. Initial vehicle flight tests to demonstrate “proof of concept” were overseen by the Office of Aviation Safety. Once Scaled Composites was ready to conduct launch operations, the Office of Aviation Safety transferred the vehicle review to the Office of Commercial Space Transportation, which reviewed the launch for licensing. According to an Office of Aviation Safety inspector, communication between the two offices, especially during the transfer stage of the review of SpaceShipOne, was not clear. In addition, an FAA engineer stated that distinctions between the two offices’ respective authorities had to be made so that there would be no overlap or disagreement between the offices. Since the licensing of SpaceShipOne, the Commercial Space Launch Amendments Act of 2004 has clarified responsibility by stating that only one DOT license would be required to approve commercial space launches, and this responsibility has been designated to the Office of Commercial Space Transportation. The two offices still need to coordinate on license reviews for operations of hybrid vehicles having both aircraft and rocket-like characteristics, according to officials in both offices. However, no formal process exists between the two offices that outlines when and under what circumstances the offices should consult. While the documented agreement between the offices described how they would authorize flights of SpaceShipOne, this document is specific to those flights and is not generic for future reusable launch vehicle licenses.

FAA’s safety oversight of the commercial space launch industry may be further challenged, in part, because of the expected increase in workload demands facing agency staff. FAA is anticipating a substantial increase in the number of permit and license applications that could be submitted for reusable launch vehicle and launch-site operations in the near future, but FAA has not quantified the magnitude of this increase. For example, FAA’s annual industry forecast does not include projected reusable vehicle launches. FAA said that its anticipated increase in applications is based on preapplication consultations that FAA has conducted with reusable launch vehicle companies and spaceports. In addition, an FAA official noted that companies with existing reusable launch vehicle licenses are likely to apply for additional permits or licenses for the new vehicles they are developing. Furthermore, launch companies participating in NASA’s Lunar Lander Challenge and Commercial Orbital Transportation Services demonstration program and Commerce’s Geostationary Operational Environmental Satellite Program are required to obtain commercial launch permits or licenses from FAA.
FAA initially plans to be present for every licensed launch of a reusable vehicle. If FAA carries out this plan, its staff workload will increase, since the proposed spaceports for space tourism flights are located throughout the country and space tourism companies are planning frequent launches. FAA has not determined the level of resources needed to meet this expected increase in responsibilities involving reusable launch vehicles. Agency officials said that they monitor the industry to assess how its development could affect resource requirements and will not request additional resources until the workload has grown enough to justify the request. However, the agency has not developed workload projections under different scenarios of growth in space tourism launches.

FAA faces the challenge of ensuring that its 2006 regulations on licensing and safety requirements for launch, which are based on the Air Force’s safety requirements for expendable launch vehicle operations at federal launch sites, will be suitable not only for operations at federal launch sites but also for operations at spaceports. As we previously mentioned, industry representatives and experts are concerned that the safety regulations for reusable launch vehicles may not be suitable for space tourism flights because of differences in vehicle types and launch operations. Table 1 compares some of the differences between expendable and reusable launch vehicles. Three of the six operators of licensed spaceports and six of the eight operators of spaceports in the licensing process told us they did not believe that FAA’s regulations should apply to the new spaceports. Five of these spaceport officials said that because reusable vehicles can be launched differently from expendable vehicles and can return to Earth, they present different safety implications. In addition, concerns about the suitability of the safety regulations were raised by experts we interviewed and in comments filed in the public docket for these regulations. While it was noted that the regulations were appropriate for spaceports if the vehicles launching from them use exotic or dangerous fuels, concerns were raised that the rules may be too stringent for the reusable launch vehicles currently proposed and operating. Although the safety regulations applicable to expendable launch vehicles are separate from the safety approvals required to obtain a reusable launch vehicle license, some experts are concerned about similarities in the safety rules. For example, two experts noted that the expected casualty threshold, which is the same for launches of expendable and reusable vehicles, might be too high for reusable vehicles, given the vehicles’ different safety implications. Experts also said that safety regulations should be customized for each spaceport to address the different safety issues raised by different orbital trajectories and by differences in the way that vehicles launch and return to Earth—whether vertically or horizontally. (See fig. 5 for examples of vertical federal launch sites and spaceports.) To address these concerns, experts have noted that it will be important to measure and track safety information and to use it to determine whether the regulations should be revised.
For example, an expert noted that FAA should identify and track safety indicators for launch companies and spaceports and, as the industry matures, conduct trend analyses with the objective of eliminating the negative conditions that the trends reveal. Another expert noted that any safety performance measure should account for different launch and trajectory tracks, such as over land or over water. Yet another expert noted that FAA’s proposed regulations for experimental permits would allow FAA to collect statistical data that could then be applied to develop safety standards criteria. FAA says that it collects data on anomalies and failures of safety-critical systems, which will allow it to analyze safety trends and determine potential precursors to accidents; however, the agency has not conducted trend analyses of that information. Other industry experts noted that the regulations should be revisited when the space tourism sector has further developed. Meanwhile, the Commercial Space Launch Amendments Act of 2004 requires DOT to commission an independent report to Congress, to be completed by December 2008. This report is to analyze whether expendable and reusable vehicles should be regulated differently from each other and whether either type of vehicle should be regulated differently if carrying passengers. This report could provide FAA with information to address industry concerns about the suitability of its regulations for space tourism.

The Commercial Space Launch Amendments Act of 2004 requires that a phased approach be used in regulating commercial human space flight and that regulatory standards evolve as the industry matures. The act prohibits FAA from regulating crew and space flight participant safety before 2012, except in response to incidents that either pose a high risk or result in serious or fatal injury. However, the act maintains FAA’s authority to protect the uninvolved public, and FAA stated that it has the authority to regulate crew and passenger safety to the extent that the public would be affected. According to FAA, it is prohibited only from issuing regulations that apply solely to crew and passenger safety, and any situation that implicates the public would allow FAA to regulate. FAA asserts that it has the authority to protect the crew because the crew is part of the flight safety system that protects the general public. FAA’s proposed regulations for human space flight would establish requirements for crew qualifications and training and for space flight participant training and informed consent. Although these proposed regulations address passenger and crew behavior, FAA believes that the regulations are within its authority because they are intended to protect the public—not space flight participants. For example, the proposed regulations would require an operator to train each space flight participant before the flight on how to respond to emergency situations, including loss of cabin pressure, fire, smoke, and emergency egress. The proposed training requirement is aimed at protecting public safety, because a space flight participant who did not receive this training might interfere with the crew’s ability to protect public safety. Because FAA may regulate only where the public is implicated, there have been instances in which FAA has not stepped in and imposed additional regulations or requirements for safety reasons.
For example, Scaled Composites’ SpaceShipOne rolled 29 times, and, according to FAA, the agency did not impose additional requirements because the flight was over an unpopulated area and because FAA concluded that the pilot was in control of the vehicle. FAA monitored and reviewed the corrective actions taken by Scaled Composites before its next flight. Additionally, FAA has not developed specific criteria regarding when an incident would qualify as “an unplanned event or series of events…that pose a high risk of causing a serious or fatal injury,” which would trigger FAA’s authority to issue regulations specific to crew and passenger safety.

Experts and industry representatives that we interviewed expressed different opinions about whether FAA should regulate crew and flight participant safety and, if so, when. Some industry representatives and experts we interviewed agreed that the Commercial Space Launch Amendments Act of 2004 provides the industry with the flexibility needed to innovate and grow. However, other experts said that there is too much flexibility in the act. One of these experts noted that FAA should publish the criteria that would cause it to regulate crew and flight participant safety before 2012. Another expert said that FAA’s discretion to decide when it would regulate crew and flight participant safety creates uncertainty for the industry, noting that without published criteria, the industry does not know how FAA would react to an incident involving a space tourism company, which could seriously hurt the industry. The designer of a reusable launch vehicle told us that FAA should regulate crew and flight participant safety for commercial space flight because he believes that space tourism needs to be as safe as commercial aviation.

Meanwhile, a trade association made up of space tourism companies and spaceports—called the Personal Spaceflight Federation—plans to commission standards for vehicles and their operation, including space flight participant safety, as the space tourism industry develops. The federation believes that ensuring the highest possible level of safety for the industry and sharing best practices will be essential to promote the safety and growth of the industry. The federation intends to commission an independent standards organization, such as the American Society for Testing and Materials, to develop accredited industry standards for voluntary testing and approval, much as Underwriters Laboratories, an independent organization that tests electrical devices, has done. According to an expert, while companies do not have to submit their products to Underwriters Laboratories for testing, market acceptance is low and liability exposure is high without the Underwriters Laboratories stamp of approval. The expert believes that a similar approach will work for space flight participants, who are more likely to choose to fly on a launch vehicle that has been approved according to industry standards than on one that has not.

FAA faces the potential challenge of overseeing the safety of commercial space launches while promoting the industry as the space tourism sector develops. According to our analysis, FAA’s current promotional activities have not conflicted with its safety regulatory role; however, industry experts have noted that potential challenges may arise as the space tourism sector develops.
FAA is mandated to regulate the commercial space transportation industry to protect public safety and property while encouraging, facilitating, and promoting commercial space launches. According to FAA, its promotional activities include sponsoring an annual industry conference, sponsoring Commercial Space Transportation Advisory Committee meetings and working groups, presenting on space-related topics at various aerospace professional association conferences, creating forums at which industry participants can network, publishing economic impact studies and launch forecast reports, and conducting outreach to potential license applicants. All of these are activities that, according to experts and in our assessment, do not conflict with FAA’s safety oversight responsibilities. According to some experts, FAA’s promotional activities have not conflicted with the agency’s role as a safety regulator because the activities do not involve advocacy for the industry, nor do they increase demand in the industry. Furthermore, experts noted that some of these activities support FAA’s safety role. For example, outreach to potential license applicants is a means of ensuring that new launch companies know about and adhere to federal safety regulations. Experts also noted that industry conferences are a means by which FAA can maintain a dialogue with industry to ascertain new industry trends and issues. FAA also has provided an estimated $200,000 for Mojave Spaceport to complete its environmental impact study, which FAA deems a promotional activity. Under its statutory responsibility, FAA can take action to facilitate private sector involvement in spaceport infrastructure.

However, as the commercial space launch industry matures, there is a greater risk that FAA’s role as both the regulator and a promoter of the industry may pose a conflict of interest. Experts told us, and we agree, that as the commercial space launch industry evolves, it may become necessary to separate FAA’s regulatory and promotional activities. For example, one expert indicated that with the emergence of space tourism, FAA’s dual role could pose a potential conflict of interest between creating an enabling business environment and not compromising safety, particularly with regard to the agency’s determining when and whether it would regulate crew and passenger safety on space launches. Other experts cited Congress’s removal of FAA’s promotional responsibilities for commercial aviation in 1996 as evidence of the importance of maintaining FAA’s focus on safety oversight. In response to the ValuJet accident of May 11, 1996, the DOT Secretary asked Congress to restrict FAA’s mandate to safety, eliminating its role in promoting the airline industry. According to the conference report that accompanied the legislative change, Congress withdrew FAA’s promotional role in commercial aviation to address public perceptions that FAA’s promotion of air commerce could conflict with its safety regulatory mandate. Congress also has withdrawn promotional responsibilities from other transportation entities. In 1961, the Federal Maritime Board was dissolved, and its promotion and safety responsibilities were transferred to Commerce and the Federal Maritime Commission, respectively. In proposing the legislative change, the President stated that the change was made to eliminate the intermingling of regulatory and promotional functions that had diluted responsibility and led to serious inadequacies, particularly in the administration of regulatory functions.
Recognizing the potential conflict in the oversight of commercial space launches, Congress required DOT to report by December 2008, among other things, on whether the federal government should separate the promotion of human space flight from the regulation of such activity. Furthermore, FAA’s promotional role has the potential to overlap with Commerce’s role, given the broad definition of FAA’s statutory promotional responsibilities, the more detailed definition of Commerce’s promotional responsibilities, and Commerce’s efforts to fully staff its Office of Space Commercialization. Commerce’s International Trade Administration, which is responsible for promoting U.S. exports and the competitiveness of U.S. companies in foreign markets, has remained fully staffed and provides assistance to the U.S. commercial space industry. However, the Office of Space Commercialization within NOAA did not have a permanent director from 1999 through January 2006 and had been staffed with one permanent employee, who had been charged with work related to satellite services. In February 2006, a new director was appointed, and, as of June 2006, the office was fully staffed. The Office of Space Commercialization is currently developing a strategic plan that is to be completed by the end of 2006. Some of FAA’s promotional activities, such as publishing economic impact studies on the industry, have been undertaken because of past understaffing at Commerce, according to an FAA official. FAA has not revisited which promotional activities it should continue to undertake in light of these new developments at Commerce. (Fig. 6 describes FAA’s and Commerce’s statutory promotional responsibilities.)

The U.S. commercial space launch industry faces key competitive issues concerning high launch costs and export controls. Space launches incur high costs for launch vehicle development and for launch facility operations and maintenance. The U.S. government has responded by providing support, such as launch contracts, the use of its launch facilities, and launch vehicle development infrastructure. Some foreign competitors have historically offered lower launch prices than U.S. launch providers. During the rise of the commercial launch industry in Russia, Ukraine, and China in the 1980s and 1990s, bilateral agreements between these countries and the United States (1) limited the number of launches in those countries of commercial satellites containing U.S.-licensed components and (2) imposed pricing restrictions. According to a Commerce official, the United States entered into these agreements because these countries were nonmarket economies that had the potential to employ nonmarket-based practices or to offer prices substantially below international market value. According to an official of USTR, which negotiated these agreements, the agreements required the countries to price their commercial launch services “on a par” with Western companies, which allowed the nonmarket economies to develop their industries while competing on a fair basis. These foreign governments were required to sign the agreements with the United States as a condition of launching U.S. satellites or of launching any satellites containing U.S.-licensed parts. According to a USTR official, these agreements were intended to be transitional, allowing time for U.S. competitors to adjust to the entry of new launch companies from nonmarket economies.
The bilateral agreement with Ukraine was terminated in 2000, while the agreements with Russia and China were allowed to expire in 2000 and 2001, respectively, because of changing dynamics in the marketplace, including, for example, the emergence of international partnerships in the expendable launch vehicle industry. The creation of international partnerships in the commercial space launch industry could allow expendable launch vehicle companies to offer commercial launches at lower prices. International Launch Services, formed in 1995, is an international partnership of Lockheed Martin and a Russian launch company that markets launches of the U.S. Atlas vehicle from Cape Canaveral and the Russian Proton vehicle from Baikonur in Kazakhstan. According to representatives from International Launch Services, between 1995 and 2005, the company contracted for 48 commercial launches from Cape Canaveral and Baikonur. Sea Launch, formed in 1995, is an international partnership of the Boeing Commercial Space Company and companies from Ukraine, Russia, and Norway that launches from a sea platform near the equator. Between 1999 and 2005, Sea Launch conducted 18 launches on a Ukrainian launch vehicle. Sea Launch has also partnered with Russia’s Space International Services to form Land Launch, which will offer launches on the same Ukrainian vehicle from Baikonur beginning in 2007.

The United States, like foreign governments, supports its commercial launch industry in several ways. The U.S. government encourages federal agencies to acquire space transportation from U.S. commercial launch companies. Some of these companies have also received DOD funds to develop new launch vehicles that are intended to provide low-cost access to space for government purposes. Once developed, these vehicles could also be used for commercial purposes. For example, DOD’s Evolved Expendable Launch Vehicle Program, a government-industry partnership whose objective is to lower the cost of medium-to-heavy lift vehicle launches, has led to the development of Lockheed Martin’s Atlas V vehicle and Boeing’s Delta IV vehicle. DOD had provided $1.4 billion to the program as of fiscal year 2006, with an additional investment of $4.6 billion provided by Lockheed Martin and Boeing. With the objective of reducing U.S. government launch costs, Lockheed Martin and Boeing have proposed a joint venture of their vehicle programs, called the United Launch Alliance, for which DOD gave conditional approval in January 2006 and the Federal Trade Commission gave conditional clearance in October 2006.

In addition, DOD has funded small vehicle development. For example, SpaceX, which has received DOD funding, has developed and is testing its Falcon 1 vehicle, which will carry a small government payload and will launch from the Army’s Ronald Reagan Ballistic Missile Test Site on Kwajalein Atoll in the Marshall Islands. SpaceX then plans to launch a small commercial payload to low-Earth orbit from Kwajalein at an estimated cost of under $7 million (in 2006 dollars). This cost contrasts with launch prices for small payloads averaging $15 million, according to a report on space transportation costs. Furthermore, whereas international competitors’ launch prices for medium-to-heavy payloads to geosynchronous transfer orbit average $56 million for medium payloads and $87.5 million for heavy payloads, SpaceX plans to launch medium payloads on its Falcon 9 vehicle starting at $27 million and heavy payloads starting at $78 million.
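If these figures hold, a rough comparison suggests the scale of the potential price advantage: $27 million against a $56 million average is a discount of about 52 percent for medium payloads, while $78 million against an $87.5 million average is a discount of about 11 percent for heavy payloads. These are advertised starting prices, so actual contract prices could differ.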
SpaceX said that it has reduced launch costs in a number of ways, including by simplifying its vehicle design in a manner akin to that of Russian vehicles. Another company that has developed a lower-cost vehicle with DOD support is AirLaunch, LLC, whose small lift vehicle launches from the air from a military cargo aircraft and is intended to put a small payload into orbit for less than $5 million. A representative from AirLaunch said that the company plans to use this technology in partnership with t/Space to develop a vehicle that will compete in NASA’s Commercial Orbital Transportation Services demonstration program.

The U.S. government also supports the industry by making infrastructure and support staff available at its launch sites. Air Force launch pads leased by launch companies may be used for government or commercial launches. The commercial launch company pays the Air Force the direct costs associated with its use of facilities and services for a commercial launch; the Air Force is not reimbursed for indirect costs, such as infrastructure improvements or base support that involves the use of Air Force active-duty personnel. NASA provides vehicle developers with launch vehicle development facilities, including rocket propulsion test stands, wind tunnels, and thermal vacuum chambers. Other types of government support include prize competitions and indemnification. (See table 2.) Demonstration programs, such as NASA’s Commercial Orbital Transportation Services, have received positive feedback from launch vehicle developers, according to a NASA official. In addition, the official said that the agency’s prize competitions, such as the Lunar Lander Challenge, have inspired many new launch vehicle companies to design vehicles using different launch approaches that could be used for human space flight. According to a launch company, the Commercial Orbital Transportation Services demonstration program allows for solicitations that encourage innovation and investment in the space industry by specifying an objective, such as carrying payloads to the International Space Station, rather than detailed requirements for a particular vehicle type.

States are offering economic incentives to develop spaceports to attract space tourism and provide economic benefits to localities. The New Mexico legislature approved $100 million in February 2006 for construction of the Southwest Regional Spaceport in Upham, New Mexico. The spaceport is expected to be completed in 2008 or 2009, with three vertical launch pads; two runways; and service facilities for fuel service, payload processing, launch control, and mission control. Currently, the Southwest Regional Spaceport has five signed customers, including Virgin Galactic, which plans to launch its initial commercial space flights from the spaceport and expects to fly 3,000 passengers within 5 years after commercial launches begin. According to an official from the Oklahoma spaceport, Oklahoma provides approximately $500,000 annually to the spaceport for operations, and the state paid for the environmental impact statement and the safety analysis needed to apply for an FAA license. Existing infrastructure includes a 13,500-foot runway capable of accommodating the Space Shuttle, maintenance and repair hangars, and a rail spur. Furthermore, the Oklahoma spaceport has offered incentives valued at over $128 million over 10 years to attract space companies.
Rocketplane Kistler, which has developed a reusable vehicle, plans to launch from the Oklahoma spaceport starting in mid-2007. The Florida Space Authority, a state agency, has an arrangement with Cape Canaveral Air Force Station to use a launch pad for expendable vehicle launches when excess capacity exists. The Florida Space Authority has invested over $500 million in new space industry infrastructure development, including upgrades to the launch pad, a new space operations support complex, and a reusable launch vehicle support complex. Lockheed Martin’s Athena and Atlas vehicles and Boeing’s Delta vehicle launch from the spaceport. Although its current site primarily serves vertical launches, the Florida Space Authority is also considering the development of a commercial spaceport at a Florida airport to accommodate horizontally launched space tourism flights. The Mid-Atlantic Regional Spaceport, colocated at NASA’s Wallops Flight Facility, owns two launch pads for expendable vehicle launches and has access to three runways. The spaceport receives half of its funding from Virginia and Maryland, with the remainder coming from operating revenue. According to the spaceport’s executive director, the spaceport will compete for Commercial Orbital Transportation Services demonstration program launches. The Mojave Spaceport in Mojave, California, is owned and operated by the East Kern Airport District and consists of three runways with associated taxiways and other support facilities. With an FAA Airport Improvement Program grant of $7.5 million, one of these runways will be extended to allow for the reentry of horizontally landing reusable vehicles. The spaceport also received FAA financial support to conduct its environmental assessment. Scaled Composites, XCOR Aerospace, and Interorbital Systems—companies that plan to enter the space tourism business—are tenants at the airport. Officials from spaceports told us that competition among the spaceports is positive. One licensed spaceport official said that because each spaceport will attract a market unique to its launch capability, this competition will help the overall industry grow.

Industry representatives that we interviewed identified export licensing requirements under the International Traffic in Arms Regulations as a competitive issue facing the U.S. space launch industry. The regulations establish controls to ensure that arms exports are consistent with national security and foreign policy interests. Launch vehicles are included on State’s munitions list, which is part of these regulations, because these vehicles can deliver chemical, biological, and nuclear weapons. In the 1990s, U.S. space technology was divulged to a foreign country, which led to improvements in the reliability of that country’s ballistic missiles, which could be used against the United States. Industry representatives said that they would like fewer items to be regulated or a streamlined process for obtaining authorization to export launch vehicles. While we have not examined the issue of which specific items should be subject to export controls, we have examined the export control system and have recommended ways to improve its overall efficiency.

As the commercial space launch industry expands to include the transportation of humans as well as satellites and other payloads into space and the use of inland as well as coastal launch sites, FAA’s safety oversight responsibilities will grow.
To carry out these responsibilities and address the serious safety implications of the industry’s expansion for people both on the ground and in the launch vehicles, FAA will need sufficient expertise, either in-house or available from an impartial source, to evaluate a range of highly complex launch technologies. Such expertise may be difficult for FAA to obtain and maintain, given federal funding constraints and competition from the industry for qualified aerospace engineers. While FAA’s decision not to request additional safety oversight resources until the space tourism industry materializes is prudent in light of the industry’s uncertain pace of development, FAA also needs to be prepared to provide competent safety oversight if and when its workload increases in order to continue to provide timely license approvals and monitoring.

Experience has not yet shown whether FAA’s regulations will be appropriate for the space tourism industry, given the differences in the operations of launch vehicles and the launch sites used to transport humans and payloads into space. FAA’s plan to address these differences through case-by-case evaluations of individual launch license applications is reasonable for an emerging industry with a wide variety of products. A DOT-commissioned report to Congress, to be completed by December 2008, will analyze whether expendable and reusable launch vehicles should be regulated differently from each other and could provide FAA with information about the suitability of its regulations for space tourism.

FAA is prohibited from regulating crew and passenger safety before 2012, except in response to incidents that either pose a high risk or result in serious or fatal injury. FAA has interpreted this limited authority to allow it to regulate crew safety in certain circumstances and has been proactive in proposing regulations concerning emergency training for crews and passengers. However, FAA has not developed safety indicators by which it would monitor the developing space tourism sector and determine when to step in and regulate human space flight.

Because FAA is a regulatory agency, it is important that its statutory responsibility to promote the commercial space launch industry not interfere with its safety oversight of the industry. We have no evidence that FAA’s promotional activities have conflicted thus far with its safety regulatory role, but conflicts could occur as the industry matures. For example, such conflicts may have occurred, or appeared to occur, when FAA was responsible for promoting as well as regulating the airline industry. Recognizing the potential conflict in the oversight of commercial space launches, Congress required DOT to report by December 2008 on whether the federal government should separate the promotion of human space flight from the regulation of such activity. Furthermore, Commerce now has the staff resources to promote the commercial space industry, possibly eliminating the need for FAA to play a promotional role. If DOT’s 2008 commissioned report on the dual safety and promotion roles does not fully address the potential for a conflict of interest, Congress should revisit the granting of FAA’s dual mandate for safety and promotion and decide whether the elimination of FAA’s promotional role is necessary to alleviate the potential conflict.
To prepare for a possible major expansion in its safety oversight responsibilities resulting from the emergence of the space tourism industry and spaceports, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following four actions:

As part of its strategic planning effort, FAA should assess the level of expertise and resources that will be needed to oversee the safety of the space tourism industry and the new spaceports under various scenarios and timetables.

The Office of Commercial Space Transportation should develop a formal process for consulting with the Office of Aviation Safety about licensing reusable launch vehicles. The process should include the criteria under which the consultation takes place.

To allow the agency to be proactive about safety, rather than responding only after a fatality or serious incident occurs, FAA should identify and continually monitor space tourism industry safety indicators that might trigger the need to regulate crew and flight participant safety before 2012. As part of this effort, FAA should develop and issue guidance on the circumstances under which it would regulate crew and flight participant safety before 2012.

As long as FAA has a promotional role, it should work with the Department of Commerce to develop a memorandum of understanding that clearly delineates the two agencies’ respective promotional roles in line with their statutory obligations and larger agency missions. This memorandum of understanding should reflect Commerce’s role as an advocate of the industry, with the objective of increasing U.S. competitiveness, and FAA’s focus on providing a safe environment in which the emerging space tourism sector can operate.

We provided a draft of this report to Commerce, DHS, DOD, DOT, NASA, OSTP, State, and USTR. Commerce and NASA provided written comments (see apps. V and VI). State, DOD, and DHS had no comments. The agencies that provided comments generally agreed with the findings presented in the report, and FAA (within DOT) and Commerce agreed with the report’s recommendations. FAA, Commerce, OSTP, and USTR provided technical corrections, which we incorporated as appropriate. In response to the draft report’s discussion of resource challenges, FAA stated that it monitors commercial space launch developments to assess the impact on agency resources and that it will request additional resources when they can be justified through the annual budget process. We agreed that FAA assesses resource requirements annually and added this information to the report; however, we have not seen evidence that it does so on a longer-term, strategic basis. In response to the draft report’s discussion of the suitability of FAA’s expendable launch vehicle regulations for reusable launch vehicles, FAA explained that the regulation is not intended to apply to reusable vehicles. We agreed with this comment and revised the draft to indicate the specific reusable launch vehicle regulation to which we were referring. Commerce agreed with our recommendation concerning the need for a memorandum of understanding between it and DOT that clearly delineates the two agencies’ respective promotional roles. In addition, Commerce pointed out that the draft report did not reflect the industry advocacy role played by its International Trade Administration. We agreed and added that information to the report.
OSTP stated that the report should include more discussion of competition challenges facing the industry. Although we agree that such challenges are important and addressed some competitive issues, such as foreign price competition, a larger study of these issues was beyond the scope of the report. Finally, NASA noted that the draft report did not reflect the infrastructure support, such as wind tunnels and rocket propulsion test stands, that it provides to the commercial space launch industry. We agreed that this information should be included and modified the text accordingly.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 5 days after the date of this letter. At that time, we will send copies of this report to interested congressional committees, the Secretary of Transportation, the Administrator of FAA, the Secretary of Defense, the Secretary of Commerce, the Secretary of State, the Administrator of the National Aeronautics and Space Administration, the Secretary of Homeland Security, the Assistant Secretary of Homeland Security for the Transportation Security Administration, the Director of the White House Office of Science and Technology Policy, and the Assistant U.S. Trade Representative for Policy Coordination. We will also make copies available to others upon request. In addition, the report will be available at no cost on GAO's Web site at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me on (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

Our objective was to assess the federal role regarding commercial space launches and the government's response to emerging industry trends, both domestically and internationally. To accomplish this, we addressed the following questions: (1) how well does the Federal Aviation Administration (FAA) oversee the safety of commercial space launches; (2) to what extent is FAA responding to key emerging issues in the commercial space launch industry; (3) what challenges does FAA face in regulating and promoting the commercial space launch industry; and (4) what are the key competitive issues affecting the U.S. commercial space launch industry, and to what extent are the industry and government responding to them?

For background information on the commercial space launch industry, we reviewed reports prepared by the Congressional Research Service, FAA, the Department of Commerce (Commerce), and other sources to determine the composition of the industry and its role in the economy. We also obtained data on historical commercial launch activity worldwide, including the number of commercial space launches conducted, by country, from 1997 through 2005; the types of vehicles used; and the types of payloads launched. We did not independently verify this information because it was used for background purposes only. In addition, we identified the commercial launch infrastructure in the United States and observed the commercial launch facilities at Cape Canaveral Air Force Station and Vandenberg Air Force Base, which are the two main federal launch facilities.
We also determined the roles and responsibilities of various federal agencies involved in commercial space launch activities by reviewing their respective statutory authorities and interviewing agency officials. These included officials from FAA, Commerce, the National Aeronautics and Space Administration (NASA), the Department of Defense (DOD), the Department of State, the Department of Homeland Security, the Office of the United States Trade Representative, and the Office of Science and Technology Policy.

To determine how well FAA has overseen the safety of commercial space launches to date and to what extent it is responding to key emerging issues in the commercial space launch industry, we reviewed FAA's safety oversight processes, identified key emerging issues in the commercial space launch industry, and reviewed FAA's response to those issues. We reviewed FAA's safety oversight process by interviewing agency officials about their safety oversight activities and reviewing documentation on FAA's licensing and safety monitoring processes, including internal guidance and policies, applicable regulations, and memorandums of agreement with other federal agencies. Because FAA shares responsibility with the Department of the Air Force to conduct safety oversight at the Air Force's launch sites, we interviewed FAA and Air Force officials at Cape Canaveral Air Force Station and Vandenberg Air Force Base about their interaction and respective responsibilities, and we reviewed Air Force launch safety requirements.

We also interviewed representatives from eight commercial space launch companies that had received launch licenses from FAA, as well as six launch companies that, as of September 2005, were consulting with FAA about obtaining licenses, to obtain their views on FAA's licensing process. We also interviewed an official heading a working group on reusable launch vehicles for the Commercial Space Transportation Advisory Committee, an industry group that provides advice to FAA on commercial launch issues; this official is also a key principal of the Personal Spaceflight Federation, another industry group. When we found that some companies were offering to sell tickets for flights into space, we also interviewed two firms that were selling, or planning to sell, such tickets about their services and related safety issues. In addition, because security is a component of safety, we interviewed officials from the Transportation Security Administration about its future role in securing new spaceports.

To further assess how well FAA has overseen the safety of commercial space launches, and because FAA conducts its safety oversight largely through its licensing process, we reviewed its application files for the licenses that were in effect in January 2006, which consisted of 13 launch licenses and five launch-site licenses. In addition, although its license was no longer in effect at the time of our review, we reviewed the application file for Scaled Composites' launch of SpaceShipOne because the company had received the first license from FAA for a reusable vehicle. We reviewed these application files to determine the types of safety issues that the agency examined and how it conducted those examinations.
Because we were evaluating the management of a government program, we examined (1) how FAA applied certain management controls in its license approval process, using our guidelines for management controls at federal agencies, and (2) whether FAA met the 180-day review criteria established by the 1984 Commercial Space Launch Act. We identified these management controls (documentation of the review process, effective communication, reliability and verification of data, supervisory review, and documentation of the approval process) by consulting Standards for Internal Control in the Federal Government for the elements needed to manage the license approval process effectively.

Furthermore, we assessed the extent to which FAA interacted with other federal agencies participating in an interagency advisory group on expendable vehicles, which was part of the application review process in some cases, by interviewing the interagency group members about their interaction with FAA on commercial launch issues. In addition, we reviewed FAA's safety monitoring process by examining its most recent compliance-monitoring reports corresponding to the licenses in effect as of January 2006, as well as enforcement actions taken against commercial launch companies for noncompliance with safety requirements. Moreover, to obtain an independent perspective on how well FAA has conducted launch safety oversight and responded to key emerging issues, we interviewed 11 experts from academia and industry whom we selected with the assistance of the National Academy of Sciences. (See table 3 for a list of these experts.)

We identified key emerging issues through literature reviews and interviews with agency officials and industry representatives, including associations representing the commercial space launch industry and entities that had received launch or launch-site licenses from FAA or were consulting with FAA about receiving such licenses as of September 2005. To assess the extent to which FAA has responded to emerging issues in the commercial space launch industry, we interviewed federal government officials, including FAA officials and representatives from federal launch sites and FAA-licensed and proposed spaceports, launch companies, industry experts, and trade associations to obtain their views. See table 4 for a list of the organizations that we interviewed. We also reviewed the proposed and issued regulations relating to commercial space launches, and the comments on those regulations published in the Federal Register, to assess how the agency had responded to the emerging issues.

To determine the challenges that FAA faces in responding to emerging issues in the commercial space launch industry, we interviewed FAA officials, industry representatives, industry experts, and trade associations to obtain their views on how FAA would need to respond and the level of expertise and resources that would be required. This included considering the challenges that FAA may face in complying with requirements, contained both in proposed regulations and in existing law, to provide safety oversight of a new industry sector involving reusable launch vehicles.
To determine the key competitive issues affecting the U.S. commercial space launch industry and the extent to which the industry and federal government are responding to them, we conducted a literature review that included applicable laws affecting industry competitiveness and interviewed FAA officials, industry representatives, industry experts, and trade associations to obtain their views. This included interviewing officials from the Office of the U.S. Trade Representative about the U.S. government's past use of bilateral treaties with foreign governments regarding the commercial space market, and officials from the Commercial Space Transportation Advisory Committee about industry concerns regarding insurance and liability matters. We also interviewed U.S. commercial space launch companies, including U.S. partners in international partnerships. In addition, to obtain the perspective of a foreign commercial launch company on international competitive issues, we interviewed an official from Arianespace, a French commercial space launch company; to obtain the perspective of a domestic commercial launch company, we interviewed SpaceX. We also reviewed regulations affecting the competitiveness of the commercial space launch industry, such as the International Traffic in Arms Regulations, and reports on competitive issues prepared by the Congressional Research Service, FAA, Commerce, Futron, and others. We attempted to compare the extent to which countries were providing financial assistance to their commercial space launch industries, but we were unable to obtain transparent and quantifiable data.

We conducted our review from August 2005 through October 2006 in accordance with generally accepted government auditing standards.

The commercial space transportation industry as a whole represents a significant sector of the U.S. economy. The industry consists of the commercial launch industry as well as the industries that commercial space enables, such as satellite manufacturing and services, ground equipment manufacturing, remote sensing, distribution industries, and launch vehicle manufacturing and services (see fig. 7). According to FAA, the commercial space transportation and enabled industries were responsible for approximately 550,000 total jobs and $98 billion in economic activity in the United States in 2004, with the satellite services industry, such as direct-to-home television services, having the largest economic impact (see fig. 8). Of this, launch vehicle manufacturing and services accounted for $1.7 billion in economic impact.

FAA evaluates applications for launch licenses by reviewing the safety, environmental, payload, and policy implications of a launch and by determining the launch company's insurance liability or financial responsibility. Figure 9 illustrates this process.
FAA’s safety review includes an analysis of the reliability and functions of the vehicle, an assessment of the risk and hazards it poses to public property and individuals, and a review of the launch company’s policies and practices to demonstrate that the operations “pose no unacceptable threat to the public.” FAA conducts environmental reviews to fulfill its obligations under the National Environmental Policy Act, and FAA ensures that proposed commercial space transportation activities present “no unacceptable danger to the natural environment.” In addition, FAA reviews a proposed payload to determine whether its launch or reentry would jeopardize public health and safety, safety of property, U.S. national security or foreign policy interests, or international obligations of the United States. During the policy review, FAA consults with other federal agencies to determine whether the launch license presents any issues affecting U.S. national security, foreign policy, or international obligations. FAA also determines the amount of liability insurance required to compensate third-parties for activities carried out under a license, up to a maximum of $500 million or the maximum liability insurance available on the world market at a reasonable cost as determined by FAA. FAA also sets insurance requirements for U.S. government range property on the basis of its determination of the maximum probable loss that would result from licensed launch or reentry activities, not to exceed the lesser of $100 million or the maximum available on the world market at reasonable cost. FAA’s launch-site safety requirements are similar to those for launches of vehicles. FAA reviews a launch-site’s application for environmental, policy, operations, and safety considerations that include the location of the spaceport and its siting of explosives. Applicants also are required to address how they will control public access to their sites, which would include the use of security personnel, surveillance systems, physical barriers, or other means approved during the licensing process. Licensing and Safety Requirements for Launch; Final Rule (to amend 14 C.F.R. parts 413, 415, 417). 71 Fed. Reg. 50508. Experimental Permits for Reusable Suborbital Rockets; Notice of Proposed Rulemaking (to amend 14 C.F.R. parts 401, 404, 405, 406, 413, 420, 431, 437). 71 Fed. Reg. 16251. Human Space Flight Requirements for Crew and Space Flight Participants; Proposed Rule (to amend 14 C.F.R. parts 401, 431, 435, 440, 450, 460). 70 Fed. Reg. 77262. Reusable Launch and Reentry Vehicle System Safety Process, AC 431.35- 2A; Advisory Circular. Safety Approvals; Proposed Rule (to amend 14 C.F.R. part 414). 70 Fed. Reg. 32912. Miscellaneous Changes to Commercial Space Transportation Regulations; Proposed Rule (to amend 14 C.F.R. parts 401, 404, 413, 415, 420). 70 Fed. Reg. 29164. Licensing and Safety Requirements for Launch; Availability of Draft Regulatory Language and Notice of Public Meeting (to amend 14 C.F.R. parts 415, 417). 70 Fed. Reg. 9885. Commercial Space Transportation; Suborbital Rocket Launch; Notice and Request for Comments. 68 Fed. Reg. 59977. Licensing Test Flight Reusable Vehicle Missions, AC 431.35-3; Advisory Circular. Licensing and Safety Requirements for Launch; Proposed Rule (to amend 14 C.F.R. parts 413, 415, 417). 67 Fed. Reg. 49456. (This is a supplemental Notice of Proposed Rulemaking to the October 25, 2000, Proposed Rule.) Civil Penalty Actions in Commercial Space Transportation; Final Rule (14 C.F.R. 
FAA rulemakings and guidance documents related to commercial space transportation include the following:

- Licensing and Safety Requirements for Launch; Final Rule (to amend 14 C.F.R. parts 413, 415, 417). 71 Fed. Reg. 50508.
- Experimental Permits for Reusable Suborbital Rockets; Notice of Proposed Rulemaking (to amend 14 C.F.R. parts 401, 404, 405, 406, 413, 420, 431, 437). 71 Fed. Reg. 16251.
- Human Space Flight Requirements for Crew and Space Flight Participants; Proposed Rule (to amend 14 C.F.R. parts 401, 431, 435, 440, 450, 460). 70 Fed. Reg. 77262.
- Reusable Launch and Reentry Vehicle System Safety Process, AC 431.35-2A; Advisory Circular.
- Safety Approvals; Proposed Rule (to amend 14 C.F.R. part 414). 70 Fed. Reg. 32912.
- Miscellaneous Changes to Commercial Space Transportation Regulations; Proposed Rule (to amend 14 C.F.R. parts 401, 404, 413, 415, 420). 70 Fed. Reg. 29164.
- Licensing and Safety Requirements for Launch; Availability of Draft Regulatory Language and Notice of Public Meeting (to amend 14 C.F.R. parts 415, 417). 70 Fed. Reg. 9885.
- Commercial Space Transportation; Suborbital Rocket Launch; Notice and Request for Comments. 68 Fed. Reg. 59977.
- Licensing Test Flight Reusable Vehicle Missions, AC 431.35-3; Advisory Circular.
- Licensing and Safety Requirements for Launch; Proposed Rule (to amend 14 C.F.R. parts 413, 415, 417). 67 Fed. Reg. 49456. (This is a supplemental Notice of Proposed Rulemaking to the October 25, 2000, Proposed Rule.)
- Civil Penalty Actions in Commercial Space Transportation; Final Rule (14 C.F.R. parts 405, 406). 66 Fed. Reg. 2176.
- Licensing and Safety Requirements for Launch; Proposed Rule (to amend 14 C.F.R. parts 413, 415, 417). 65 Fed. Reg. 63921.
- Licensing and Safety Requirements for Operation of a Launch Site; Final Rule (14 C.F.R. parts 401, 417, 420). 65 Fed. Reg. 62812.
- Commercial Space Transportation Reusable Vehicle and Reentry Licensing Regulations; Final Rule (14 C.F.R. parts 400-435). 65 Fed. Reg. 56618.
- Financial Responsibility Requirements for Licensed Reentry Activities; Final Rule (14 C.F.R. part 450). 65 Fed. Reg. 56670.
- Expected Casualty Calculations for Commercial Space Launch and Reentry Missions, AC 431.35-1; Advisory Circular.
- Small-Scale Rockets; Notice of Public Meeting (to solicit comments on possible FAA regulation of small-scale rocket launches). 64 Fed. Reg. 73597.
- License Application Procedures, AC 413-1; Advisory Circular.
- Commercial Space Transportation Licensing Regulations; Final Rule (14 C.F.R. parts 401, 411, 413, 415, 417). 64 Fed. Reg. 19586.
- Part 440 Insurance Conditions, AC 440-1; Advisory Circular.
- Commercial Space Transportation Financial Responsibility Requirements for Licensed Launch Activities; Final Rule (14 C.F.R. part 440). 63 Fed. Reg. 45592.
- Commercial Space Transportation Licensing Regulations; Final Rule (14 C.F.R. Ch. III). 53 Fed. Reg. 11004.

In addition to the contact named above, Teresa Spisak (Assistant Director), Maureen Luna-Long, Bob Homan, Ashley Alley, Elizabeth Eisenstadt, Jim Geibel, Dave Hooper, Rosa Leung, Sara Ann Moessbauer, Josh Ormond, and Sandra Sokol made key contributions to this report.
In 2004, the successful launches of SpaceShipOne raised the possibility of an emerging U.S. commercial space tourism industry that would make human space travel available to the public. The Federal Aviation Administration (FAA), which has responsibility for safety and industry promotion, licenses operations of commercial space launches and launch sites. To allow the industry to grow, Congress prohibited FAA from regulating crew and passenger safety before 2012, except in response to high-risk events. GAO evaluated FAA's (1) safety oversight of commercial space launches, (2) response to emerging issues, and (3) challenges in regulating and promoting space tourism and responding to competitive issues affecting the industry. GAO reviewed FAA's applicable safety oversight processes and interviewed federal and industry officials.

Several measures indicate that FAA has provided a reasonable level of safety oversight for commercial launches. For example, none of the 179 commercial launches that FAA licensed over the past 17 years resulted in fatalities, serious injuries, or significant property damage. However, FAA shared safety oversight with the Department of Defense (DOD) for most of these launches because they took place at federal launch sites operated by DOD. In addition, FAA's licensing activities incorporate a system safety process, which GAO recognizes as effective in identifying and mitigating risks. GAO's analysis of FAA records indicates that the agency is appropriately applying management controls in its licensing activities, thereby helping to ensure that the licensees meet FAA's safety requirements.

In response to emerging issues in the commercial space launch industry, such as the potential development of space tourism, FAA has developed safety regulations and training for agency employees. The industry has raised concerns about the costs of complying with regulations and about the flexibility of the regulations to accommodate launch differences. However, FAA believes it has minimized compliance costs by basing its regulations on common safety standards and has allowed for flexibility by taking a case-by-case approach to licensing and by providing waivers in certain circumstances.

FAA faces several challenges and competitive issues in regulating and promoting space tourism. For example, FAA expects to need more experienced staff for safety oversight as new technologies for space tourism evolve, but has not estimated its future resource needs. Other challenges for FAA include determining the specific circumstances under which it would regulate space flight crew and passenger safety before 2012 and balancing its responsibilities for safety and promotion to avoid conflicts. Recognizing the potential conflict in the oversight of commercial space launches, Congress required the Department of Transportation (DOT) to commission a report by December 2008 on several issues, including whether the promotion of human space flight should be separate from the regulation of such activity. In addition, U.S. commercial space launch industry representatives said that they face competitive issues concerning high launch costs and export controls that can affect their ability to sell services overseas. The federal government has provided support to the industry to help lower launch costs.
The V-22 Osprey is a tilt-rotor aircraft—one that operates as a helicopter for takeoffs and landings and, once airborne, converts to a turboprop aircraft—developed to fulfill medium-lift operations such as transporting combat troops, supplies, and equipment for the U.S. Navy, Marine Corps, and Air Force special operations. Figure 1 depicts V-22 aircraft in various aspects of use.

The Osprey program was started in December 1981 to satisfy mission needs for the Army, Navy, and Air Force. Originally headed by the Army, the program was transferred to the Navy in 1982, when the Army withdrew from the program, citing affordability issues. The program was approved for full-scale development in 1986, and the first aircraft was flown in 1989. A month after the first flight, the Secretary of Defense stopped requesting funds for the program because of affordability concerns. In December 1989, the Department of Defense (DOD) directed the Navy to terminate all V-22 contracts because, according to DOD, the V-22 was not affordable when compared to helicopter alternatives, and production ceased. Congress disagreed with this decision, however, and continued to fund the program. In October 1992, the Navy ordered development to continue and awarded a contract to a Bell Helicopter Textron and Boeing Helicopters joint venture to begin producing production-representative aircraft. Low-rate initial production began in 1997.

In 2000, the MV-22 variant began operational testing, which led the Navy's operational testers to conclude that the MV-22 was operationally effective and operationally suitable for land-based operations. Later evaluations led testers to conclude that the MV-22 would be operationally suitable on ships as well. Based on the same tests, however, DOD's independent operational testers concluded that the MV-22 was operationally effective but not operationally suitable, due in part to reliability concerns. Despite the mixed test conclusions, a program decision meeting was scheduled for December 2000 to determine whether the V-22 should progress beyond low-rate initial production into full-rate production. Following two fatal crashes in 2000 that resulted in 23 deaths, the second occurring just before the full-rate production decision, the V-22 was grounded; rather than proceeding to full-rate production, the program was directed to continue research and development while low-rate production continued.

Before the V-22 resumed flight tests, requirements were modified and design changes were made to the aircraft to correct safety concerns and other problems. A second round of operational testing with modified aircraft was conducted in June 2005. Both Navy and DOD testers then recommended that the aircraft be declared operationally effective and suitable for military use. The Defense Acquisition Board approved it for military use, as well as for full-rate production, in September 2005.

The MV-22 deployments in Iraq were considered successful. As of January 2009, the 12 MV-22s deployed in Iraq and utilized by three separate squadrons had successfully completed all missions assigned to them, including general support—moving people and cargo—in what was considered an established, low-threat theater of operations. These deployments confirmed that the MV-22's enhanced speed and range enable personnel and internally carried cargo to be transported faster and farther than is possible with the legacy helicopters the MV-22 is replacing.
According to MV-22 users and troop commanders, its speed and range "cut the battlefield in half," expanding battlefield coverage with decreased asset utilization and enabling it to do two to three times as much as legacy helicopters in the same flight time. Cited advantages include more rapid delivery of medical care, more rapid completion of missions, and more rapid travel by U.S. military officials to meetings with Iraqi leaders. The MV-22 also participated in a few AeroScout missions and carried a limited number of external cargo loads.

However, questions have arisen about whether the MV-22 is the aircraft best suited to accomplish the full mission repertoire of the helicopters it is intended to replace, and some challenges in operational effectiveness have been noted. Also, aircraft suitability challenges, such as unreliable parts and supply chain weaknesses, drove availability significantly below minimum required levels.

The aircraft's use in Iraq demonstrated operational challenges. For example, the introduction of the MV-22 into Iraq in combination with existing helicopters has led to some reconsideration of the appropriate role of each. Battlefield commanders and aircraft operators in Iraq identified a need to better understand the role the Osprey should play in fulfilling warfighter needs. They indicated, for example, that the MV-22 may not be best suited for the full range of missions requiring medium lift because the aircraft's speed cannot be exploited over shorter distances or in transporting external cargo. These concerns were also highlighted in a recent preliminary analysis of the MV-22 by the Center for Naval Analysis, which found that the MV-22 may not be the optimal platform for those missions.

Availability challenges also affected the MV-22. In Iraq, the V-22's mission capability (MC) and full-mission capability (FMC) rates fell significantly below required levels, as well as below the rates achieved by legacy helicopters. The V-22 MC minimum requirement is 82 percent, with an objective of 87 percent; the actual MC rates for the three squadrons were 68, 57, and 61 percent. This experience is not unique to the Iraq deployment: low MC rates were experienced by all MV-22 squadrons, in and out of Iraq. In comparison, the Iraq-based legacy helicopter MC rates averaged 85 percent or greater from October 2007 to June 2008. Similarly, the program originally had an FMC requirement of 75 percent, but its actual rate of 6 percent in Iraq from October 2007 to April 2008 fell significantly short, due in large part to faults in the V-22's Ice Protection System. In areas where icing conditions are more likely, such as Afghanistan, this may threaten mission accomplishment.

Repair parts issues and maintenance challenges affected the availability of MV-22s in Iraq. V-22 maintenance squadrons faced reliability and maintainability challenges stemming from an immature supply chain that was not always responsive to the demand for repair parts and from aircraft and engine parts lasting only a fraction of their projected service life. The MV-22 squadrons in Iraq made over 50 percent more supply-driven maintenance requests than the average Marine aviation squadron in Iraq. Shortages of specific repair parts occurred despite an inventory intended to support 36 aircraft rather than the 12 actually deployed; only about 13 percent of those parts were actually used in the first deployment.
In addition, many parts that were used were in particularly high demand, which led to a shortage that caused cannibalization of parts from other V-22s, from MV-22s in the United States, and from the V-22 production line. Thirteen V-22 components accounted for over half the spare parts unavailable on base in Iraq when requested. These 13 lasted, on average, less than 30 percent of their expected life, and 6 lasted less than 10 percent of their expected life. V-22 engines also fell significantly short of service life expectancy, lasting less than 400 hours versus the program's estimated life of 500 to 600 hours.

V-22 missions in Iraq represent only a portion of the operations envisioned for the aircraft, but operational tests and training exercises have identified challenges in the V-22's ability to conduct operations in high-threat environments, carry the required number of combat troops and transport external cargo, operate from Navy ships, and conduct missions in more extreme environments throughout the world. While efforts are under way to address these challenges, success is uncertain because some of them arise from the inherent design of the V-22.

High-Threat Environments: The Osprey was intended to operate across a spectrum of high-threat combat situations, facing a broad range of enemy land- and sea-based weapons. However, its ability to do so is not yet demonstrated. The V-22 has maneuvering limits that restrict its ability to perform defensive maneuvers, and it does not have a required integrated defensive weapon needed to suppress threats while approaching a landing zone, disembarking troops within the landing zone, or leaving the landing zone. Currently, the Marine Corps intends to employ the aircraft in a manner that limits its exposure to threats—a change from the original intent that the system would be able to operate in such environments.

Transporting Personnel and External Cargo: Operational tests and shipboard training exercises have determined that the capacity of the MV-22 to transport troops and external cargo is, in some cases, below program requirements. The V-22 cannot carry a full combat load of 24 Marines if they are equipped as intended: the average weight of each Marine fully equipped with improved body armor and equipment has risen from 240 to 400 pounds. As a result, the aircraft can transport only 20 fully loaded combat troops rather than the required 24. Troop-carrying capacity may be further reduced in other configurations and flight scenarios. Most external cargo loads have not been certified for high-speed transport and thus would not allow the V-22's speed to be leveraged. The Osprey also would be unable to transport anticipated new, heavier equipment. A 2007 Center for Naval Analysis study found that the MV-22 will not be able to externally transport heavier equipment, such as the Joint Light Tactical Vehicle, which is to replace the Marine Corps' High-Mobility, Multi-Purpose Wheeled Vehicle (HMMWV). As a result, the study concluded that there will be less need for MV-22s for external lifting and an increased need for heavier-lift helicopters.

The weight of the MV-22 with equipment planned as upgrades to currently configured aircraft may pose a moderate risk to the program: the heavier the aircraft, the less it can carry. Weight growth resulting from planned MV-22 upgrades could reduce the aircraft's operational utility in transporting loads in higher-altitude regions of the world, such as Afghanistan.
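A back-of-the-envelope check of the troop-capacity figures above is possible if one assumes a fixed cabin payload allowance. The 8,000-pound allowance below is a hypothetical round number chosen only because it reproduces the reported 20-troop result; the report does not state the actual limit.

    # Hypothetical cabin payload allowance; not a figure from the report.
    PAYLOAD_ALLOWANCE_LBS = 8_000
    SEATS = 24  # requirement: a full combat load of 24 Marines

    for per_marine_lbs in (240, 400):  # weights cited in the report
        capacity = min(SEATS, PAYLOAD_ALLOWANCE_LBS // per_marine_lbs)
        print(f"{per_marine_lbs} lb per Marine -> {capacity} troops")
    # 240 lb per Marine -> 24 troops (seat-limited)
    # 400 lb per Marine -> 20 troops (weight-limited)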
Operating on Navy Ships: Efforts to ready the V-22 for deployment onboard Navy ships have identified numerous challenges. Because it is larger than the helicopter it is replacing, ships can carry fewer V-22s than the predecessor aircraft. Also, the V-22 cannot fully utilize all operational deck spots on ships: the MV-22 is cleared to take off and land from only four of the six operational deck spots on the LHA- and LHD-class ships usable by CH-46s. The Osprey's large inventory of repair parts also constrains hangar deck space essential for maintenance actions on the V-22 and other aircraft. The space needed for its repair parts is so large that some parts may need to be prepositioned ashore.

Safety concerns caused by downwash have also been documented. The V-22's proprotors create downwash significantly greater than that of the CH-46s it is replacing. The downwash affects operations below the aircraft, including troop embarkation and debarkation, hooking up external loads, and fastroping. During shipboard exercises, the V-22's downwash dislodged equipment such as life raft container securing bands and was so severe in one instance that another person was assigned to physically hold in place the sailor acting as the landing guide. Recently completed tests on the CV-22 found that the significant downwash also had various negative effects on land-based missions.

Challenges Operating Globally in Extreme Environments: The Osprey's ability to conduct worldwide operations in many environments is limited. The V-22 had a requirement that its fuselage and cockpit be designed to restrict the entry of nuclear, biological, and chemical contaminants into the aircraft. During initial operational tests, numerous problems existed with the seals that maintain cabin pressure, so the system could not be used. Without it, operational V-22s are forced to avoid or exit areas of suspected contamination and decontaminate affected aircraft, likely reducing their availability and sortie capability. The MV-22 is intended to support diverse mission requirements that will require it to fly during the day or at night, in favorable or adverse weather, and across a range of altitudes from close to the ground to above 10,000 feet above mean sea level. Current V-22 operating limitations do not support helicopter operations above 10,000 feet. The MV-22 currently does not have a weather radar, and the Osprey's Ice Protection System is unreliable, so flying through known or forecasted icing conditions is currently prohibited.

The V-22's original program cost estimates have changed significantly as research and development and procurement costs have risen sharply above initial projections. Operations and support costs are just beginning to accrue and are expected to rise. This has occurred even though performance standards and metrics for the V-22 were modified throughout the development effort. From initial development in 1986 through the end of 2007, the program's Research, Development, Test, and Evaluation cost increased over 200 percent—from $4.2 billion to $12.7 billion—while its procurement cost increased nearly 24 percent, from $34.4 billion to $42.6 billion. This increase coincided with significant reductions in the number of aircraft being procured—from nearly a thousand to fewer than 500 (most of which will be procured for the Marine Corps)—resulting in a 148 percent increase in procurement unit cost for each V-22. Operations and support (O&S) costs are also expected to rise.
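The growth percentages above can be checked directly from the rounded figures cited in the text. The sketch below uses 1,000 and 500 aircraft as stand-ins for the "nearly a thousand" and "fewer than 500" quantities; the actual procurement counts differ slightly, so the results are approximate.

    rdte_before, rdte_after = 4.2e9, 12.7e9    # RDT&E, then-year dollars
    proc_before, proc_after = 34.4e9, 42.6e9   # procurement, then-year dollars
    qty_before, qty_after = 1_000, 500         # rounded aircraft quantities

    def pct_growth(old, new):
        return (new - old) / old * 100

    print(f"RDT&E growth: {pct_growth(rdte_before, rdte_after):.0f}%")        # ~202%
    print(f"Procurement growth: {pct_growth(proc_before, proc_after):.0f}%")  # ~24%
    unit_before = proc_before / qty_before     # about $34 million per aircraft
    unit_after = proc_after / qty_after        # about $85 million per aircraft
    print(f"Unit-cost growth: {pct_growth(unit_before, unit_after):.0f}%")    # ~148%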
Table 1 details key aspects of the V-22 program's cost and schedule experience from development start to 2007.

O&S costs—typically the largest portion of a weapon system's total costs—are currently reported at $75.41 billion for the life cycle of the program, but O&S costs for the program are just beginning and are expected to rise. One indication they may rise is the current cost per flying hour, which is over $11,000—more than double the target estimate for the MV-22 as well as 140 percent higher than the cost for the CH-46E. The Osprey's Iraq experience demonstrated that the rise in cost is due in part to unreliable parts, the cost of some parts, and required maintenance. As illustrated in figure 2, the program's estimated future funding needs are approximately $100 billion (then-year dollars)—nearly $25 billion in procurement and around $75 billion in O&S.

According to Marine Corps officials, the presence of unreliable parts contributed to reliability and maintainability issues for MV-22s deployed in Iraq, and a program is in place to address underperforming components. However, program management does not consider the current reliability and maintainability strategy to be coherent. Problems with parts reliability have resulted in more maintenance activity than expected, and if there is no improvement, overall cost and maintenance hours may remain high. Changes to the current engine sustainment contract with Rolls Royce—the V-22's engine manufacturer—could also affect the program's already rising O&S costs.

Initially, the Marine Corps' proposed performance parameters for the V-22 focused on speed, range, and payload. However, the Joint Requirements Oversight Council deferred consideration of system requirements until completion of the 1994 Cost and Operational Effectiveness Analysis that validated the V-22 over other alternatives. While reports indicate that the MV-22 is meeting all its key performance parameters, program officials said modifications were made to balance aircraft operational requirements against technical risks and program costs. In 2001, for example, modifications consolidated 14 key performance parameters into 7 for the MV-22 variant. While the office of the Director, Operational Test and Evaluation (DOT&E) found the MV-22 operationally effective in 2000, it did not find it operationally suitable, due in part to reliability concerns. Mission capability, one of the metrics used to measure suitability, was modified in 2004 so that the mission capability rate does not have to be met until the aircraft reaches system maturity (60,000 flight hours); the requirement previously specified no minimum number of flight hours. According to Marine Corps Headquarters officials, the aircraft currently has over 50,000 hours and may reach the 60,000-hour threshold within a year.

Concerns about V-22 weight growth and how it may affect aircraft performance have continued. In 2005, a DOT&E report on the second operational test of the MV-22 predicted a drop in performance due to a projected weight increase. However, according to Navy operational testers who tested the aircraft in 2007, performance did not decrease. DOT&E did not report on the 2007 test. The program office is currently tracking weight increase in the newest version of the aircraft as a potential risk to the achievement of select key performance parameters.
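For reference, the benchmarks implied by the flying-hour comparison above can be back-solved from the stated ratios. These are rough inferences only, since "more than double" and "140 percent higher" are themselves rounded, and neither benchmark value is reported directly in the text.

    current_cost = 11_000          # dollars per flying hour, cited above
    print(current_cost / 2.0)      # implied MV-22 target: below about $5,500
    print(current_cost / 2.4)      # implied CH-46E cost: roughly $4,583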
After more than 20 years in development, and 14 years since the last cost and operational effectiveness analysis was developed to reaffirm the decision to proceed with the V-22 program, the MV-22's experience in Iraq demonstrated that the Osprey can complete missions assigned in low-threat environments. Its speed and range were enhancements. However, challenges may limit its ability to accomplish the full repertoire of missions of the legacy helicopters it is replacing. If so, those tasks will need to be fulfilled by some other alternative.

Viewed more broadly, the MV-22 has yet to fully demonstrate that it can achieve the original required level of versatility. To be useful to the warfighter in a variety of climates and places, its ability to address and resolve a range of operational challenges must be re-evaluated. Furthermore, suitability challenges that lower aircraft availability and affect the operations and support funding that may be required to maintain the fleet need to be addressed. Based on the Iraq experience, the cost per flight hour is more than double the target estimate. DOD is therefore faced with the prospect of directing more money to a program whose military utility in some areas remains unproven. Now is a good time to consider the return on this investment as well as other, less costly alternatives that may fill the current requirement.

The V-22 program has already received or requested over $29 billion in development and procurement funds. The estimated funding required to complete development and procure additional V-22s is almost $25 billion (then-year dollars). In addition, the program continues to face a future of high operations and support funding needs, currently estimated at $75.4 billion for the life cycle of the program. Before committing to the full costs of completing production and supporting the V-22, the uses, cost, and performance of the V-22 need to be clarified, and alternatives should be reconsidered. Questions to consider include the following:

- To what degree is the V-22 a suitable and exclusive candidate for the operational needs of the Marine Corps and other services?
- How much will it cost, and how much can DOD afford to spend?
- To what degree can a strategy be crafted for ensuring control over these future costs?
- If the V-22 is only partially suitable, to what degree can another existing aircraft, some mixture of existing aircraft (including V-22s), or a new aircraft perform all or some of its roles more cost-effectively?

Some consideration should be given to evaluating the roles such aircraft play in today's theaters of war and whether their performance warrants their cost. Failure to re-examine the V-22 program at this point risks the expenditure of billions of dollars on an approach that may be less effective than alternatives. Furthermore, if the suitability challenges facing the program are not adequately addressed, the future cost of the program could rise significantly, requiring funds that might otherwise be made available to satisfy other needs. This is why we recommended in our May 11 report that the Secretary of Defense (1) re-examine the V-22 by requiring a new alternatives analysis and (2) require the Marine Corps to develop a prioritized strategy to improve system suitability, reduce operational costs, and align future budget requests. DOD concurred with our second recommendation, but not the first.
In non-concurring with our recommendation for a new V-22 alternatives analysis, DOD stated that it supports validating required MV-22 quantities and the proper mix of aircraft, but not by means of a new V-22 alternatives analysis. Rather, DOD stated that planning for all elements of Marine Corps aviation (including required quantities, location, and employment of medium-lift assets) and total force affordability are reviewed and updated annually in the Marine Aviation Plan. We maintain our recommendation for a new alternatives analysis as a means of comparing a fuller range of alternatives, including their costs, operational suitability, and operational effectiveness under varying scenarios and threat levels. Furthermore, a V-22 alternatives analysis could assure congressional decision makers that a reasoned business case exists to support the planned acquisition of an additional 282 V-22s and an expenditure of almost $25 billion in procurement funds in fiscal years 2010 and beyond.

Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time.

For further information about this testimony, please contact Michael J. Sullivan at (202) 512-4841 or [email protected]. Individuals making key contributions to this testimony include Bruce H. Thomas, Assistant Director; Jerry W. Clark; Bonita J.P. Oden; Bob Swierczek; Kathryn E. Bolduc; Jonathan R. Stehle; Johanna Ayers; Jason Pogacnik; Hi Tran; William Solis; and Marie P. Ahearn.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the 1980s, the V-22, developed to transport combat troops, supplies, and equipment for the U.S. Marine Corps and to support other services' operations, has experienced several fatal crashes, demonstrated various deficiencies, and faced virtual cancellation--much of which it has overcome. Although recently deployed in Iraq and regarded favorably, it has not performed the full range of missions anticipated, and how well it can do so is in question.

Given concerns about the V-22 program, GAO recently reviewed and, on May 11, 2009, reported on MV-22 operations in Iraq; strengths and deficiencies in terms of the capabilities expected of the V-22; and past, current, and future costs. In that report, GAO recommended that the Secretary of Defense require (1) a new alternatives analysis of the V-22 and (2) that the Marine Corps develop a prioritized strategy to improve system suitability, reduce operational costs, and align future budget requests. The Department of Defense (DOD) concurred with the second recommendation, but not the first. GAO believes both recommendations remain valid. This testimony highlights GAO's findings from that report.

In speaking of the V-22, we are actually speaking of two variants of the same aircraft: the MV-22, used by the Marine Corps, and the CV-22, used by the Air Force to support special operations. This statement largely focuses on the MV-22 but also refers to the V-22 and CV-22.

As of January 2009, the 12 MV-22s in Iraq had successfully completed all missions assigned in a low-threat theater of operations--using their enhanced speed and range to deliver personnel and internal cargo faster and farther than the legacy helicopters being replaced. However, challenges to operational effectiveness were noted that raise questions about whether the MV-22 is best suited to accomplish the full repertoire of missions of the helicopters it is intended to replace. Additionally, suitability challenges, such as unreliable component parts and supply chain weaknesses, led to low aircraft availability rates. Additional challenges have been identified with the MV-22's ability to operate in high-threat environments, carry the required number of combat troops and transport external cargo, operate from Navy ships, and conduct missions in more extreme environments throughout the world. While efforts are underway to address these challenges, it is uncertain how successful they will be, as some of them arise from the inherent design of the V-22.

The V-22's original program cost estimates have changed significantly. From 1986 through 2007, the program's Research, Development, Test, and Evaluation cost increased over 200 percent--from $4.2 billion to $12.7 billion--while the cost of procurement increased 24 percent, from $34.4 billion to $42.6 billion. This increase coincided with significant reductions in the number of aircraft being procured--from nearly 1,000 to less than 500--resulting in a 148 percent increase in cost for each V-22. Operations and support costs are expected to rise. One indication is the current cost per flying hour, which is over $11,000--more than double the target estimate for the MV-22.

After more than 20 years in development, the MV-22's experience in Iraq demonstrated that the Osprey can complete missions assigned in low-threat environments. Its speed and range were enhancements. However, challenges may limit its ability to accomplish the full repertoire of missions of the legacy helicopters it is replacing. If so, those tasks will need to be fulfilled by some other alternative.
Additionally, the suitability challenges that lower aircraft availability and affect operations and support costs need to be addressed. The V-22 program has already received or requested over $29 billion in development and procurement funds. The estimated funding required to complete development and procure additional V-22s is almost $25 billion (then-year dollars). In addition, the program continues to face a future of high operations and support cost funding needs, currently estimated at $75.4 billion for the life cycle of the program. Before committing to the full costs of completing production and supporting the V-22, the uses, cost, and performance of the V-22 need to be clarified and alternatives should be reconsidered.
The sun provides the energy that determines the climate and weather. Solar radiation passes through space and is largely absorbed by components of the global climate system (the atmosphere, oceans, and land, as well as the biosphere, which includes all living things); the remaining radiation is reflected. The solar radiation absorbed by the earth's surface is released as infrared radiation. Some of this radiation passes back through the atmosphere, and some is absorbed in the atmosphere by the molecules of gas—principally water vapor, carbon dioxide, methane, and chlorofluorocarbons—known collectively as greenhouse gases. These gas molecules act as a partial thermal blanket, trapping much of the heat energy and redirecting it to the earth's surface and lower atmosphere. This naturally occurring process, called the greenhouse effect (see fig. 1), helps to maintain the earth's temperature at an average of approximately 60 degrees Fahrenheit.

Additional atmospheric warming—called the enhanced greenhouse effect or global warming—appears to be associated with human activities. During the past century, as industry, agriculture, and transportation have grown, so, too, have atmospheric concentrations of heat-trapping greenhouse gases (see app. I). At the same time, the earth has gotten warmer, according to historical data. Recorded temperatures for the period from 1860 through 1993 show a warming trend that generally coincides with the increased use of fossil fuels during the Industrial Revolution—and, hence, with the increased emission of greenhouse gases.

During the past 50 to 100 years, volcanic eruptions have combined with the increased combustion of fossil fuels and emission of greenhouse gases to increase the concentration of aerosols in the lower atmosphere. Scientists believe that because these aerosols deflect sunlight, they have partially offset the effects of global warming. As a result, scientists surmise, temperatures have not reached the levels projected by GCMs, which do not include the aerosols' effects.

To help understand the global climate system's response to emissions of greenhouse gases, scientists use three types of GCMs: atmospheric, oceanic, and coupled. In general, atmospheric GCMs predict the physical behavior of the atmosphere. Oceanic GCMs represent the physics of the ocean. Coupled GCMs, which scientists regard as the most advanced of the models, physically join atmospheric and oceanic GCMs and treat the evolution of the climate in both domains. To improve predictions of the future climate, modelers are also striving to couple, and to some degree have coupled, (1) the land surface to the atmosphere and (2) the Antarctic sea ice to both the ocean and the atmosphere.

All types of GCMs process vast quantities of data on variables affecting climate. Using complex mathematical equations to represent the actions and interactions of these variables, the GCMs process the data to project patterns of climatic conditions. (App. II shows how a coupled GCM works.) To test the accuracy of a model's projections, modelers run the model with their best estimates of historical climatic data and compare the resulting projections with records of actual climatic conditions for the period being modeled.
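As a rough illustration of the greenhouse physics described above, the sketch below implements a zero-dimensional energy-balance model, the simplest conceptual ancestor of a GCM. The parameter values are standard textbook approximations rather than figures from this report, and real GCMs solve far richer equations over three-dimensional grids.

    S = 1361.0        # incoming solar radiation at Earth's distance, W/m^2
    ALBEDO = 0.30     # fraction of sunlight reflected back to space
    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

    def surface_temp_kelvin(greenhouse_factor):
        """Equilibrium surface temperature for a one-layer atmosphere that
        re-radiates a fraction of outgoing infrared back to the surface;
        greenhouse_factor = 0 means no greenhouse effect at all."""
        absorbed = S / 4.0 * (1.0 - ALBEDO)  # averaged over the sphere
        # Balance: SIGMA * T^4 * (1 - greenhouse_factor / 2) = absorbed
        return (absorbed / (SIGMA * (1.0 - greenhouse_factor / 2.0))) ** 0.25

    for f in (0.0, 0.78):  # no greenhouse vs. a roughly present-day value
        t_k = surface_temp_kelvin(f)
        t_f = (t_k - 273.15) * 9.0 / 5.0 + 32.0
        print(f"greenhouse factor {f}: {t_k:.0f} K (about {t_f:.0f} F)")
    # f=0.0 yields roughly 255 K (about -1 F); f=0.78 yields roughly 288 K
    # (about 59 F), close to the ~60 F average cited above.

The gap between the two cases corresponds to the naturally occurring greenhouse effect the report describes; GCMs aim to predict how that balance shifts as gas concentrations change.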
Modelers assume that if the model can accurately simulate actual climatic conditions for prior periods, then it can be used to accurately project future climatic conditions.

Although the earth's gradual warming since the mid-19th century is generally consistent with GCMs' estimates of the effects of greenhouse gases (when adjusted for the effects of aerosols), scientists have not been able to attribute the warming conclusively to the enhanced greenhouse effect or to quantify its effects. Specifically, they have not been able to uniquely and quantitatively distinguish the effects of higher concentrations of greenhouse gases from the effects of other factors that can change the climate. Such factors include natural fluctuations in the global climate system, increases in atmospheric ozone, air pollution, and aerosols emitted into the atmosphere from volcanic eruptions. Until more is known about the relative influence of these various factors on the earth's climate, GCMs' estimates of global warming will remain uncertain.

Over the last decade, GCMs have accurately simulated many elements of the observed climate, providing useful indications of some future climatic conditions. For example, atmospheric models have demonstrated some skill in portraying aspects of atmospheric variability, such as the surface temperature of the sea. Oceanic models have also simulated the general circulation of the ocean, including the patterns of the principal currents. Coupled models, though still prone to small-scale errors, have simulated the current climate on a large scale as well as portrayed large-scale atmospheric and oceanic structures.

This progress notwithstanding, the models remain limited in their ability to estimate, with desired accuracy, the magnitude, timing, and regional distribution of future climatic changes. These limitations stem from scientists' imperfect understanding of the global climate system and computers' insufficient capacity to perform more detailed simulations. More specifically, the accuracy of the models' predictions is limited by (1) incomplete or inadequate representations of the processes affecting climate and (2) insufficient computer power. Research is being conducted to overcome both the scientific and the technical limitations affecting the accuracy of GCMs' estimates.

According to the U.S. Global Change Research Program, most GCMs include the most important processes that affect climate, such as radiation, convection, and land surface exchanges. However, some models do not include or fully incorporate some processes, and even the most advanced models do not adequately represent the interactions of some processes. None of the models fully incorporates certain components of the global climate system, called feedbacks or feedback mechanisms, and none adequately represents the interactions of these mechanisms with greenhouse gases, called feedback processes.

Atmospheric and oceanic GCMs include fewer processes than coupled GCMs, and their simulations are, therefore, more limited and, in some cases, less accurate. Atmospheric models do not fully portray the influence of oceanic pressures (currents) and fluctuations in climate, while oceanic models do not fully account for the effects of atmospheric surface winds. The omission or incomplete incorporation of some processes may introduce errors into these models' projections.
For example, atmospheric models tested in 1991 produced systematic errors in their projections of sea level pressure, temperature, zonal wind, and precipitation. Compared with atmospheric and oceanic GCMs, coupled GCMs include more processes and interactions at the ocean-atmosphere interface, but even they do not include critical biospheric and chemical interactions with the atmosphere. The U.S. Global Change Research Program is supporting efforts by modeling groups to include more complete sets of processes in their models and to identify systematic errors in the models.

Although coupled GCMs produce more comprehensive simulations of current climatic conditions than either atmospheric or oceanic GCMs, their simulations still differ from actual conditions. Modelers believe that these models are impaired by a condition known as climatic drift, which results from imbalances in the models' analyses of heat and moisture variables. These imbalances cause the models' estimates of temperature and precipitation to deviate from actual conditions. For example, in an experiment conducted by the National Center for Atmospheric Research in 1988, the models estimated wintertime ocean temperatures that were 7 degrees warmer than observed temperatures for the icebound region of Antarctica and 9 degrees colder than observed temperatures for the tropics. Modelers either accept climatic drift or try to correct its effects by inserting adjustments, called flux adjustments. Because flux adjustments artificially improve the models' performance, their use is controversial. Scientists believe that an increased understanding of the interactions between atmospheric and oceanic variables—and, hence, a more accurate mathematical representation of these interactions—may eventually remove the need for flux adjustments. Reducing the need for flux adjustments is an objective of coupled model research.

GCMs include many of the most important feedback mechanisms, such as vegetation, water vapor, ice cover, clouds, and the ocean. However, the models do not yet adequately represent the interactions of these mechanisms with greenhouse gases. Such interactions can amplify, dampen, or stabilize the warming produced by increased concentrations of greenhouse gases. The influence of feedback mechanisms on climate is likely to increase as concentrations of greenhouse gases increase; however, modelers do not fully understand the effects of these mechanisms and have not learned how to represent them with sufficient accuracy in models. Although they have clarified the role of water vapor and improved their ability to model its effects, they are still seeking to understand and accurately model the effects of clouds, which have the greatest potential of all the feedback mechanisms to amplify or moderate global warming. Recent studies have shown that different schemes for modeling cloud formation processes can lead to substantially different projections of the earth's temperature. In 1989, for instance, two simulations, which varied only in their treatment of the cloud feedback process, produced estimates of the increase in the earth's annual average surface temperature of 4.9 and 9.4 degrees, respectively.

Insufficient computer power affects the accuracy of GCMs' estimates because even the most powerful computers are limited in their ability to store and analyze the vast quantity of data required to accurately simulate changes in the global climate.
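The cloud-sensitivity example above can be illustrated with the standard feedback-gain relation, in which a baseline no-feedback warming dT0 is amplified to dT0 / (1 - f) by a net feedback factor f. The values below are hypothetical, chosen only so the two cases reproduce the 4.9- and 9.4-degree estimates cited in the text.

    DT0 = 2.2  # hypothetical no-feedback warming, degrees

    for f in (0.550, 0.766):  # hypothetical net feedback factors
        warming = DT0 / (1.0 - f)
        print(f"feedback factor {f}: projected warming {warming:.1f} degrees")
    # 0.550 -> 4.9 degrees; 0.766 -> 9.4 degrees. Small changes in how a
    # feedback such as clouds is modeled translate into large changes in
    # projected warming.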
Insufficient computer power affects the accuracy of GCMs’ estimates because even the most powerful computers are limited in their ability to store and analyze the vast quantity of data required to accurately simulate changes in the global climate. Modelers have tried to overcome these limitations by introducing assumptions into their models that deliberately oversimplify some operations in order to free the GCMs’ capacity and time for other, more critical operations. For example, modelers have assumed that the ocean was not warmed by emissions of greenhouse gases before 1985. Although this assumption frees computing capacity for the GCMs, it introduces an error, called the cold start error, that increases the uncertainty of the GCMs’ predictions. Another oversimplification, the division of the earth into relatively large grids for analytical purposes, prevents the GCMs from accurately predicting regional changes in climate. Simulations by coupled GCMs that are calculated on the assumption that the ocean was not warmed by increased emissions of greenhouse gases before 1985 do not adequately account for the ocean’s reduced capacity to absorb these emissions. In fact, the ocean will reach its capacity for absorbing these emissions sooner—possibly decades sooner—than the coupled GCMs calculate. It will then deflect more of the heat-trapping emissions to the atmosphere, thereby enhancing global warming more rapidly than the models predict. While recognizing that the cold start error artificially delays the onset of global warming in GCMs’ predictions, scientists do not know by how much or for how long it distorts the predictions. Overall, they believe that it causes the models to underestimate the change in temperature that will result from the emissions. Modelers have shown that the cold start error can cause projections of the earth’s average annual temperature to differ by as much as 0.7 degrees after 50 years. According to scientists, an extraordinary commitment of computer time would be required to project the timing of future temperature changes more accurately. Completing the number of computer runs needed to arrive at more precise timing projections could take many months even on a state-of-the-art supercomputer. Still another limitation affecting the accuracy of GCMs’ estimates is the relatively large size of the grids into which the models divide the earth. These grids typically cover an area about the size of South Carolina. Although their use enables GCMs to depict larger-scale regional effects in relatively large, homogeneous regions, it does not allow modelers to incorporate detailed regional features. Consequently, the use of large grids prevents the models from accurately forecasting climatic changes for smaller, less homogeneous regions. The use of smaller grids would permit the incorporation of more detailed features that could be used to project regional changes more precisely. However, models using smaller grids would take longer to run. Each grid contains a single value for each variable for the entire area represented. Today’s grids are smaller than those we described in our 1990 report on global warming, but they are not yet small enough to produce the information policymakers and planners need to develop strategies for adapting to regional changes. Researchers believe that the combination of greater computer power, which would permit the use of smaller grids, and greater understanding of cloud formation processes, which would permit the incorporation of this important but often excluded feedback mechanism, would produce more accurate projections of regional climatic changes.
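The computational penalty for smaller grids can be illustrated with a standard back-of-the-envelope scaling argument. The sketch below assumes the common rule of thumb that refining the horizontal grid multiplies the cell count in each horizontal direction and, through the stability limit on the time step, forces proportionally more steps; actual models scale somewhat differently depending on their numerics:

# Rough scaling of computer time as the horizontal grid is refined.
# Assumes cost grows with (cells in x) * (cells in y) * (time steps),
# with the stable time step shrinking in proportion to the grid spacing.

def relative_cost(refinement_factor):
    cells = refinement_factor ** 2  # finer in both horizontal directions
    steps = refinement_factor       # proportionally more, shorter time steps
    return cells * steps

for factor in (1, 2, 4, 8):
    print(f"grid {factor}x finer -> roughly {relative_cost(factor)}x the computer time")

Under this rule, an eightfold refinement costs roughly 500 times the computer time, which is consistent with the observation that more precise projections could take many months even on a state-of-the-art supercomputer.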
To improve the accuracy of GCMs’ estimates, scientists are developing models that incorporate more of the processes affecting the climate system (particularly cloud formation processes) and better reflect interactions among various components of the climate system, including interactions between or among the ocean and the atmosphere; the land surface, the biosphere, and the atmosphere; and the cryosphere (frozen regions), the ocean, and the atmosphere. They are also developing larger and faster computers that can manipulate the data required for longer simulation periods and smaller grids. In addition, they are collecting more data and conducting more research on the processes affecting climate and improving the international exchange of such data. Various international programs, such as the World Climate Research Programme and the Global Climate Observing System, currently have efforts under way to address these actions. In commenting on a draft of this report, the Director of the Office of the U.S. Global Change Research Program and agency officials stated that the program has several ongoing efforts to address the limitations of GCMs discussed in this report. For example, to address the models’ inadequate representation of processes affecting the climate, the program is devoting approximately 30 percent of its $1.8 billion budget for fiscal year 1995 to research aimed at improving scientific understanding of these processes. In addition, to address the need for increased computer power, the program has, through NSF, established a dedicated computing facility for modeling the climate system, known as the Climate Simulation Laboratory, in cooperation with the National Center for Atmospheric Research. This facility will provide state-of-the-art computer resources and data storage systems for use in major modeling research simulations. The goals and funding for the U.S. Global Change Research Program’s fiscal year 1995 research programs are summarized in appendix III. Further information on the program’s efforts to reduce the uncertainties of GCMs’ projections appears in a letter from the Subcommittee on Global Change Research, which is reproduced in appendix VI. Five federal agencies reported spending an estimated $122.6 million during fiscal years 1992 through 1994 to fund modeling activities to improve predictions of the future climate. As shown in table 2, the agencies reported spending approximately $36.9 million, $40.5 million, and $45.3 million for these projects in fiscal years 1992, 1993, and 1994, respectively. Of the five agencies, DOE had the largest climate change modeling program, representing about 36 percent of the total cost for all 3 years. Appendix IV presents background and cost information on each agency’s climate modeling program. Most of the agencies’ climate modeling research was contracted out to universities and research laboratories throughout the United States. These modeling activities were conducted at five major modeling centers in the United States: (1) the National Center for Atmospheric Research in Boulder, Colorado; (2) NOAA’s Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey; (3) NASA’s Goddard Institute for Space Studies in New York, New York; (4) NASA’s Goddard Space Flight Center in Greenbelt, Maryland; and (5) DOE’s Lawrence Livermore National Laboratory in Livermore, California.
Although the accuracy of general circulation models’ estimates of climatic change has improved over the past decade, these estimates are still limited by incomplete and inaccurate representations of the processes affecting climate and by insufficient computer power. These limitations prevent scientists from carrying out analyses that would yield more precise information about the magnitude, timing, and regional effects of predicted increases in warming. Ongoing efforts to collect and analyze data, improve representations of climatic processes, and develop and apply more powerful computers should improve the accuracy of the models’ estimates. Whether these estimates will provide policymakers with the information they need to respond to possible future climatic changes will depend on the degree of certainty expected from the models, the resources provided to improve the models, and advances in scientists’ fundamental understanding of the climate system. We obtained comments from representatives of DOE, NASA, NSF, NOAA, and EPA. According to these comments, which were coordinated by the Director of the Office of the U.S. Global Change Research Program, the agencies found, overall, that this report provided an interesting and useful perspective on the most important factors that limit the credibility of general circulation models’ projections of future climatic conditions. However, the agencies believed that the report would be more useful if it provided some perspective on what the modeling community has learned about the models’ limitations and what efforts are under way to address them. The agencies also believed that the report focused too heavily on the limitations of the models while remaining largely silent on their accomplishments. We have responded to the agencies’ comments by adding information about ongoing research to overcome the models’ scientific and technical limitations and about recent positive results achieved with the models. Additionally, the agencies believed that the report should include, in full, the report of the Forum on Global Climate Change Modeling (Forum), which was developed to inform policymakers about the issues associated with using general circulation models. While we believe that the Forum’s document is useful, we did not include it in this report because its major points are summarized in the agencies’ detailed comments and are included in the body of this report, insofar as they pertain to the objectives of this assignment. Furthermore, since the Forum’s report is available to the public from the Office of the U.S. Global Change Research Program (USGCRP Report 95-01, May 1995), we believe that persons desiring the additional detail may request the document. The agencies’ comments and our response appear in appendix VI. We conducted our work between September 1994 and June 1995 in accordance with generally accepted government auditing standards. We reviewed various scientific documents that discussed the models’ limitations and the implications of these limitations. Through the Director of the Office of the U.S. Global Change Research Program, we collected data on costs from the five agencies that fund U.S. global climate change modeling. We did not independently verify the validity of the cost data. Appendix V more fully discusses our scope and methodology. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter.
At that time, we will send copies to the Director of the U.S. Global Change Research Program and other interested parties. We will make copies available to others upon request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix VII. Over the past century, human activities have significantly increased atmospheric concentrations of carbon dioxide, methane, and nitrous oxide—known, together with water vapor, as greenhouse gases. Concentrations of carbon dioxide, the most abundant greenhouse gas after water vapor, increased by about 25 percent from preindustrial times until 1993. Currently, the growth in concentrations is primarily attributable to the increased use of fossil fuels, whereas, in the 19th and early 20th century, it was due to deforestation and the expansion of agriculture. Methane concentrations increased by about 9 percent between 1978 and 1987 and have more than doubled since preindustrial times. Nitrous oxide concentrations increased by about 9 percent from preindustrial times until 1993. Table I.1 details the increases in greenhouse gas concentrations, the periods when the increases occurred, and the sources of the emissions. The relative contribution of each gas to the enhanced greenhouse effect is determined by the ability of the gas to absorb infrared radiation and by its atmospheric abundance. Atmospheric abundance is determined by the quantity of gas emitted and by its atmospheric life span. For example, although a methane molecule is a more effective absorber of infrared radiation than a carbon dioxide molecule, it contributes only about a third as much to the enhanced greenhouse effect because it is less abundant. Carbon dioxide is believed to have contributed 70 percent of the enhanced greenhouse effect from the beginning of the Industrial Revolution up to 1990. Figure I.1 depicts the cumulative relative contributions of carbon dioxide, methane, and nitrous oxide to the enhanced greenhouse effect. Chlorofluorocarbons are not included in the figure because, unlike carbon dioxide, methane, and nitrous oxide, their atmospheric concentrations vary considerably across the globe and are difficult to quantify. General circulation models (GCMs) are the most advanced tool that scientists have to model climate and predict climatic change. These models comprise complex mathematical equations that describe various physical processes and interrelationships, including seasonal changes in sunlight, global air currents, and other factors that affect the climate. Because the equations are so complex, modelers cannot solve them exactly and consequently must segment the earth into a discrete number of grids to approximate the solutions. The coupled model depicted in figure II.1 calculates solutions for 18 layers above each grid box (extending from the ocean’s surface to the top of the atmosphere) and 20 layers below each grid box (extending from the surface to the floor of the ocean).
Figure II.1: How One General Circulation Model Works
The earth is divided into a gridwork of 50,520 “boxes.” The atmosphere above each box is divided into 18 layers. The ocean under each box is divided into 20 layers. Each layer’s program represents a set of variables (such as winds and temperature) and formulas for basic physical laws (such as the conservation of energy). The computer calculates how processes in each layer affect conditions in each neighboring layer and feeds that data into adjoining layers. The computer repeatedly recalculates as modeled days pass into months. As seasons change, it varies the amount of sunlight.
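As a loose illustration of the calculation that figure II.1 describes, the sketch below steps a single variable forward on a toy grid, with each box exchanging information with its neighbors at every step; a real GCM carries many variables through dozens of layers and solves the full physical equations rather than the simple neighbor averaging assumed here:

import numpy as np

# Toy analogue of the scheme in figure II.1: each grid box holds a value,
# and each step feeds information into the adjoining boxes.
nlat, nlon = 8, 16                  # vastly coarser than a real GCM grid
temp = np.full((nlat, nlon), 15.0)  # degrees everywhere
temp[3:5, 6:10] = 25.0              # a warm region to spread around

def step(field, mixing=0.1):
    # Blend each box toward the average of its four neighbors
    # (wraparound boundaries, for simplicity).
    neighbors = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                 np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4.0
    return field + mixing * (neighbors - field)

for day in range(30):               # "modeled days pass into months"
    temp = step(temp)

print(f"warmest box after 30 modeled days: {temp.max():.2f} degrees")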
The U.S. Global Change Research Program (USGCRP) provides insight into the causes and effects of changes in the earth’s climate system, especially those related to human activities, and is developing tools to assess options for responding to global change. As the depth of understanding grows, the research results are intended to provide increasingly valuable support for formulating national and international policy, as well as for evaluating the impact and effectiveness of the actions taken. The research activities are grouped by major focus in six broad categories:
observing the earth’s climate system through land-, ocean-, and satellite-based measurement systems;
managing data and information to ensure that they are preserved and available for national and international researchers to use;
understanding global change processes, ranging from cloud formation and hydrologic processes to the accumulation of atmospheric ozone;
predicting the magnitude, timing, and extent of global change;
evaluating the consequences of global change by analyzing the impact of global change on the environment and on society; and
assessing policies and options for responding to global change.
The President’s fiscal year 1995 budget for the research activities is summarized in table III.1. The earth’s environmental system encompasses the atmosphere; the oceans and marine life; the land surface and biosphere (plant and animal life); and the cryosphere (snow, glaciers, sea ice, and icecaps). Because this complex, interconnected system cannot be reconstructed and experimented with in the traditional laboratory sense, numerical models are used to simulate the behavior of the earth’s system and its fluctuations, variations, and responses to disturbances, including the effects of human activities. Coupled atmospheric/oceanic GCMs are, within the limits of available resources and ingenuity, designed to include as much of the important and relevant physics, chemistry, and biology as is understood and as is needed to address particular questions posed to the models about future climatic change. A wide array of modeling activities supports the need to provide society with the best possible predictions of weather; anomalous seasonal events, such as floods and droughts; fluctuations in the frequency of climatic extremes; and long-term changes in climate. These activities, which are conducted at research centers, universities, and government laboratories, are supported by government agencies that have responsibility for scientific research, including (1) the Department of Energy (DOE), (2) the National Aeronautics and Space Administration (NASA), (3) the National Science Foundation (NSF), (4) the National Oceanic and Atmospheric Administration (NOAA), and (5) the Environmental Protection Agency (EPA). These agencies support research through which full models of the global climate are improved, tested, and, in some cases, used to project the future climate and its potential changes. The agencies’ roles in modeling climatic change are discussed below. DOE’s modeling program focuses on changes and variations in the earth’s climate—especially those caused by human activities—that may occur over periods ranging from decades to centuries.
DOE’s program (1) tests the performance of models from around the world by comparing their ability to represent the recent climate, (2) simulates the effects of carbon dioxide emissions on the climate, and (3) develops global models by taking advantage of the new generations of highly parallel computers. These activities are intended to develop the coupled models of the earth’s oceans, atmosphere, and land surface that are needed to project the climate more accurately over periods of tens to hundreds of years. During fiscal year 1994, DOE funded modeling research at 24 universities and research centers. NASA’s modeling program focuses on developing and applying a four-dimensional model that places special emphasis on the role of data from satellites in providing research-quality information on the climate system. NASA’s program supports efforts to (1) better understand the relative roles of the various factors that have changed or are changing the earth’s climate; (2) analyze the global effects of feedback mechanisms, such as clouds, that can amplify or moderate climatic change; and (3) develop tools for integrating data from satellites and other sensors into a coherent record of atmospheric behavior. During fiscal year 1994, NASA supported modeling research on climatic change at its Goddard Institute for Space Studies and Goddard Space Flight Center and at two universities. NSF’s modeling program focuses on climatic change that occurs over seasons to centuries and provides computer resources to the research community. Specifically, NSF’s programs emphasize research on coupling models of the atmosphere, oceans, land surface, and cryosphere into a single integrated model that can simulate the global climate system over the long term. NSF also supports wide-ranging research activities, including simulations of climates of the distant past, of natural variations in the present climate, and of the interactions of the various processes and influences. During fiscal year 1994, NSF funded major modeling research projects at 10 universities and research centers. NOAA’s modeling program focuses primarily on seasonal to interannual (year to year) predictions and on better understanding long-term climatic variation and change. NOAA’s activities include (1) developing and improving models of the atmospheric-oceanic system, (2) comparing models’ simulations to observations and analyses of the processes that most influence climate, (3) simulating the potential climatic effects of increased concentrations of greenhouse gases, and (4) separating the effects of natural climatic variations from the effects of human activities on climate. In addition, NOAA has tried to develop models capable of predicting the seasonal to interannual fluctuations that cause extreme rainfall and other similar disruptions to regional climates. During fiscal year 1994, NOAA supported global general circulation modeling research on 10-year and longer time scales at its Geophysical Fluid Dynamics Laboratory and at four universities and research centers. EPA focuses its modeling research on chemical and environmental interactions within the biosphere. It supports research to improve GCMs’ representations of ecosystems and of the relationship between chemicals and plant and animal life in an area so that the effects of climatic change on the biosphere and of biospheric change on climate can be projected. During fiscal year 1994, EPA funded modeling research at three universities and research centers.
As previously stated, five agencies support research on modeling global climate change. This research—through which models of the global climate are improved, tested, and, in some cases, used to project the future climate and the ways it may change—can be grouped into two broad areas: modeling to predict changes in climate that may occur over decades and modeling to simulate the current climate. The cumulative estimated cost of the five agencies’ modeling activities was approximately $123 million during fiscal years 1992 through 1994, as table IV.1 shows. The Ranking Minority Member of the House Committee on Commerce asked us to review the factors that affect the accuracy of GCMs’ estimates of future climatic changes and determine the costs of federally funded GCMs for fiscal years 1992 through 1994. We conducted our work between September 1994 and April 1995 in accordance with generally accepted government auditing standards. To determine the factors that affect the accuracy of GCMs’ estimates of future climatic changes, we reviewed information that we previously reported in Global Warming: Emission Reductions Possible as Scientific Uncertainties Are Resolved (GAO/RCED-90-58, Sept. 28, 1990). We also met with headquarters officials at DOE, NASA, NSF, NOAA, and EPA and with the Director of the Office of the U.S. Global Change Research Program (USGCRP) to discuss these factors. From these meetings, we obtained various scientific assessments of GCMs’ strengths and limitations. In October 1994, the Subcommittee on Global Change Research held the Forum on Global Change Modeling with modelers from throughout the United States. The intent of the forum was to address requests from the White House Office of Science and Technology Policy and the General Accounting Office to produce a consensus document on issues concerning the use of climate models to inform policy on future climatic changes. This forum, whose participants included agency officials, scientists, and academicians involved in studying the global climate, provided information on the strengths and weaknesses of GCMs, as well as other relevant topics. In addition, we searched four scientific databases to identify additional assessments of the models’ limitations. Throughout our review, we met with the Director of the Office of USGCRP to clarify technical issues associated with the models’ limitations. To identify federal funding for GCMs during fiscal years 1992 through 1994, we obtained cost data by agency from USGCRP. We worked with the Director of the Office of USGCRP to develop an instrument to capture all relevant cost components. We did not independently verify the validity of the cost data. On May 12, 1995, we met with the Director of the Office of USGCRP, the Manager of the Climate Modeling Program at NASA, the Deputy Director of the Office of Global Programs at NOAA, and the Manager of the Global Change Research Program at EPA to obtain their comments on a draft of this report. On May 22, 1995, the Chair of the Subcommittee on Global Change Research provided us with written comments on the draft. These comments integrated the responses of the five agencies included in our review (see app. VI). We have addressed the comments in the text of this report, where appropriate. The following are GAO’s comments on the Subcommittee on Global Change Research’s letter dated May 22, 1995.
1. Under the heading “Factors Limiting the Accuracy of Models’ Estimates,” we discussed some of the successes of GCMs to create a context and provide balance for our discussion of the models’ limitations. Later, under the heading “Improving GCMs’ Estimates,” we discussed the efforts that are currently under way to address these limitations and referred to the agencies’ discussion of such activities in their letter. Under the heading “Agency Comments,” we explained why we did not reproduce the report of the U.S. Global Change Model Forum in this report.
2. We added a footnote on page 3 of the report to better explain USGCRP’s role in coordinating federal research on global climate change.
3. We revised our discussion of the models’ limitations (pp. 8-14 of our draft report) as necessary to address the agencies’ specific comments. We changed the heading “Exclusion of Critical Processes,” cited in the agencies’ comments, to “Some Processes Not Included or Fully Incorporated in Some Models” to better describe the supporting text.
Pursuant to a congressional request, GAO reviewed the accuracy of general circulation models (GCMs) in forecasting global warming trends, focusing on: (1) the factors limiting the accuracy of GCMs’ estimates of future climatic changes; and (2) federal expenditures for GCMs for fiscal years (FY) 1992 through 1994. GAO found that: (1) although GCMs have improved in their ability to predict future climatic changes over the last decade, their estimates are still limited by their incomplete or inaccurate representations of climate-affecting processes and by insufficient computer power; (2) scientists do not fully understand how the climate system responds to potentially important physical, chemical, and biological processes; (3) the lack of computer power requires scientists to use simplified assumptions and structures that increase the uncertainty of the models' predictions; (4) scientists are conducting research to overcome the limitations of the computer models; and (5) five federal agencies spent about $122.6 million on various global modeling projects, which represented about 3 percent of the global change research program's budget for FY 1992 through 1994.
As shown in table 1, DOD spent nearly $124 million on airline tickets that included at least one leg of premium class service during fiscal years 2001 and 2002. However, because DOD did not maintain centralized data on premium class travel, we had to extract these data from Bank of America’s databases of DOD centrally billed account travel, which included over 5.3 million transactions for airline tickets valued at over $2.4 billion. Due to limitations in the information collected on individual transactions, we were unable to determine the amount of premium class travel by military service or the amount of premium class travel used for domestic versus overseas flights. DOD’s premium class air travel accounted for a very small percentage of DOD travel overall—about 1 percent of total DOD airline transactions and 5 percent of total DOD dollars spent on airline travel. However, to put the $124 million that DOD spent on premium class travel in perspective, the amount DOD spent on premium class-related travel during these 2 fiscal years exceeded the total travel and transportation expenses—including airfare, lodging, and meals—spent by each of 12 major agencies covered by the Chief Financial Officers Act of 1990, including the Social Security Administration; the Departments of Energy, Education, Housing and Urban Development, and Labor; and the National Aeronautics and Space Administration. The difference between the price of a premium class ticket and a comparable coach class ticket can range from negligible—particularly if the traveler traveled within Europe—to thousands of dollars. In one instance, a traveler’s first class flight between Washington, D.C., and Los Angeles was 14 times the price of a comparable coach class flight at the government fare, or about $3,000 more. Higher-ranking civilian personnel and military officials accounted for a large part of premium class travel. Based on our statistical sample, we estimated that DOD civilian employees in General Schedule (GS) grades GS-13 through GS-15 (supervisors and managers), Senior Executive Service (SES) members (career senior executives), presidential appointees with Senate confirmation, and senior military officers in grades O-4 and above accounted for almost 50 percent of premium class travel. GAO’s Guide for Evaluating and Testing Controls Over Sensitive Payments considers travel by high-ranking officials, in particular senior-level executives, to be a sensitive payment area because of its susceptibility to abuse or noncompliance with laws and regulations. Control activities occur at all levels and functions of an agency. They include a wide range of diverse activities such as authorizations, reviews, approvals, and the production of records and documentation. For first and business class travel, we tested control activities designed to provide assurance that premium class travel transactions are (1) authorized and (2) justified in accordance with the Federal Travel Regulation (FTR), issued by GSA to implement travel policies for federal civilian employees and others authorized to travel at government expense, and DOD’s travel regulations, including the Joint Federal Travel Regulations (JFTR), which applies to uniformed service members, and the Joint Travel Regulations (JTR), which applies to DOD civilian personnel who are subject to GSA’s travel regulation. These regulations generally require that premium class travel be specifically authorized in advance of travel and only under specific circumstances. (See app.
I for further details of GSA and DOD premium class travel regulations.) For example, although FTR and DOD travel regulations allow premium class travel when the scheduled flight time is in excess of 14 hours, these regulations prohibit the use of premium class accommodations if the traveler has scheduled rest stops. In addition to the FTR and DOD regulations, we also applied the criteria set forth in our internal control standards and sensitive payments guidelines in evaluating the proper authorization of premium class travel. For example, while DOD travel regulations and policies do not address the issue of subordinates authorizing their supervisors’ premium class travel, our internal control standards consider such a policy to be flawed from an independence viewpoint. Therefore, a premium class transaction that was approved by a subordinate would fail the controls over authorization test. Using these guidelines and based on our statistical sample, we estimated that 72 percent of the DOD centrally billed travel transactions containing premium class travel for fiscal years 2001 and 2002 were not properly authorized and that 73 percent were not properly justified. As shown in table 2, an estimated 64 percent of premium class transactions did not contain travel orders that specifically authorized the traveler to fly premium class, and thus the commercial travel office—a travel agency—should not have issued the premium class ticket. Another 6 percent of premium class transactions were related to instances where the travel order authorizing business class was not signed (left blank) or the travel order authorizing first class was not signed by the service secretary or his or her designee, as required by DOD regulations. If the travel order is not signed, or not signed by the individual designated to do so, DOD has no assurance that the substantially higher cost of the premium class tickets was properly reviewed and represented an efficient use of government resources. We also estimated that 2 percent of the premium class transactions involved situations where a subordinate approved a superior’s travel. Although these limited instances do not necessarily indicate the existence of a significant systemic problem, allowing subordinates to approve their supervisors’ premium class travel is synonymous with self-approval and reduces scrutiny of premium class requests. Another internal control weakness identified in the statistical sample was that the justification used for premium class travel was not always provided, not accurate, and/or not complete enough to warrant the additional cost to the government. As previously stated, premium class travel is not an entitlement, and recent changes to DOD regulations state that, in the context of lengthy flights, premium class travel should be used only when exceptional circumstances warrant it and that alternatives should be explored to avoid the additional cost of premium class travel. As shown in table 2, an estimated 72 percent of premium class transactions were not properly authorized and therefore could not have been justified. Two additional transactions in our sample, which were authorized but not justified in accordance with DOD’s criteria, increased our estimate of premium class transactions that were not justified to 73 percent.
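As a rough illustration of how estimates of this kind are produced, the sketch below projects a sample failure rate to the population with an approximate 95 percent interval; the sample size and counts are invented for the example, and GAO's actual sampling design is described in the companion report:

import math

# Hypothetical sample: projecting a failure proportion from a random
# sample to the population, with an approximate 95 percent interval.
SAMPLE_SIZE = 150  # invented for illustration
FAILED = 108       # invented: about 72 percent of the sample

p = FAILED / SAMPLE_SIZE
standard_error = math.sqrt(p * (1 - p) / SAMPLE_SIZE)
low, high = p - 1.96 * standard_error, p + 1.96 * standard_error

print(f"estimated failure rate: {p:.0%}, roughly {low:.0%} to {high:.0%}")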
Considering the significant breakdown in key internal controls, it was not surprising that our audit identified numerous examples of improper premium class travel that cost DOD significantly more than what would have been spent on a coach class ticket. Table 3 illustrates a few of the types of unauthorized and/or unjustified transactions from both our statistical samples and data mining work, along with a comparison between amounts actually paid and the comparable coach fares at that time. These cases, which lacked authorization or adequate justification, illustrate the improper use of premium class travel and the resulting increase in travel costs. For further details on the cases shown in table 3, as well as additional examples of unauthorized and/or unjustified transactions, please refer to the report that we released today on this subject. Our work also included data mining to identify the individuals who traveled premium class most frequently. We identified 28 of the most frequent premium class travelers from the 68,090 premium class transactions during fiscal years 2001 and 2002. All but 1 of the 28 frequent travelers were at least GS-13 civilians or O-4 military, that is, senior DOD personnel. We found that the most frequent travelers were, in most instances, authorized to obtain premium class travel by people at the same or higher levels, with 3 of the 28 failing the authorization test because they or their subordinates authorized their travel orders. However, we determined that many of the transactions were improper because their justification was not supported by the documentation provided or did not adhere to FTR and DOD travel regulations. Some cases involving frequent travelers were questionable because the justification documentation was not adequate to determine whether the transaction met DOD’s criteria. We found that 12 of the 28 frequent premium class travelers justified their more expensive flights with a medical condition. However, we identified several anomalies in the application of medical condition justification, as evidenced by travelers who used both coach and premium class accommodations during flights of similar duration and during the same time period. For example, frequent traveler 1 in table 4 took 14 premium and 31 coach class trips during fiscal years 2001 and 2002. Many of the coach class trips, for example, from Washington, D.C., to Honolulu or cities in California, were similar in duration to premium class trips from Washington, D.C., to Frankfurt or Amsterdam. This may indicate that additional steps should be taken to verify the validity of the medical certification. During testing, an Army official at the Traffic Management Office informed us that his office forwards all medical certifications to the Surgeon General for an opinion before recommending to the Secretary of the Army that approval be granted for first class travel. For further details on the cases shown in table 4, as well as additional examples of travelers who frequently used premium travel, please refer to the report that we released today.
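The data mining step described above is, at its core, a group-and-count over ticket records. The sketch below illustrates the idea with invented field names and records; the actual Bank of America extracts were structured differently:

from collections import Counter

# Hypothetical ticket records; the real analysis ran over roughly 68,000
# premium class transactions extracted from centrally billed account data.
tickets = [
    {"traveler": "A", "cabin": "business"},
    {"traveler": "B", "cabin": "first"},
    {"traveler": "A", "cabin": "coach"},
    {"traveler": "A", "cabin": "business"},
]

premium_counts = Counter(
    t["traveler"] for t in tickets if t["cabin"] in ("first", "business")
)

# most_common(n) surfaces the n most frequent premium class travelers.
for traveler, count in premium_counts.most_common(28):
    print(traveler, count)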
GAO’s Standards for Internal Control states that a positive control environment is the foundation for all other standards. The importance of the “tone at the top,” or the role of management in establishing a strong control environment, cannot be overstated. However, we found that, before we initiated this audit, DOD had not taken actions to encourage a strong internal control environment over premium class travel. Specifically, DOD and the military services did not (1) maintain adequate and accurate premium class travel data, (2) issue adequate policies related to the approval of premium travel, (3) require consistent documentation to justify premium class travel, and (4) perform audits or evaluations of premium class travel or monitor training provided to travelers, authorizing officials, and commercial travel office employees on governmentwide and DOD premium class travel regulations. During the course of our work, DOD updated the JTR and JFTR in April 2003 to articulate more clearly and to make more stringent the circumstances under which premium class travel can be authorized. The FTR requires DOD, along with all other executive and legislative branch agencies, to provide GSA annual reports listing all instances in which the organization approved the use of first class transportation accommodations. We found that the Military Traffic Management Command (MTMC), responsible for tracking DOD’s first class travel, understated DOD’s cost and frequency of first class travel reported to GSA. According to DOD’s first class travel reports submitted to GSA for fiscal years 2001 and 2002, DOD civilian and military personnel took fewer than 1,000 first class flight segments totaling less than $600,000. In contrast, our analysis of the Bank of America airline transaction data indicates that DOD purchased more than 1,240 tickets containing over 2,000 separate segments with first class accommodations. Our analysis also found that these first class tickets cost about $2.9 million, almost 5 times the amount DOD reported to GSA. We found that a number of cities were omitted from DOD’s first class report. For example, while DOD data indicated that no first class flights were taken into Washington, D.C., during fiscal year 2001, Bank of America data identified 88 first class flights into Washington, D.C., during the same time period. We also found that DOD did not obtain or maintain centralized data on premium class travel other than first class, i.e., business class. Consequently, DOD did not know, and was unable to provide us with data related to, the extent of its premium class travel. As mentioned previously, we were able to obtain such data through extensive analysis and extractions of DOD travel card transactions from databases provided by the Bank of America. DOD travelers must follow a complicated array of premium class travel guidance. The applicability of specific regulations depends on whether the traveler is civilian or military. For DOD civilians, GSA’s FTR governs travel and transportation allowances. DOD’s JTR and individual DOD and military service directives, orders, and instructions supplement the FTR. For military personnel, DOD’s JFTR governs travel and transportation allowances. Individual DOD and military service directives, orders, and instructions supplement the JFTR. The executive branch policy on the use of first class travel applicable to the FTR, JTR, and JFTR is found in OMB Bulletin 93-11. When a subordinate organization issues an implementing regulation or guidance, the subordinate organization may make the regulations more stringent, but generally may not relax the rules established by higher-level guidance. Inconsistencies have accumulated within the various premium class travel regulations because DOD did not revise its directives, or require the military services to revise their travel policies or implementing guidance, when DOD modified the JTR or JFTR.
For example, DOD first issued the JTR in 1965 and since then has modified it 450 times through April 2003, including 30 modifications since October 2000. While the JFTR has had fewer modifications—196 through April 2003—the JFTR has also been modified 30 times since October 2000. In contrast, DOD Directive 4500.9, Transportation and Traffic Management, was last revised in 1993, while DOD Directive 4500.56, Use of Government Aircraft and Air Travel, was last updated in 1999. Similarly, the Navy Passenger Transportation Manual was last updated in 1998, Marine Corps Order P4600.7C, Marine Corps Transportation Manual, was last changed in 1992, and while Air Force Instruction 24-101, Passenger Movement, was last updated in 2002, it contains some provisions that are contrary to GAO’s internal control standards and sensitive payments guidelines. Inconsistencies also exist because DOD and its components have elected to authorize the use of premium class travel in different circumstances or have described the authorization to use premium class using different language. For example, DOD Directive 4500.9 grants blanket authority for high-ranking officials to use premium class when traveling overseas on official government business. This policy contradicts and is less stringent than the FTR, which does not cite rank as a condition for obtaining premium class travel. GSA’s FTR authorizes agencies to approve the use of first class or business class accommodations when required by an agency’s mission, but neither the JTR nor the JFTR adopts this authorization. In contrast, DOD’s policy on transportation and traffic management—DOD Directive 4500.9—states that the use of business class on domestic travel may be authorized when necessitated by mission requirements. GSA’s FTR prohibits premium class travel if the traveler is authorized a rest stop en route or a rest period upon arrival at the duty site, even if the scheduled flight time is in excess of 14 hours. While DOD’s JTR and JFTR that were in effect at the time of our audit should have contained the same restriction, they were silent as to whether a rest period upon arrival would exclude a traveler from traveling in premium class. Further, the services’ implementing guidance is inconsistent in its application of the 14-hour rule. Because premium travel is to be used only on an exception basis after all other alternatives have been exhausted, the documentation for authorization and justification should be held to the highest standards to provide reasonable assurance that, in every case, the substantially higher cost of premium travel is warranted. The JTR and JFTR state that approval for premium class travel should be obtained in advance of travel, except in extenuating/emergency circumstances that make authorization impossible, and specify the circumstances under which premium travel is to be permitted. However, we found substantial inconsistencies in the documentation trail, indicating that officials approved premium class travel on the basis of inadequate documentation. In contrast, other federal agencies have issued clear and consistent guidelines related to the documentation of premium class travel. For example, the Department of Agriculture approves the use of premium class accommodations on a case-by-case basis and specifies that premium travel be approved by the under secretary except when frequent travel benefits are used.
The justification must include the specific circumstances relating to the criteria, such as a medical justification from a competent medical authority, which must include a description of the employee’s disability, medical condition, or special need; the approximate duration of the medical condition or special need; and a recommendation of a suitable means of transportation based on the medical condition or special need. The National Institutes of Health (NIH) requires that the traveler, when requesting premium class travel based on a medical condition, detail the nature of the disability or special need on an authorization form for employees with disabilities or other special needs. The authorization form must be signed by both the employee and a competent medical authority. NIH’s policies state that the medical statement should specifically address why it is necessary to use upgraded accommodations. The form also limits the authority to a period of 6 or 12 months from the initial date of approval, depending on the nature of the disability or special need. In the instance of a permanent disability, NIH policy is that authorized use of premium class accommodations is valid for up to 3 years, but that resubmission is necessary to ensure that there continues to be a need for the premium class travel. In general, effective oversight activities would include management review and evaluation of the process for issuing premium class travel and independent evaluations aimed directly at the effectiveness of internal control activities. Our internal control standards state that separate evaluations of controls should depend on the assessment of risks and the effectiveness of ongoing monitoring procedures. As mentioned above, we consider executive travel a high-risk area susceptible to abuse or noncompliance with laws and regulations. However, we found no evidence of any audits or evaluations of premium class travel. The lack of effective oversight and monitoring was another contributing factor to DOD’s and the services’ lack of knowledge of the extent of improper premium class transactions. The lack of oversight was further demonstrated by the fact that travelers, supervisors/managers, and employees at the commercial travel offices (CTO) responsible for issuing airline tickets to the travelers were not adequately informed on governmentwide and DOD travel regulations concerning when premium class travel is or is not to be authorized. Thus, it was not surprising that some DOD travelers and authorizing officials were under the mistaken impression that travel regulations entitled travelers to travel in business class when their flights exceeded 14 hours. These individuals were not aware that the FTR provides that, in order to qualify for business class travel, travelers have to proceed directly to work upon arriving at the duty location. DOD also did not verify whether CTO employees receive training in DOD premium travel regulations. A representative from one commercial travel office informed us that the office issues premium class tickets if premium class was requested on the travel order, even if the justification for obtaining premium class travel was flawed—for example, if the flight was not at least 14 hours.
During the course of our work, in April 2003, DOD updated the JTR and JFTR to articulate more clearly and make more stringent the circumstances under which premium class other than first class travel, that is, business class, is authorized for DOD travelers on flights to and/or from points outside the continental United States when the scheduled flight time exceeds 14 hours. The revised regulations prohibit the use of business class travel when travelers are authorized a “rest period” or an overnight stay upon arrival at their duty station, and state that business class accommodations are not authorized on the return leg of travel. Finally, in its revised regulations, DOD states that, in the context of authorizing business class accommodations for flights scheduled to exceed 14 hours, “business class accommodations must not be common practice” and that such service should be used only in exceptional circumstances. Further, DOD directs order-issuing officials to “consider each request for business class service individually.” We agree with DOD that decisions regarding the use of premium class travel should be made on a case-by-case basis and based on a preference for coach class. The ineffective management and oversight of premium class travel provides another example of why DOD financial management is one of our “high-risk” areas, with the department highly vulnerable to fraud, waste, and abuse. DOD does not have the management controls in place to identify issues such as improper use of premium class travel. As a result, millions of dollars of unnecessary costs are incurred annually. Because premium class travel is substantially more costly than coach travel, it should only be used when absolutely necessary, and the standards for approval and justification must be appropriately high. During our audit, DOD began taking steps to improve its policies and procedures for premium class travel. DOD must build on these improvements and establish strong controls over this sensitive area to ensure that its travel dollars are spent in an economical and efficient manner. Our related report on these issues released today includes recommendations to DOD. Our recommendations address the need to improve internal controls to provide reasonable assurance that authorization and justification for premium class travel are appropriate, monitor the extent of premium class travel, modify policies and procedures to make them consistent with GSA regulations, and issue policies prohibiting subordinates or the travelers themselves from authorizing premium class travel. In oral comments on a draft of this report, DOD officials concurred with our recommendations to resolve the control weaknesses. Mr. Chairman, Members of the Subcommittee, Senator Grassley, and Ms. Schakowsky, this concludes my prepared statement. I would be pleased to answer any questions that you may have. For future contacts regarding this testimony, please contact Gregory D. Kutz at (202) 512-9095, John J. Ryan at (202) 512-9587, or John V. Kelly at (202) 512-6926. Individuals making key contributions to this testimony included Kris Braaten, Beverly Burke, Francine DelVecchio, Aaron Holling, Jeffrey Jacobson, Julie Matta, Sidney H. Schwartz, and Tuyet-Quan Thai. DOD travelers must follow a complicated array of premium class travel guidance. The applicability of specific regulations depends on whether the traveler is civilian or military. For DOD civilians, GSA’s FTR governs travel and transportation allowances. 
DOD’s JTR and individual DOD and military service directives, orders, and instructions supplement the FTR. For military personnel, DOD’s JFTR governs travel and transportation allowances. Individual DOD and military service directives, orders, and instructions supplement the JFTR. The executive branch policy on the use of first class travel applicable to the FTR, JTR, and JFTR is found in OMB Bulletin 93-11. When a subordinate organization issues an implementing regulation or guidance, the subordinate organization may make the regulations more stringent, but generally may not relax the rules established by higher-level guidance. GSA and DOD regulations authorize the use of premium class travel under specific circumstances. The JTR and the JFTR limit the authority to authorize first class travel to the Secretary of Defense, his or her deputy, or other officials as designated by the Secretary of Defense. However, while both the JTR and JFTR provide that the authority to authorize first class travel may be delegated and re-delegated, the regulations specify that the authority must be delegated to “as high an administrative level as practicable to ensure adequate consideration and review of the circumstances necessitating the first class accommodations.” DOD travel regulations also require that authorization for premium class accommodations be made in advance of the actual travel unless extenuating circumstances or emergency situations make advance authorization impossible. DOD regulations also provide that first class accommodations may be used without authorization only when regularly scheduled flights between the authorized origin and destination (including connecting points) provide only first class accommodations. Specifically, the JTR and JFTR state that first class accommodation is authorized only when at least one of the following conditions exists: coach class airline accommodations or premium class other than first class airline accommodations are not reasonably available; the traveler is so handicapped or otherwise physically impaired that other accommodations cannot be used, and such condition is substantiated by competent medical authority; or exceptional security circumstances require such travel. The JTR and JFTR allow the transportation officer, in conjunction with the official who issued the travel order, to approve premium class travel other than first class. 
In accordance with the FTR, DOD restricts premium class travel to the following eight circumstances:
regularly scheduled flights between origin and destination provide only premium class accommodations, and this is certified on the travel voucher;
coach class is not available in time to accomplish the purpose of the official travel, which is so urgent it cannot be postponed;
premium class travel is necessary to accommodate the traveler’s disability or other physical impairment, and the condition is substantiated in writing by competent medical authority;
premium class travel is needed for security purposes or because exceptional circumstances make its use essential to the successful performance of the mission;
coach class accommodations on authorized/approved foreign carriers do not provide adequate sanitation or meet health standards;
premium class accommodations would result in overall savings to the government because of subsistence costs, overtime, or lost productive time that would be incurred while awaiting coach class accommodations;
transportation is paid in full by a nonfederal source; or
travel is to or from a destination outside the continental United States, and the scheduled flight time (including stopovers) is in excess of 14 hours; however, if premium class accommodations are authorized, a rest stop is prohibited.
Both GSA and DOD regulations allow a traveler to upgrade to premium class other than first class travel at personal expense or through redemption of frequent traveler benefits. GSA also identified agency mission as one of the criteria for premium class travel. However, agency mission is not a DOD criterion for obtaining premium class travel.
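An auditor could encode a few of these conditions as a screening rule for flagging tickets that lack an apparent justification. The sketch below covers only three of the eight circumstances and is illustrative only, not a restatement of the regulations:

def flag_for_review(scheduled_hours, rest_stop_authorized,
                    medical_certification_on_file, only_premium_available):
    # Returns True when none of the checked justifications applies,
    # meaning the premium class ticket deserves a closer look.
    if only_premium_available:
        return False
    if medical_certification_on_file:
        return False
    if scheduled_hours > 14 and not rest_stop_authorized:
        return False  # 14-hour rule; an authorized rest stop voids it
    return True

# A 16-hour flight with an authorized rest stop fails the 14-hour
# justification, so the ticket is flagged for review.
print(flag_for_review(16, rest_stop_authorized=True,
                      medical_certification_on_file=False,
                      only_premium_available=False))  # prints: True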
Long-standing financial management problems, coupled with ineffective oversight and management of the Department of Defense's (DOD) travel card program, which GAO has previously reported on, have led to concerns about DOD's use of first and business class airfares. At the request of the Subcommittee on Investigations, Senate Committee on Governmental Affairs, Senator Grassley, and Representative Schakowsky, GAO performed work to identify problems in DOD's controls over premium class travel. This testimony focuses on (1) the extent of DOD premium class travel, (2) the effectiveness of key internal control activities and examples of improper premium class travel resulting from internal control breakdowns, and (3) DOD's control environment over premium class travel. In a companion report being issued today, GAO made numerous recommendations--that DOD concurred with--to strengthen key internal control activities and improve the overall control environment. Breakdowns in internal controls and a weak control environment resulted in a significant level of improper premium class travel and millions of dollars of unnecessary costs being incurred annually. Based on extensive analysis of records obtained from DOD's credit card issuer--Bank of America--GAO found that for fiscal years 2001 and 2002, DOD spent almost $124 million on about 68,000 premium class tickets that included at least one leg of premium class service, primarily business class. To put the $124 million into perspective, it exceeded the total travel expenses--including airfare, lodging, and meals--spent by each of 12 major agencies covered by the Chief Financial Officers Act. The price difference between a premium class ticket and a coach class ticket ranged from a few dollars to thousands of dollars. Based on statistical sample testing, GAO estimated that 72 percent of DOD's fiscal year 2001 and 2002 premium class travel was not properly authorized, and that 73 percent was not properly justified. GAO estimated that senior civilian and military employees accounted for almost 50 percent of premium class travel. Further, GAO's data mining showed that 27 of the 28 most frequent premium class travelers were senior DOD officials. Lack of oversight and a weak overall control environment characterized DOD's management of premium class travel. DOD and the military services (1) did not have accurate and complete data on the extent of premium class travel, (2) issued inadequate policies on premium class travel that were inconsistent with government travel regulations and with each other, (3) did not issue guidance on how to document the authorization and justification of premium class travel, and (4) performed little or no monitoring of this travel. During the course of GAO's audit, DOD began updating its travel regulations to more clearly articulate and to make more stringent the circumstances under which premium class travel can be authorized.
Private banking has been broadly defined as financial and related services provided to wealthy clients. It is difficult to measure precisely how extensive private banking is in the United States, partly because the area has not been clearly defined and partly because financial institutions do not consistently capture or publicly report information on their private banking activities. We do know, however, that domestic and foreign banks operating in the United States have been increasing their private banking activities and their reliance on income from private banking. The target market for private banking—individuals with high net worth—is also growing and becoming more sophisticated with regard to product preferences and risk appetites. Private banking drew national attention when an earlier money laundering-related case resulted in one of the largest monetary penalties ever imposed on a bank. More recent investigations of private bankers at Citibank and BankBoston continue to keep private banking in the forefront of public attention. Such cases, which can involve the illicit transfer of millions of dollars, underscore both the prominence of private banking and its potential vulnerability to money laundering. Federal banking regulators may review banks' efforts to prevent or detect money laundering in their private banking activities during examinations, including recent examinations focused specifically on private banking activities. During these examinations, regulators focus on a bank's compliance program; internal controls; and, in particular, its know-your-customer (KYC) policies. Regulators instruct their examiners to determine whether banks have implemented sound KYC policies in general and to ensure that these policies extend to their private banking activities. Until recently, U.S. regulators were attempting to codify KYC requirements as uniform regulations. However, the proposed KYC regulation, which was published for comment in December 1998, was met with an overwhelming public response that raised concerns about the government's scrutiny of personal banking accounts. In the face of these concerns, U.S. regulators have since withdrawn the proposed regulations. Nevertheless, regulators we interviewed for this statement told us that, during the course of examinations, they continue to verify that banks have prudent banking policies, including KYC policies, that ensure compliance with the Bank Secrecy Act. Although regulatory efforts to establish uniform KYC requirements have stopped, Congress continues to look for ways to reinforce current anti-money-laundering laws and, more specifically, to promote due diligence in customer banking relationships. For example, the Chairman of the House Committee on Banking and Financial Services recently introduced legislation that would, among other things, require financial institutions that open or maintain a U.S. account for a non-publicly-traded foreign entity to maintain a record of identity for each beneficial owner of the account. The legislation would also prohibit U.S. depository institutions from maintaining banking relationships with banks that are not licensed to provide services in their home countries. The growing importance of private banking over the last several years led the Federal Reserve Bank of New York (FRBNY) to undertake a special initiative focusing on private banking that disclosed a number of key weaknesses in selected institutions' internal controls for detecting or preventing money laundering.
In 1996 and 1997, FRBNY reviewed private banking activities at about 40 domestic and foreign banking institutions in the New York district. During the course of these reviews, examiners focused on assessing each bank's ability to recognize and manage money laundering risks associated with inadequate knowledge of its clients' personal and business backgrounds, their sources of wealth, and their use of their private banking accounts. FRBNY officials explained to us that most of the banks reviewed had adequate anti-money-laundering programs for their private banking activities, although a few had outdated programs that left them vulnerable to money laundering. Deficiencies identified in the private banking area primarily involved poor internal controls, such as insufficient documentation and inadequate due diligence standards. In a systemwide study conducted during 1998, the Federal Reserve assessed the risk management practices at seven banks with private banking activities. The study found that internal controls and oversight practices over private banking activities were generally strong at banks that focused on high-end domestic clients, while similar controls and oversight practices were seriously weak at banks that focused on higher-risk Latin American and Caribbean clients. In the latter part of 1997, the Office of the Comptroller of the Currency (OCC) began targeting national banks' private banking activities based on law enforcement leads or on the banks' activities meeting OCC's high-risk criteria. A primary focus of these reviews has been the banks' implementation of sound KYC policies and procedures. In these reviews, OCC targeted 10 high-risk national banks for expanded Bank Secrecy Act examinations, three of which focused on the banks' private banking activities. Of these three, OCC found that only one bank had diligently developed processes to manage the risks associated with anti-money-laundering and KYC issues, while the anti-money-laundering processes of the remaining two banks were classified as weak or needing improvement. A second major area for our work was regulatory efforts to oversee offshore private banking activities, including the types of procedures regulators use and the deficiencies they have identified during examinations. Federal banking regulators and law enforcement officials have raised concerns about offshore private banking activities and their potential to be the private banking "soft spot" for money laundering. Although banking regulators believe that customers generally use offshore entities to establish or maintain private banking accounts for legitimate reasons, they are concerned that this practice may also serve to camouflage money laundering and other illegal acts. Offshore entities, including private investment companies and offshore trusts, provide customers with a high degree of confidentiality and anonymity while offering such other benefits as tax advantages, limited legal liability, and ease of transfer. Detecting or preventing money laundering by offshore entities can pose special difficulties because documentation identifying the individual or group that controls these offshore entities and their U.S. private banking accounts (referred to as their "beneficial owners") is frequently maintained in the offshore jurisdiction rather than in the United States. Regulators recognize that the use of offshore entities to establish or maintain U.S. private banking accounts tends to obscure the account holders' true identities.
Consequently, they instruct their examiners to look for specific KYC procedures that enable banks to identify and profile the beneficial owners of these offshore entities. In the course of examinations, examiners may test the adequacy of beneficial-owner documentation maintained in the United States. At the time of our earlier review in 1998, with the exception of FRBNY, we found no evidence that examiners had attempted to examine the documentation that banks maintain in offshore secrecy jurisdictions. FRBNY regarded such testing as a way to induce banks to develop or improve their systems for maintaining appropriately detailed information on the beneficial owners of offshore entities that maintain U.S. accounts. Other Federal Reserve and OCC examiners we contacted in 1998 expressed different views about accessing such documentation during examinations. Some examiners, for example, said that they do not see a need to examine offshore documents if they are confident about the bank's commitment to combating money laundering. Since that time, according to a Federal Reserve official, its examiners have routinely attempted to examine documents maintained in offshore jurisdictions. Offshore branches are extensions of U.S. banks and are subject to supervision by U.S. regulators, primarily the Federal Reserve or OCC, as well as by host countries. However, such branches are generally not subject to this country's Bank Secrecy Act. For this reason, U.S. banking regulators do not attempt to determine whether offshore branches are in compliance with specific anti-money-laundering provisions contained in the Bank Secrecy Act, such as the one requiring that suspicious transactions be reported to U.S. authorities. Instead of monitoring formal compliance, U.S. banking regulators try to identify what efforts the branches are making to combat money laundering and to determine whether the banks' corporate KYC policies are being applied to activities, such as private banking activities, that the offshore branches may engage in. Although examiners are able to review the written policies and procedures being used in these branches, they must rely primarily on the banks' internal audit functions to verify that the procedures are actually being implemented in offshore branches where U.S. regulators may be precluded from conducting on-site examinations. They may also rely on external audits, but are less prone to do so because external audits tend to focus on financial, rather than anti-money-laundering, issues. Identifying the beneficial owners of these offshore entities and maintaining such information in clients' U.S. files, or having the ability to bring it on-shore in a reasonable amount of time, promotes sound private banking practices, according to the Federal Reserve. Our review in 1998 of FRBNY and OCC examinations found that examiners identified a number of general private banking deficiencies that also pertained to the banks' offshore private banking activities. Two such deficiencies were inadequate client profiles and weak management information systems. For example, examiners found that some banks' client profiles contained little or no documentation on the client's background, source of wealth, or expected account activity, or on client contacts and visits by bank representatives. Examiners also found that some banks' management information systems did not track client activity or did not allow bankers to systematically examine all accounts related to a given client.
Both of these deficiencies make it difficult for banks to monitor clients' accounts for unusual or suspicious activity, according to the banking regulators. At the time of our review in 1998, we noted that most banks with deficiencies identified during FRBNY's private banking initiative had started to take corrective actions to address these deficiencies. For example, during follow-up examinations, examiners found that banks had started to make progress on improving client profiles. Some bank officials we interviewed during this assignment expressed concerns that securities brokers and dealers are not subject to the same regulations covering suspicious activity reports or to the same regulatory reviews of KYC policies that banks are subject to. They indicated that this inconsistency creates an "uneven playing field" that they felt was unfair, particularly since brokers and dealers are engaged in private banking activities similar to those of the banks themselves. Officials from the Securities and Exchange Commission and Treasury's Financial Crimes Enforcement Network have indicated that they have been working together since 1997 to develop regulations for brokers and dealers regarding suspicious activity reports. As of October 1999, however, such regulations had not yet been issued. The third major area for our work was barriers to regulators' efforts to oversee offshore banking activities in general. We found that secrecy laws in many offshore jurisdictions represent key barriers to U.S. oversight of offshore banking activities. According to U.S. and international agencies and organizations, all of the 20 offshore jurisdictions we reviewed have secrecy laws that protect the privacy of individual account owners, and 16 of them impose criminal sanctions for breaking those laws. While secrecy laws are intended to preserve the privacy of bank customers, they also restrict U.S. regulators from accessing individual account information and often prevent regulators from conducting on-site examinations at U.S. bank branches in offshore jurisdictions. In our earlier work in 1998, we reviewed nine jurisdictions in depth because of their private banking activities. Updated information on these nine jurisdictions showed that five would allow U.S. regulators to conduct on-site examinations of banking institutions in their jurisdictions and that only two of these five would provide some access to individual bank account information. Each of the jurisdictions had secrecy laws to protect the privacy of individual account owners. However, some jurisdictions provided for an exception to their secrecy laws when criminal investigations were involved. We were told that these jurisdictions had established judicial processes through which U.S. and other foreign law enforcement officials could obtain access to individual bank account or customer information. However, U.S. law enforcement officials we contacted expressed concerns about the difficulty they have in obtaining information from offshore secrecy jurisdictions, including those with established judicial processes. They noted, for example, that it can take an inordinate amount of time to obtain information requested through mutual legal assistance treaties.
Details on the 20 jurisdictions are presented in attachment I. U.S. banking regulators are attempting to work around barriers created by offshore secrecy laws, but limitations hamper their efforts. For example, in jurisdictions where regulators are precluded from conducting on-site examinations, they must rely primarily on banks' internal audits to determine how well KYC policies and procedures are being applied at offshore branches of U.S. banks. Our 1998 review of examination reports, however, found several instances in which examiners noted that the bank's internal audit of the offshore branch inadequately covered KYC issues pertaining to its private banking activities at these branches. Regulators' reliance on internal audits for overseeing offshore branches is also impeded by their inability to review banks' internal audit workpapers in some offshore jurisdictions that require that such workpapers be kept in the jurisdiction. Examiners explained that, without access to supporting audit workpapers, it is difficult to verify that audit programs were followed and to assess the general quality of internal audits of offshore branches. Also, without access to bank documents or internal audit workpapers, it is difficult to explain to bank management the basis for regulatory concerns about particular activities conducted in their offshore branches. All but 1 of the 20 offshore jurisdictions we reviewed were engaged in some type of anti-money-laundering activities. Twelve of the 20 jurisdictions were members of either the Basle Committee on Banking Supervision or the Offshore Group of Banking Supervisors, two international groups formed to foster cooperation among banking supervisory authorities. Both of these groups place special emphasis on the on-site monitoring of banks to ensure, for example, that they have effective KYC policies. Sixteen of the 20 offshore jurisdictions were also members of the Financial Action Task Force, the Caribbean Financial Action Task Force, or the Council of Europe Select Committee on Money Laundering, three international task forces created to develop and promote anti-money-laundering policies. (See attachment II.) Membership in any of these three task forces implies that the jurisdiction has stated its intention to work towards the task force's principles and recommendations, including those related to establishing KYC policies and policies on reporting suspicious transactions. It is important to point out that membership in these task forces does not necessarily mean that these principles and recommendations are adequately being followed by the jurisdiction's financial institutions or monitored by its government authorities. The State Department's International Narcotics Control Strategy Report (INCSR) for 1998, for example, identifies 11 of the 20 offshore jurisdictions as having weak or nonexistent regulatory supervisory structures. Attachment III provides information on the 20 jurisdictions' anti-money-laundering practices and the State Department's classification of the extent to which the jurisdictions may be vulnerable to money laundering. Several challenging questions confront U.S. policymakers and others involved in ongoing domestic and international efforts to combat money laundering through offshore banking activities. Some of these questions are specific to banks' offshore private banking activities; others apply to offshore banking in general.
Despite the recent anti-money-laundering activities of some key offshore jurisdictions, one central question is whether secrecy laws will continue to represent barriers to U.S. and other foreign regulators. A number of related questions follow from this question. For example, do the offshore jurisdictions that have enacted new money laundering laws have the regulatory infrastructure and adequate regulatory and law enforcement personnel to enforce the new laws? Another key question with important implications is how effective are the efforts of international task forces and supervisory groups to combat money laundering. A related question is what needs to be done to ensure that offshore jurisdictions give sufficient emphasis to preventing and detecting money laundering. An equally important, if narrower, question that grows out of the GAO work described here is what needs to be done to ensure that offshore jurisdictions allow the U.S. and other foreign governments adequate access to information needed for supervisory and law enforcement purposes. Other questions remain, related to the domestic oversight of banking and money laundering—especially with regard to the adequacy of current examination procedures, including knowing your customer. The National Money Laundering Strategy for 1999 marks a new stage in the government's fight against money laundering. A major goal is to enhance regulatory oversight while making it cost-effective, with measurable results. We believe such a goal is worth achieving.
[Attachments I through III are tables whose columns could not be reproduced here. For each of the 20 offshore jurisdictions, they indicate whether U.S. law enforcement and judicial authorities are allowed access to individual customer information; membership in the Financial Action Task Force (FATF), the Caribbean Financial Action Task Force (CFATF), and the Council of Europe Select Committee on Money Laundering; whether the jurisdiction has KYC policies or guidelines for banks; whether it requires banks to report suspicious transactions; whether it has corporate secrecy laws that include criminal sanctions; and whether INCSR describes the jurisdiction's supervisory structure as weak or nonexistent. Table notes state that criminal sanctions exist for unauthorized disclosures, with "safe harbor" provided for specific authorized disclosures to certain entities; that Bahrain, while not a member country of FATF, is a member of the Gulf Cooperation Council, one of two regional organizations that are members of FATF; and that the Channel Islands information is for Guernsey, one of the four islands known as the Channel Islands.]
Pursuant to a congressional request, GAO discussed money laundering in relation to private banking and highlighted some regulatory issues related to the vulnerability of selected offshore jurisdictions to money laundering, focusing on: (1) regulators' oversight of private banking in general; (2) regulators' oversight of private banking in selected offshore jurisdictions; (3) barriers that have hampered regulators' oversight of offshore banking; and (4) future challenges that confront regulators' efforts to combat money laundering in offshore jurisdictions. GAO noted that: (1) federal banking regulators have overseen private banking through examinations that, among other things, focus on banks' "know your customer" policies; (2) these policies enable banks to understand the kinds of transactions a particular customer is likely to engage in and to identify any unusual or suspicious transactions; (3) federal banking regulators have examination procedures that cover private banking activities conducted by banks operating in the United States; (4) in cases that involve private banking activities conducted by branches of U.S. banks operating in offshore jurisdictions, examiners rely primarily on banks' internal audit functions; (5) GAO found that the key barriers to U.S. regulators' oversight of offshore banking activities are secrecy laws that restrict access to banking information or that prohibit on-site examinations of U.S. bank branches in offshore jurisdictions; and (6) an important challenge that confronts efforts to combat money laundering is the extent to which such secrecy laws will continue to be barriers to U.S. and foreign regulators.
The Loma Prieta earthquake, measuring 7.1 on the Richter scale, struck northern California on October 17, 1989, causing many deaths and widespread property damage. It also severely damaged several major transportation structures in the Bay Area, including the Embarcadero Freeway, the Bay Bridge, and the Cypress Viaduct. To help the area cope with the earthquake's impact on transportation, the Congress appropriated $1 billion in federal transportation emergency relief assistance in fiscal year 1990 and an additional $315 million in fiscal year 1994. California allocated over three-fourths of this assistance to the Cypress Viaduct project. The emergency relief program, administered by the Federal Highway Administration (FHWA), provides financial assistance to states and local highway agencies to help repair federal-aid highways seriously damaged during natural disasters—hurricanes, earthquakes, volcanoes, and floods—or by catastrophic failures. As a kind of insurance against catastrophe, the program provides states with funding above and beyond their regular federal highway funding. The program's funds are not subject to a state's yearly funding limit and thus pay for projects that do not have to compete against other needs within the state. By law, FHWA can provide a state with up to $100 million in emergency relief funding for each natural disaster found eligible for funding. However, the Congress has passed special legislation lifting this cap for specific disasters. The criteria for administering emergency relief funds are set out in 23 C.F.R. section 668. In addition, FHWA's Emergency Relief Manual provides FHWA's division offices, located in each state, with the operating procedures for implementing the program. These offices process state highway agencies' applications for funding and make decisions on the eligibility of specific projects. During the first 180 days following a disaster, the program covers up to 100 percent of emergency repairs to restore essential highway traffic service and protect remaining facilities. In addition, for the Cypress Viaduct replacement project, the Congress made all repairs during the first 180 days 100 percent eligible for emergency relief funding. For permanent restoration work or repairs after the first 180 days, the federal share of costs varies with the type of federal-aid highway. For projects on the interstate system, the federal share generally is 90 percent of eligible costs. The Cypress Viaduct project reestablishes a link in the Bay Area's freeway system, which connects the East Bay area (including Oakland) with San Francisco via the Bay Bridge and with Interstate 80 to the north. The project replaces the 1.5-mile connection that was lost during the earthquake with roughly 5 miles of new freeway segments, providing direct access to both the Bay Bridge and Interstate 80. It also includes several new interchanges and improves access to the Port of Oakland. It realigns the original freeway to the west, taking it out of a residential neighborhood and into active rail yards (see fig. 1). The project comprises seven separate major construction projects, each covering a specific segment of the work and ranging in value from $22 million to $162 million. (App. I shows the location, scope, and status of each segment of the project.) Although the emergency relief program is designed to assist states in quickly repairing highways to predisaster conditions, several factors have slowed the replacement of the Cypress Viaduct.
Part of the delay in constructing the project has resulted from public opposition to replacing the old, double-decked structure in its original location. In response to public concerns, the California Department of Transportation (Caltrans) identified several alternative alignments that it studied in a 2-year environmental review. In 1991, Caltrans and FHWA decided to replace the destroyed 1.5-mile structure, which had bisected a residential area, with a new 5-mile structure running through active rail yards. Further delays occurred because Caltrans needed additional time to negotiate right-of-way issues with the railroads and because constructing the highway amid the rail yards created logistical problems. As of March 1996, FHWA had obligated nearly $1 billion to the project, or about 35 percent of all the emergency relief obligations FHWA has made nationwide since 1989. These obligations also represent over 95 percent of the emergency relief funding for the project. To date, Caltrans has awarded contracts for all of the major construction projects. As table 1 shows, the seven projects are at various stages: one project is complete, five are under way, and one is just beginning. According to Caltrans officials, taken as a whole, the project is about one-third complete. As of March 1996, Caltrans estimated that it would not complete the entire project until 1998. However, by offering contractors incentives for early completion, Caltrans expects to complete one major portion of the project in 1997. Completing this portion will allow traffic using Interstate 880 access to the Bay Bridge, thus reestablishing a critical link lost during the earthquake. Although it will take Caltrans about 9 years from the time of the earthquake to complete the entire project, most of the delays occurred before construction began. Immediately following the earthquake, FHWA and Caltrans planned to replace the Cypress Viaduct as it existed prior to the earthquake, with a new double-decked structure. However, immediate replacement of the viaduct was not possible because the original structure had divided an Oakland neighborhood, and local residents objected to replacing the structure as it had been before the earthquake. For example, numerous residents objected to rebuilding in the pre-earthquake location because they said doing so would cause pollution and congestion and reduce growth. In addition, in December 1989 the Oakland City Council passed a resolution opposing any construction in the viaduct's original corridor, stating that rebuilding would continue to divide the community and hinder its economic and social growth. Consequently, Caltrans had to identify several new alternative alignments for the structure. Because of the size and complexity of the alternative alignments proposed, Caltrans had to assess their impact in an environmental impact statement (EIS), which it prepared in 1990-91. In January 1992, FHWA finalized the environmental review by issuing a record of decision on the project. When, as a result of the environmental review, Caltrans and FHWA selected an alignment that shifted the project out of the residential neighborhood and into the area of active rail yards, Caltrans had to undertake extensive and protracted negotiations with the Southern Pacific and Santa Fe railroads to work out the details of removing and relocating the rail yards.
Between 1992 and 1994, as it developed the final engineering plans for major segments of the project, Caltrans and the railroads were reaching agreement on how to relocate the existing rail yards, what type of track and railroad standards would be needed, and what the total cost of relocating the rail yards would be. In early 1994, while these negotiations continued, Caltrans began constructing two major segments of the project. However, Caltrans has periodically had to halt construction to allow trains to pass through the site. Project B, in particular, has experienced construction delays because of the need to accommodate rail traffic. According to Caltrans officials, project B, which is currently 18 percent complete, is the only major project that is experiencing problems with its schedule. However, they expect project B, as well as all of the other major projects, to meet the estimated completion dates shown in table 1. As of March 1996, Caltrans estimated that the total cost of replacing the Cypress Viaduct will be $1.13 billion. Of this amount, $1.01 billion, or about 90 percent, will be federally financed through the emergency relief program; California will finance the remainder. This estimate is significantly higher—as much as $824 million—than the previous estimates documented in a 1989 post-earthquake damage assessment and in the EIS completed in 1991. The increases have occurred because of significant changes in the project’s scope and refinements from the earlier estimates. The current estimate is also $210 million higher than the baseline cost estimates prepared during 1990-91, prior to FHWA’s approval of the current project’s design. Most of these increases have occurred because Caltrans incurred additional costs for construction, traffic management, and relocation of the rail yards once construction began. Furthermore, although Caltrans does not anticipate further cost growth, an increase could occur because major construction projects worth about $560 million are still in the early stages. On October 30, 1989, FHWA engineers, following the agency’s Emergency Relief Manual, inspected the collapsed Cypress Viaduct. On the basis of this inspection, they prepared a damage assessment, estimating that replacing the destroyed structure along its predisaster alignment would cost $306 million. This estimate was a conceptual estimate based on the inspection rather than on detailed engineering. It included the costs for items such as removing the old structure, managing traffic, building a new structure, and engineering. In the estimate, FHWA recognized that more detailed engineering would be required to refine the project’s estimated costs. However, after preparing this initial estimate, FHWA and Caltrans did not complete a detailed estimate for rebuilding the Cypress Viaduct as it existed prior to the earthquake. Instead, as noted earlier, public opposition to rebuilding the structure at its original location led Caltrans to prepare an EIS and ultimately to select a new alignment for the project. In the EIS, the costs for this alternative were estimated at $695 million, or about $400 million more than the estimate based on the damage assessment, primarily because of additional costs for acquiring rights-of-way and relocating the rail yards. Furthermore, the estimate in the EIS included only the capital costs—for construction, rights-of-way, and relocation of the rail yards—and excluded the costs for engineering and traffic management. 
As a result, it did not provide a comprehensive initial, or baseline, cost estimate for the project. According to Caltrans and FHWA officials, the estimate in the EIS did not include the noncapital costs because there was no requirement to present them. To arrive at a complete baseline cost estimate, we worked with Caltrans to identify other cost estimates that it had developed while preparing the EIS, including estimates for engineering, traffic management, and several other items. By adding these cost estimates to the estimate in the EIS of $695 million, we calculated a baseline cost estimate of $919 million for the project. Caltrans’ current estimate of $1.13 billion is about $210 million higher than the baseline estimate of $919 million. Table 2 identifies the cost increases from the baseline estimate by project element. As table 2 shows, construction is the major element contributing to the cost increases. The increases are primarily due to the unplanned costs of controlling and disposing of contaminated soil and groundwater (approximately $40 million), additional requirements for seismic strengthening (approximately $35 million to $40 million), and provisions for contract incentives to speed up construction (approximately $24 million). Other major increases resulted because Caltrans underestimated the costs of managing traffic and relocating the rail yards. For example, Caltrans underestimated by about $22 million the costs of replacing the existing track and structures with equivalent facilities built to the rail industry’s current standards. In addition, after completing the EIS, Caltrans agreed to compensate the city of Oakland with a package of benefits, known as the performance agreement, to mitigate some of the financial impact of losing the Cypress Viaduct. The final cost could increase beyond the current estimate of $1.13 billion because major projects worth about $560 million are still in the early stages of construction. In addition, cost increases on the project have contributed to a shortfall in the emergency relief available for other damage caused by the Loma Prieta earthquake. According to Caltrans, it will be seeking an additional $112.5 million in emergency relief funding for three projects in San Francisco County that are eligible for funding through the emergency relief program. FHWA’s regulations allow the use of emergency relief funds for betterments. According to the regulations, such betterments are eligible for emergency relief funding only when they are clearly economically justified to prevent future recurring damage. FHWA officials told us they had approved funding to significantly realign the Cypress Viaduct without making such a finding because they did not consider the relocation of the project to be a betterment within the terms of their regulations. As a result of this interpretation, the agency based its funding decisions, in part, on guidance in its Emergency Relief Manual—guidance that provides inconsistent information on how to address improvements recommended as a result of an environmental review. While the design approved by FHWA may be a reasonable approach for addressing environmental concerns, the decision to fund the entire project with emergency relief funds raises questions about the appropriateness of using emergency relief funds to fully pay for future projects in similar circumstances. The emergency relief program is aimed at helping states quickly repair damage to federal-aid highways resulting from disasters. 
The program establishes limits on the use of the funds and precludes using the funds to correct non-disaster-related deficiencies or to improve replacement highway facilities beyond meeting the current standards. The emergency relief regulations do not address situations in which projects entail an environmental review. In addition, FHWA’s Emergency Relief Manual states that environmental reviews will not be a major factor for most emergency relief projects and that most emergency relief projects will be exempt from such reviews. The replacement of the Cypress Viaduct highlights a dilemma between quickly replacing a damaged facility using emergency relief funds and addressing environmental considerations. When FHWA officials, following the Emergency Relief Manual, assessed the damage to the Cypress Viaduct shortly after the earthquake and prepared the initial cost estimate of $306 million to rebuild it along the same alignment, their decision was consistent with the goals of the program—quickly replacing the destroyed facility and restoring predisaster traffic service. However, when community opposition and environmental concerns precipitated a call for alternatives, FHWA did not approve the relocation on the basis of the emergency relief regulations, which allow for relocations only when they are clearly economically justified to prevent recurring damage. Instead, FHWA approved the relocation on the basis of the results of the EIS, without preparing an economic justification. FHWA said that because the relocation was not a betterment, the emergency relief regulations, which place limits on funding improvements to or changes in the character of a destroyed facility, were not applicable. Instead, the agency relied on its Emergency Relief Manual to determine which of the project’s costs should be paid with emergency relief funds. However, the manual provides vague and inconsistent guidance on how to administer the program, particularly when a more expensive alternative is selected as a result of an environmental review. For example, one section of the manual states that betterments, including relocations, must be quickly justified without extensive public hearings or environmental, historical, right-of-way, or other encumbrances. However, the manual also states that betterments resulting from environmental or permit requirements beyond the control of the highway agency are eligible for emergency funds. Therefore, even if FHWA had determined that relocating the structure was a betterment, it would have faced inconsistent guidance in determining whether to fully fund the project with emergency relief funds. These and other inconsistencies confront FHWA officials when they are determining if emergency relief funds can be used to pay for highway improvements that enhance the postdisaster transportation network rather than return it to its predisaster condition. (App. II cites sections in FHWA’s manual that present inconsistent information.) According to FHWA officials, given the severe destruction and trauma of the disaster and the inconsistencies in the emergency relief guidance, it was difficult for them to make decisions about eligibility on the basis of hard and fast rules. Therefore, the officials used maximum discretion to ensure that the project was fully funded. Currently, the Department of Transportation (DOT) is contemplating changes to the emergency relief regulations (23 C.F.R. section 668). 
DOT’s notice of proposed rulemaking has focused on expanding the eligibility of the program by, for example, permitting a state to use emergency relief funds to repair roadways damaged as a result of overusing the existing roadways to reach and repair a disaster site. The proposal does not clarify the appropriate limits of the emergency relief program or address the inconsistencies in the current guidance concerning environmental reviews. The project to replace the Cypress Viaduct has taken longer and cost more to complete than initially estimated because local opposition, environmental requirements, and railroad relocation activities have delayed construction and expanded the scope of the project. Although the project is nearly one-third complete and most of the emergency relief funds have been obligated, the project can still offer some valuable lessons about FHWA’s regulations and guidance for administering the emergency relief program. We acknowledge the need to replace the Cypress Viaduct in a manner that addressed public concerns, and we do not take issue with the decision to shift the project from its predisaster location to its new location. However, we question whether the improvements and costs resulting from the significant relocation and changes in scope should have been funded through the emergency relief program rather than the traditional transportation programs. Under its regulations, FHWA could have required a baseline cost estimate for replacing the Cypress Viaduct along its original alignment and limited the use of emergency relief funds to those replacement costs. FHWA’s funding decisions raise questions about whether the agency’s regulations and guidance establish clear limits on funding projects through the emergency relief program, particularly when an environmental review recommends enhancements to a facility beyond its predisaster condition. As DOT rethinks its emergency relief program, it has an opportunity to clarify what costs are eligible for funding through the emergency relief program rather than the traditional federal-aid highway programs. Answering this question is important because emergency relief funds are provided to states above and beyond their annual highway allocations and are not subject to the states’ limitations on obligations. Clearly laying out the appropriate uses of emergency relief funding in situations involving environmental reviews would help define the limits of the program, enabling FHWA officials to better control the costs of major and complex emergency relief projects. We recommend that the Secretary of Transportation direct the Administrator, Federal Highway Administration, to modify the emergency relief guidance to (1) make the agency’s emergency relief regulations and manual consistent and (2) clearly define what costs can be funded through the emergency relief program, particularly when an environmental review recommends improvements or changes to the features of a facility from its predisaster condition in a manner that adds costs and risks to the project. We provided a draft of this report to DOT for review and met with DOT and FHWA officials, including the Associate Administrator for Program Development and the Acting Chief of the Federal Aid and Design Division, to discuss their comments on the draft. The FHWA officials reemphasized the importance of the environmental review process in their funding decisions. 
They also disagreed with our characterization of the project as a betterment and, therefore, disagreed with our conclusion about their funding decision. The FHWA officials explained that the Cypress Viaduct was damaged beyond repair by the Loma Prieta earthquake and that a replacement facility was eligible for emergency relief funding; however, because of the catastrophic failure of the original double-decked structure and reservations about the appropriate seismic design for a replacement structure, construction of a double-decked facility was neither practical nor feasible. In addition, these officials commented that a new double-decked structure would not have complied with the requirements of the environmental review process. Accordingly, these officials told us, various alternatives that provided functions and service comparable to those of the destroyed facility were developed and assessed through that process. In the view of these officials, replacing the facility as originally constructed was not a viable option and because the facility now under construction is comparable in service and function to the destroyed facility, the new structure is not a betterment. As a result, they disagreed with our conclusion that emergency relief funding should have been limited to the cost of replacing the destroyed facility in its original location. Finally, the officials indicated that it was not within FHWA’s statutory authority to cap emergency relief funding, as we suggested, at the amount of the estimated cost for replacing the facility in its original location. As we noted in our conclusions, we acknowledge the need to replace the Cypress Viaduct in a manner that addressed the environmental and public concerns, and we do not take issue with the decision to shift the facility to its new location. However, we believe that significantly altering the original alignment—a major relocation—is a betterment because (1) the emergency relief regulations describe a betterment as “relocation, replacement, upgrading or other added features not existing prior to the disaster”; (2) the scope of the replacement project changed the character of the facility by expanding the destroyed 1.5-mile structure to 5 miles of new highway structure; and (3) the new freeway segment adds several interchanges that improve access to local streets and port facilities. Although FHWA stated that it could not limit emergency relief funds, we believe that the existing regulations provided the agency with sufficient authority to limit the use of emergency relief funding on this replacement project. The existing regulations state that “emergency relief reimbursement is limited to the cost of a new facility to current design standards of comparable capacity and character to the destroyed facility.” Following the regulations, FHWA could have estimated the costs of replacing the Cypress Viaduct with a facility built to current design standards along the original alignment and limited the use of emergency relief funding to those costs. The state would then have had to use its federal-aid highway apportionments to cover any costs not funded through the emergency relief program. Finally, FHWA did not comment on our recommendation that it modify its emergency relief guidance by making the regulations and manual consistent and clearly defining what costs can be funded through the emergency relief program in cases involving environmental reviews. 
We believe that the existing regulations and manual contain inconsistencies, particularly in addressing environmental review requirements. If this issue is not clarified, questions will remain as to whether emergency relief funds or federal-aid highway funds are the appropriate means of funding highway improvements that are recommended by an environmental review and that either correct conditions not related to the disaster or enhance a facility. The FHWA officials also suggested technical and editorial changes to the report. Where appropriate, we incorporated these changes into the report. We performed our review from October 1995 through April 1996 in accordance with generally accepted government auditing standards. To accomplish our objectives, we gathered schedule and cost information from FHWA and Caltrans and assessed FHWA's procedures for implementing the emergency relief program. Appendix III contains more detailed information on our scope and methodology. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested congressional committees; the Secretary of Transportation; and the Administrator, Federal Highway Administration. We will also make copies available to others upon request. Please call me at (202) 512-2834 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. During our review of the Federal Highway Administration's (FHWA) Emergency Relief Manual, which FHWA officials state is their principal source of guidance for administering the emergency relief program, we noted several sections on the criteria for funding eligibility that were inconsistent with other sections related to the environmental review process. In this appendix, we present those sections of the manual that are inconsistent with other sections when applied to the Cypress Viaduct project. These inconsistencies highlight the question as to whether emergency relief funds are the appropriate means of funding highway improvements that are recommended by an environmental review and that either correct conditions not related to the disaster or enhance a facility. "Emergency Relief (ER) funds are not intended to replace other Federal-aid, State, or local funds for new construction to increase capacity, correct non-disaster related deficiencies, or otherwise improve highway facilities." "ER participation may be prorated to the cost of a comparable facility when the proposed replacement project exceeds the capacity or character of the destroyed facility." "A betterment is defined as any additional feature, upgrading, or change in the capacity or character of the facility from its predisaster condition. Betterments are generally not eligible for ER funding unless justified on the basis of economy, suitability, and engineering feasibility and reasonable assurance of preventing future similar damage. Betterments should be obviously and quickly justifiable without extensive public hearing, environmental, historical, right-of-way, or other encumbrances.
The justification must weigh the costs of the betterment against the probability of future recurring eligible damage and repair costs." "Where relocation is necessary, each case must be considered carefully to determine what part of the relocation is justified for construction with the participation of ER funds." "Extensive relocation of a replacement bridge is an ineligible betterment and ER participation will be normally limited to the cost of the structure and a reasonable approach length." "Excessive delays in completing the environmental process may jeopardize an otherwise reasonable project by removing it from an eligible category under 23 U.S.C. 125. In other words, if a situation persists with no corrective action for an extended period of time, it may be unreasonable to continue to classify it as a disaster related emergency, but rather as a long-term need to be funded with regular Federal-aid." "In cases where a categorical exclusion classification is not appropriate, an environmental assessment or environmental impact statement must be prepared." "Betterments resulting from environmental or permit requirements beyond the control of the highway agency are eligible for ER funds if these betterments are normally required when the Agency makes repairs of a similar nature in its own work." For information on the current status of the project, its estimated completion date, and the reasons for any delays, we interviewed officials at FHWA and the California Department of Transportation (Caltrans), performed in-depth file reviews, and reviewed Caltrans' construction status reports. To identify the current estimated cost of the project and the reasons for any growth in costs, we interviewed officials at FHWA and Caltrans. We also conducted detailed file reviews at Caltrans' headquarters and FHWA's division office in California to identify the construction projects that constitute the overall Cypress project and to document their current estimated costs. We further obtained and reviewed cost information from FHWA's financial system to independently validate Caltrans' cost data. Where we found discrepancies, we conducted follow-up interviews with project managers and budget staff to reconcile the numbers. To identify any growth in the cost, we obtained baseline cost estimates prepared for the project and compared them with the current cost estimates. Working with Caltrans and FHWA officials, we categorized the cost growth by the specific dimensions of the project. To obtain and assess information on how FHWA has carried out its oversight responsibilities under the emergency relief program, we conducted interviews with FHWA headquarters personnel to understand the program's requirements. We also reviewed legislation establishing the program, the program's regulations, and FHWA's Emergency Relief Manual to obtain details on the program's requirements. In addition, we obtained and reviewed documents such as the environmental impact statement and FHWA's work authorizations to document FHWA's decisions about eligibility. We then compared the guidance and regulations with the actions FHWA took on the project. Michael G. Burros
Pursuant to a congressional request, GAO reviewed the status of the replacement of the Cypress Viaduct in Oakland, California, focusing on the: (1) expected completion date and reasons for construction delays; (2) estimated cost of the project and reasons for any cost growth; and (3) guidance governing the Federal Highway Administration's (FHWA) use of emergency relief funds. GAO found that: (1) the California Department of Transportation (Caltrans) has completed one-third of the Cypress Viaduct's construction and expects to have the project completed by 1998; (2) the replacement of the Cypress Viaduct has been hampered by public opposition, environmental concerns, and railroad negotiations; (3) Caltrans estimates total project costs will be $1.13 billion, 90 percent of which is federally financed through an emergency relief program; (4) construction costs are higher than originally planned because Caltrans underestimated the costs of constructing the freeway, managing traffic, and relocating rail yards; (5) Caltrans risks incurring additional costs because the project is in the early construction stages; (6) FHWA approved funding for the Cypress Viaduct without determining whether it was economically justified to prevent recurring damage and without placing limits on the use of emergency relief funds; (7) FHWA based its funding decision on the emergency relief manual and on whether improvements could be funded using emergency relief funds or traditional transportation funds; and (8) the alternative that FHWA approved resulted in more extensive construction, higher costs, and greater risk of delay than would have occurred in rebuilding the structure as it existed prior to the earthquake.
The federal budget is the primary financial document of the government. The Congress and the American people rely on it to frame their understanding of significant choices about the role of the federal government and to provide them the information necessary to make informed decisions about individual programs and the collective fiscal policy of the nation. In practice, the budget serves multiple functions—it is used to plan and control resources, assess and guide fiscal policy, measure borrowing needs, and communicate the government’s policies and priorities. All of these uses are important, but they can lead to conflicting criteria for judging a budget. For example, the budget should be understandable to policymakers and the public yet comprehensive enough to fully inform resource allocation decisions. Since no one method of budget reporting can fully satisfy all uses, choosing a reporting method ultimately reflects some prioritization of the various uses—and a judgment about the quality of information and what an acceptable degree of uncertainty might be. When I refer to reporting methods, I mean how things are measured in the budget. Spending can be measured on different bases, such as cash, accrual, or obligation. The basis of budget reporting influences decision-making because the way transactions are recorded affects our understanding of the relative cost of different activities, the way critical choices are framed, and how the deficit (or surplus) is measured. For a simple example, suppose the government extends insurance for which it collects $1 million in premiums in the first year but expects total losses of $3 million in future years. If the primary objective of the budget is to track cash flows, then it is appropriate to show the $1 million cash inflow as a reduction in the deficit or increase in the surplus in the first year and to show the payouts as outlays when they occur. But if we want the budget to show the full cost of a decision, then it might be more appropriate to record a net cost of the present value of $2 million in the year the insurance is extended. Both numbers provide useful information and can be tracked over time. However, they provide very different information to policymakers and may lead to different decisions. Although a comprehensive understanding of this hypothetical program requires knowing both numbers, generally only one has been the primary basis upon which budget decisions are made. Historically, government outlays and receipts have been reported on a cash basis, i.e., receipts are recorded when received and expenditures are recorded when paid, without regard to the period in which the taxes or fees were assessed or the costs incurred. Although this has the advantage of reflecting the cash borrowing needs of the government, over the years, analysts and researchers have raised concerns that cash-based budgeting does not adequately reflect either the cost of some programs or the timing of their impact on economic behavior. As a general principle, decision-making is best informed if the government recognizes the costs of its commitments at the time it makes them. For many programs, cash-based budgeting accomplishes this. And, as noted earlier, because it reflects the government’s actual borrowing needs (if in a deficit situation), it is a good proxy for the government’s effect on credit markets. In general, then, the arguments for cash-based budgeting are convincing and deviations should not be lightly undertaken. 
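To make the hypothetical insurance example above concrete, the following sketch computes the two measures side by side. The three-year claim schedule and the discount rates are illustrative assumptions introduced here; the testimony's round numbers implicitly treat the $3 million in expected losses as a present value, which corresponds to the zero-rate case below.

```python
# A minimal sketch of the hypothetical insurance program: $1 million in
# premiums collected up front against $3 million in expected future losses.

def present_value(cash_flows, rate):
    """Discount (year, amount) cash flows back to year zero."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

premiums = 1_000_000
claims = [(1, 1_000_000), (2, 1_000_000), (3, 1_000_000)]  # assumed payout schedule

# Cash basis: year zero shows a $1 million net inflow, so the program
# appears to reduce the deficit; the losses surface only in later years.
print(f"Cash basis, year 0: +${premiums:,} net inflow")

# Accrual basis: recognize the net present-value cost when the commitment
# is made. With no discounting this is exactly the $2 million in the text.
for rate in (0.0, 0.05):
    net_cost = present_value(claims, rate) - premiums
    print(f"Accrual cost at {rate:.0%} discount rate: ${net_cost:,.0f}")
```

Either way the cash flows are identical over time; what changes is the year in which the budget tells policymakers the program has a cost.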
The cash-based budget, however, often provides incomplete or misleading information about cost where cash flows to and from the government span many budget periods, or where the government obligates itself to make payments or incur losses well into the future. This is true for federal credit, insurance, and retirement programs. The Federal Credit Reform Act of 1990 addressed this mismatch between budget reporting and cost for credit programs. This act changed the budgetary treatment of credit programs by requiring that the budget reflect the programs’ costs to the government on a net present value basis. This means that, for example, rather than recording a cash outlay for the full amount of a direct loan, the budget records an estimate of what will ultimately be lost, taking into account repayments, defaults, interest subsidies, and any other cash flows on a net present value basis. Such accrual-based budgeting is also being done for the government’s contribution to pensions for civilian employees covered under the Federal Employees Retirement System and for military personnel. Accrual-based reporting recognizes the cost of transactions or events when they occur regardless of when cash flows take place. As I will discuss, cash-based budgeting is misleading for insurance programs. Federal insurance programs are diverse, covering a wide range of risks that the private sector has traditionally been unable or unwilling to cover. The risks include natural disasters under the flood and crop insurance programs and bank and employer bankruptcies under the deposit and pension insurance programs. The federal government also provides life insurance for veterans and federal employees, political risk insurance for overseas investment activities, and insurance against war-related risks and adverse reactions to vaccines. The face value of all of this insurance—the total amount of insurance outstanding—is around $5 trillion, but this dollar amount overstates the potential cost to the government because it is very unlikely that it would ever face claims from all outstanding insurance. The fiscal year 1997 Consolidated Financial Statements of the United States Government reported a $14.6 billion liability for insurance programs—payments already owed by the government because of past events. The financial statement records liabilities incurred for events that have already happened. But budgets are forward-looking documents. Decisionmakers need to make decisions about future commitments as they debate them—before insurance is extended. Therefore, a different measure may be more appropriate—the expected net cost to the government of the risk assumed by extending the insurance commitment (i.e., the “missing premium”), which is the difference between the full premium that would be charged based on expected losses and the actual premium to be charged the insured. At the request of the Chairman, we reported last September on the shortcomings of cash-based budgeting for federal insurance programs and the potential use of accrual concepts in the budget for these programs. In general, cash-based budgeting for insurance programs presents several problems. Its focus on single-period cash flows can obscure the program’s cost to the government and thus may (1) distort the information and incentives presented to policymakers, (2) skew the recognition of the program’s economic impact, and (3) cause fluctuations in the deficit unrelated to long-term fiscal balance.
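Before turning to those problems in detail, the credit reform treatment noted above is worth a concrete illustration, since it is the template later proposed for insurance. A minimal sketch, in which the loan terms, default-adjusted repayments, and discount rate are all hypothetical:

```python
# Minimal sketch of credit-reform-style subsidy cost: the budget records
# the net present value of what the government expects to lose on a
# direct loan, not the gross cash disbursement. Figures are hypothetical.

def subsidy_cost(disbursement, expected_repayments, discount_rate):
    """Expected loss on a direct loan, on a net present value basis.

    expected_repayments maps year -> expected cash received, already
    reduced for anticipated defaults and net of other cash flows.
    """
    pv_inflows = sum(cash / (1 + discount_rate) ** year
                     for year, cash in expected_repayments.items())
    return disbursement - pv_inflows

# A $100 loan expected to return $55 in year 1 and $50 in year 2 after
# defaults, discounted at 5 percent:
cost = subsidy_cost(100.0, {1: 55.0, 2: 50.0}, discount_rate=0.05)
print(f"budget records a subsidy cost of {cost:.2f}, not a 100.00 outlay")
```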
With the current cash-based reporting, premiums for insurance programs are recorded in the budget when collected and outlays are reported when claims are paid. This focus on annual cash flows generally does not adequately reflect the government’s cost for federal insurance programs because the time between the extension of the insurance, the receipt of premiums and other collections, the occurrence of an insured event, and the payment of claims may extend over several budget periods. As a result, the government’s cost may be understated in years that a program’s current premium and other collections exceed current payments and overstated in years that current claim payments exceed current collections. These distortions occur even if the collections and payments for an insurance commitment are equal over time. This is similar to the problem with loans prior to the Credit Reform Act. The budget showed direct loans as costly in the year they were extended but then as profitable in future years when repayments exceeded new loans being made. The reasons for the mismatch between insurance premium collections and claim payments vary across the programs. In the case of political risk insurance extended by the Overseas Private Investment Corporation, the length of the government’s commitment can run for up to 20 years. Similarly, benefit payments for pension plans assumed by the Pension Benefit Guaranty Corporation (PBGC) may not be made for years or even decades after a plan is terminated. This is because participants generally are not eligible to receive pension benefits until they reach age 65 and, once eligible, they receive the benefits for many years. In other programs, temporary transactions or the erratic occurrence of insured events cause the mismatch between collections and payments and distort the insurance programs’ apparent costs in the cash-based budget. For example, during the savings and loan crisis, large temporary cash flows from the acquisition and sale of assets from failed institutions resulted in the government’s cost for deposit insurance never being clearly presented in the annual budget. In years when assets were acquired, the full amount of cash required was recorded as an outlay; later, when the assets were sold, the proceeds were recorded as income. Thus, the cash-based budget overstated the cost of deposit insurance in some years and understated it in others. The inability of the cash-based budget to capture the cost of the government’s insurance commitments at the time decisions are made has significant implications. Cash-based budgeting for federal insurance programs may provide neither the information nor incentives necessary to signal emerging problems, make adequate cost comparisons, control costs, or ensure the availability of resources to pay future claims. The shortcomings of cash-based budgeting for federal insurance programs became quite apparent during the 1980s and early 1990s as the condition of the two largest programs—deposit insurance and pension insurance—deteriorated while the budget continued to show positive cash flows and did not even recognize failures that had actually happened. Although we and others raised concerns at the time about the government’s rapidly accruing deposit insurance costs, the cash-based budget was not effective in signaling policymakers of the emerging problem because it did not show a cost until institutions were closed and depositors paid.
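The break-even case mentioned above is worth making explicit: even when collections and payments are equal over the life of a commitment, cash reporting shows the program reducing the deficit early and raising it late. A small sketch with hypothetical yearly figures:

```python
# Sketch of the timing distortion described above: premiums and claims
# are equal over the life of the commitment, yet cash-based reporting
# shows apparent profits early and apparent costs late. The yearly
# figures are hypothetical.

premiums = [100, 100, 100, 100, 100]   # collected in years 1-5
claims   = [  0,   0,  50, 150, 300]   # paid in years 1-5

for year, (p, c) in enumerate(zip(premiums, claims), start=1):
    print(f"year {year}: cash-basis effect on the deficit = {c - p:+d}")

# The commitment breaks even overall (500 in, 500 out), but no single
# year's cash flow reveals that.
assert sum(premiums) == sum(claims)
```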
This delayed recognition obscured the program’s, as well as the government’s, underlying fiscal condition and limited the usefulness of the budget process as a means for the Congress to assess the problem. At approximately the same time, PBGC was facing growing losses and sponsors of insured pension plans were coming under severe financial stress, yet the cash-based budget showed large and growing cash income for the program. While the financial condition of PBGC has improved considerably in recent years, the Office of Management and Budget reported in the President’s fiscal year 1999 budget that the government’s expected liability for current and future pension plan terminations is approximately $30 billion. Because the cash-based budget delays recognition of emerging problems, it may not provide policymakers with information or incentives to address potential funding shortfalls before claim payments come due. Policymakers may not be alerted to the need to address programmatic design issues because, in most cases, the budget does not encourage them to consider the future costs of federal insurance commitments. Thus, reforms aimed at reducing costs may be delayed. In most cases, by the time costs are recorded in the budget, policymakers do not have time to ensure that adequate resources are accumulated to pay for them or to take actions to control them. The late budget recognition of these costs can reduce the number of viable options available to policymakers, ultimately increasing the cost to the government. For example, the National Flood Insurance Program provides subsidized coverage without explicitly recognizing its potential cost to the government. Under current policy, the Congress has authorized the Federal Insurance Administration to subsidize a significant portion (approximately 38 percent) of the total policies in force without providing annual appropriations to cover these subsidies. Although the flood insurance program has been self-supporting since the mid-1980s—either paying claims from premiums or borrowing and repaying funds to the Treasury—the program has not been able to establish sufficient reserves to cover catastrophic losses and, therefore, cannot be considered actuarially sound. In some cases, the cash-based budget not only fails to provide incentives to control costs, but also may create a disincentive for cost control. Deposit insurance is a key example. Many analysts believe that the cash-based budget treatment of deposit insurance exacerbated the savings and loan crisis by creating a disincentive to close failed institutions. Since costs were not recognized in the budget until cash payments were made, leaving insolvent institutions open avoided recording outlays in the budget and raising the annual deficit but ultimately increased the total cost to the government. Cash-based budgeting also may not be a very accurate gauge of the economic impact of federal insurance programs. Although discerning the economic impact of federal insurance programs can be difficult, private economic behavior generally is affected when the government commits to providing insurance coverage. It is at that point that insured individuals or organizations alter their behavior in response to the insurance. However, as I noted above, the cash-based budget records costs not at that point but rather when payments are made to claimants. These payments generally have little or no macroeconomic effect because they do not increase the wealth or incomes of the insured.
Rather, they are merely intended to restore the insured to his or her approximate financial position prior to the insured event. The cash flow patterns of some federal insurance programs can result in fluctuations in the federal deficit unrelated to the budget’s long-term fiscal balance. As noted earlier, uneven cash flows may result both from the erratic nature of some insured risks and from temporary cash flows, as in the case of the acquisition and subsequent sale of assets from failed savings and loan institutions. In addition, insurance programs with long-term commitments, such as pension and life insurance programs, can distort the budget’s long-term fiscal balance by reducing the aggregate deficit in years that premium income exceeds payments without recognizing the programs’ expected costs. While annual cash flows for federal insurance programs generally do not provide complete information for resource allocation and fiscal policy, the magnitude of the problem and the implications for budget decision-making vary across the insurance programs reviewed. For example, the implications of the shortcomings of the current budget treatment appear greatest for the largest programs, pension and deposit insurance. Because of their large size, incomplete or misleading information about their cost could distort resource allocation and fiscal policy significantly, making the limitations of cash-based budgeting more pronounced than for other federal insurance programs. In addition, the limitations of cash-based budgeting are most apparent when the government’s commitment extends over a long period of time, as with pension insurance, or when the insured events are infrequent or catastrophic in nature, such as severe flooding or losses at depository institutions. Conversely, the implications for budget decision-making may be less severe if relatively frequent claim payments prompt policymakers to consider the financial condition and funding needs of the program. The use of accrual-based budgeting for federal insurance programs has the potential to overcome a number of the deficiencies of cash-based budgeting—if the estimating problems I discuss below can be dealt with. Accrual-based reporting recognizes transactions or events when they occur regardless of when cash flows take place. An important feature of accrual-based reporting is the matching of expenses and revenues whenever it is reasonable and practicable to do so. In contrast to cash-based reporting, accrual reporting recognizes the cost for future insurance claim payments when the insurance is extended and provides a mechanism for establishing reserves to pay those costs. Thus, the use of accrual concepts in the budget has the potential to overcome the time lag between the extension of an insurance commitment, collection of premiums, and payment of claims that currently distorts the government’s cost for these programs on an annual cash flow basis. The use of forward-looking cost measures for federal insurance programs could improve budget reporting. As with the approach taken for credit programs, accrual-based reporting for insurance programs recognizes the cost of the government’s commitment when the decision is made to provide the insurance, regardless of when cash flows occur. For federal insurance programs, the key information is whether premiums over the long term will be sufficient to pay for covered losses and, if not, what the net cost to the government will be.
The cost of the risk assumed by the government is the difference between the full risk premium, based on the expected cost of losses inherent in the insurance commitment, and the premium charged to the insured (the missing premium). Earlier recognition of the cost of the government’s insurance commitments under a risk-assumed accrual-based budgeting approach would (1) allow for more accurate cost comparisons with other programs, (2) provide an opportunity to control costs before the government is committed to making payments, (3) build budget reserves for future claims, and (4) better capture the timing and magnitude of the impact of the government’s actions on private economic behavior. It might or might not change the premium charged—that is a separate policy decision. Rather, better information on cost would mean that decisions would be better informed. A crucial component in the effective implementation of accrual-based budgeting for federal insurance programs is the ability to generate reasonable, unbiased estimates of the risk assumed by the federal government. Although the risk-assumed concept is relatively straightforward, generating estimates of these costs is complex and varies significantly across insurance programs. While in some cases, such as life insurance, generating risk-assumed estimates may not be problematic, in most cases the difficulties may be considerably greater than those currently faced for some loan programs under credit reform. For insurance, the accuracy of estimated future claims is determined by the extent to which the probability of all potential outcomes can be determined. Unfortunately, probabilities are not known for certain for most activities more complex than the toss of a fair coin. However, for activities in which data on actual outcomes exist, like the length of a human life, the underlying probabilities can be estimated. When the probabilities of future events can be inferred, estimates are said to be made under the condition of risk and the risk undertaken by the insurer can be measured. However, when underlying conditions are not fully understood, estimates are said to be made under uncertainty. This is the case for most federal insurance programs due to the nature of the risks insured, program modifications, and other changes in conditions that affect potential losses. Lack of sufficient historical data for some federal insurance programs also constrains risk assessment. While private insurers generally rely on historical data on losses and claim costs to assess risk, data on the occurrence of insured events over sufficiently long periods under similar conditions are generally not available for federal insurance programs. Frequent program modifications as well as fundamental changes in the activities insured further reduce the predictive value of available data and complicate risk estimation. These factors, which limit the ability to predict losses, together with the potential for catastrophic losses, have been cited as preventing the development of commercial insurance markets for risks covered by federal insurance programs. Many federal insurance programs cover complex, case-specific, or catastrophic risks that the private sector has historically been unwilling or unable to cover. As a result, private sector comparisons are generally unavailable to aid in the risk estimation process. Thus, the development and acceptance of risk assessment methodologies for individual insurance programs vary considerably.
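As a minimal sketch of the missing premium computation just defined, assuming a hypothetical loss distribution and a hypothetical subsidized premium (in practice, as discussed above, estimating these probabilities is the hard part):

```python
# Sketch of the "missing premium": the full risk premium is the expected
# loss implied by the insurance commitment; the missing premium is the
# gap between it and what the insured is actually charged. The loss
# distribution and charged premium below are hypothetical.

loss_outcomes = [           # (probability, loss if the event occurs)
    (0.90,       0.0),      # no insured event
    (0.08,  50_000.0),      # moderate claim
    (0.02, 500_000.0),      # catastrophic claim
]

full_risk_premium = sum(p * loss for p, loss in loss_outcomes)  # 14,000
premium_charged = 5_000.0   # assumed statutory/subsidized rate

missing_premium = full_risk_premium - premium_charged
print(f"full risk premium: {full_risk_premium:,.0f}")
print(f"missing premium (cost of risk assumed): {missing_premium:,.0f}")
```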
For some programs, the development of risk-assumed estimates will require refining and adapting available risk assessment models while, for other programs, new methodologies may have to be developed. The degree of difficulty in developing estimates and the uncertainty surrounding these estimates will likely be greatest for programs—such as deposit and pension insurance—that require modeling complex interactions between highly uncertain macroeconomic variables and human behavior. Even after years of research, significant debate and estimation disparity exist in the modeling for these programs. This means that, in practical terms, attempts to improve cost recognition occur on a continuum since insurance programs and insurable events vary significantly. The extent of improvement when moving from cash-based to accrual-based reporting would vary across programs depending on (1) the size and length of the government’s commitment, (2) the nature of the insured risks, and (3) the extent to which costs are currently captured in the budget. The diversity of federal insurance programs also implies that the period used for estimating risk assumed, the complexity of the models, and the policy responses to this new information will vary. In our report on budgeting for insurance programs, we looked at several different approaches to incorporating risk-assumed estimates into the budget, ranging from the addition of supplemental reporting to incorporation directly into budget authority, outlays, and the deficit. We concluded that although the potential for improved information argued for a risk-assumed approach, the analytic and implementation issues argued for beginning with supplemental information. I will describe the three approaches we explored and then discuss our conclusion. Supplemental approach: Under this approach, accrual-based cost measures would be included as supplemental information in the budget documents. Ideally, the risk-assumed estimates would be reported annually in a standard format along with the cash-based estimates. Showing the two together would highlight the risk-assumed cost estimates at the time budget decisions are made and also increase the likelihood that serious work on improving these estimates would continue. This approach has some advantages, particularly that it would allow time to test and improve estimation methodologies and increase the comfort level of users before considering whether to move to a more comprehensive approach. It would highlight the differences in the type of information provided on a cash basis versus an accrual basis without changing the reporting basis of total budget authority, net outlays, or the budget deficit or surplus. The disadvantage of the supplemental reporting approach is that it may not have a significant effect on the budget decision-making process because the cost information would not directly affect the budget totals and allocations to congressional committees. Therefore, if this approach is selected, it would be important to also create an incentive to improve cost estimates and risk assessment methodologies. For example, demonstrated congressional interest and stated intentions to move toward greater integration into the budget after a period of evaluation might help ensure that agencies and the Office of Management and Budget actively pursue improvements.
Budget authority approach: Under this approach, accrual-based cost measures—the full cost of the risk assumed by the government—would be included in budget authority for the insurance program account and in the aggregate budget totals. Net outlays—and hence the budget deficit or surplus—would not change. Budget authority would be obligated when an insurance commitment was made and would be held as an interest-earning reserve. Future claims would be paid from the reserve. A key advantage of this approach is that it would provide earlier recognition of insurance costs directly in the budget while preserving cash-based reporting for net outlays and the budget results. This would incorporate cost estimates directly into the budget debate without potentially subjecting outlays and the deficit or surplus to the uncertainty of the risk-assumed estimates or changing the nature of the outlay and deficit/surplus measure. It might also focus attention on improving the estimates since they would be included in one of the key budget numbers. There are problems with this approach, however. Since the estimates would not be reflected in the deficit or surplus—the numbers that receive the most attention and scrutiny—it is unclear how much more effect this approach would have on the budget decision-making process than the supplemental information approach. In addition, the impact of this approach would be limited by the fact that most insurance programs are mandatory and thus any budget authority needed is automatically provided. In our report, we discuss a variation of this approach that would increase its impact. For mandatory insurance programs, a discretionary account could be created to record the government’s subsidy cost. An appropriation to that account could be required to cover the subsidy costs in the year the insurance is extended, unless alternative actions were taken to reduce the government’s cost, such as increasing program collections or reducing future program costs. Since the discretionary appropriation would be subject to Budget Enforcement Act caps, decisionmakers would have an incentive to reduce the government’s costs. However, such a change in budgeting would also fundamentally change the nature of most federal insurance programs and, by changing the locus of decisions to the annual appropriation process, might change program operations. Outlay approach: Under this approach, accrual-based cost measures would be incorporated into both budget authority and net outlays for the insurance program account and in the budget totals. Thus, the reported deficit or surplus would reflect the risk-assumed estimate at the time the insurance is extended. Since the government’s insurance programs generally provide a subsidy, the deficit would be larger (or the surplus smaller) than when reported on a cash basis, which could prompt action to address the causes of the increased outlays. Without fundamentally changing the nature of most insurance programs, the outlay approach is the most comprehensive of the three approaches and has the greatest potential to achieve many of the conceptual benefits of accrual-based budgeting. It would recognize the government’s full cost when budget decisions are being made, permitting more fully informed resource allocation decisions. Since the cost is recognized in the budget’s overall results—the deficit or surplus—incentives for managing costs may be improved.
Also, recognizing the costs at the time the insurance commitments are made would better reflect their fiscal effects. Conceptually, this approach has the appeal of extending the treatment currently used for credit programs to insurance. However, it is important to recognize that developing estimates of the “missing premium” is much more difficult than developing subsidy estimates for credit programs. The uncertainty surrounding the estimates of the risk assumed presents a major hurdle to implementing accrual budgeting for insurance programs. Risk-assumed estimates for most insurance programs are either currently unavailable or not fully accepted. Even if they become more accepted, the Congress and the President would need to be comfortable with the fact that recognizing the risk-assumed estimate in outlays would mean that any reported deficit would depart further from representing the borrowing needs of the government. Choosing among the three approaches I have presented is further complicated by the fact that the relative implementation difficulties—and the benefits achieved—vary across federal insurance programs. The key implementation issue that I discussed earlier is whether reasonable, unbiased, risk-assumed cost estimates can be developed. The programs for which the risk-assumed estimates are perhaps most difficult to make—deposit and pension insurance—are also the ones for which having the estimates would potentially make the most difference in budget decision-making. While supplemental reporting of risk-assumed estimates would allow time to evaluate the feasibility and desirability of moving to a more comprehensive accrual-based budgeting approach for all insurance programs, the Congress and the President could also consider whether it would be reasonable to phase implementation by type of insurance program over time. If the latter approach were chosen, life, flood, and crop insurance programs could be the starting points because they have more established methodologies for setting risk-related premium rates. The methodology for life insurance is well established in actuarial science. For flood and crop insurance, some modifications and refinements to existing methodologies and other implementation challenges should be expected. Beyond generating estimates, there are other challenges that must be addressed, such as the increased uncertainty accrual-based estimates would inject into the budget. For example, while one of the major benefits of accrual-based budgeting is the recognition of the cost of future insurance claims when programmatic and funding decisions are being made, this recognition is dependent on estimates, which are in turn dependent upon many economic, behavioral, and environmental variables. There will always be uncertainty in the reported accrual-based estimates. However, uncertainty in the estimation of insurance program costs should be evaluated in terms of the direction and magnitude of the estimation errors. For budgeting purposes, decisionmakers probably would be better served by information that is approximately correct on an accrual basis than by cash-based numbers that may be exactly correct but misleading. That said, the estimation uncertainty will make periodic evaluation of the risk estimation methodologies used to generate the estimates crucial.
Other challenges to be addressed include how to establish and protect loss reserves and how to handle reestimates, funding shortfalls, previously accumulated program deficits, and administrative costs. To support current and future resource allocation decisions and be useful in the formulation of fiscal policy, the federal budget needs to be a forward-looking document that enables and encourages users to consider the future consequences of current decisions. The potential benefits of an accrual-based budgeting approach for federal insurance programs warrant continued effort in the development of risk-assumed cost estimates. The complexity of the issues involved and the need to build agency capacity to generate such estimates suggest that it is not feasible to integrate accrual-based costs directly into the budget at this time. Supplemental reporting of these estimates in the budget over a number of years could help policymakers understand the extent and nature of the estimation uncertainty and permit an evaluation of the desirability and feasibility of adopting a more comprehensive accrual-based approach. Supplemental reporting of risk-assumed cost estimates in the budget has several attractive features. It would allow time to (1) develop and refine estimation methodologies, (2) assess the reliability of risk-assumed estimates, (3) formulate cost-effective reporting procedures and requirements, (4) evaluate the feasibility of a more comprehensive accrual-based budgeting approach, and (5) gain experience and confidence in risk-assumed estimates. At the same time, the Congress and the executive branch will have had several years of experience with credit reform, which can help inform their efforts to apply accrual-based budgeting to insurance. During this period, policymakers should continue to draw on information provided in audited financial statements. If the risk-assumed estimates develop sufficiently so that their use in the budget will not introduce an unacceptable level of uncertainty, policymakers could consider incorporating risk-assumed estimates directly into the budget. While directly incorporating them in both budget authority and outlays would have the greatest impact on the incentives provided to decisionmakers, it would also significantly increase reporting complexity and introduce new uncertainty in reported budget data. Thus, caution is called for in taking steps that move beyond supplemental reporting of risk-assumed estimates. One way to approach the incorporation of risk-assumed estimates in the budget is to start with programs that already have established methodologies for setting risk-related premium rates, such as life, flood, and crop insurance. By drawing attention to the need to change the budget treatment of insurance programs, this task force is moving the process in the right direction. As I have noted on other occasions, action and effort are usually devoted to areas on which light is shined. Mr. Chairman, this concludes my written statement. I would be happy to answer any questions you or your colleagues may have.
GAO discussed: (1) current budget reporting and accrual-based reporting; and (2) accrual budgeting and its specific application for insurance programs. GAO noted that: (1) the cash-based budget often provides incomplete or misleading information about cost where cash flows to and from the government span many budget periods, or where the government obligates itself to make payments or incur losses well into the future; (2) the use of accrual-based budgeting for federal insurance programs has the potential to overcome a number of the deficiencies of cash-based budgeting--if estimating problems can be dealt with; (3) the use of accrual concepts in the budget has the potential to overcome the time lag between the extension of an insurance commitment, collection of premiums, and payment of claims that currently distorts the government's cost for these programs on an annual cash flow basis; (4) accrual-based reporting for insurance programs recognizes the cost of the government's commitment when the decision is made to provide insurance, regardless of when cash flows occur; (5) for federal insurance programs, the key information is whether premiums over the long term will be sufficient to pay for covered losses; (6) earlier recognition of the cost of the government's insurance commitments under a risk-assumed accrual-based budgeting approach would: (a) allow for more accurate cost comparisons with other programs; (b) provide an opportunity to control costs before the government is committed to making payments; (c) build budget reserves for future claims; and (d) better capture the timing and magnitude of the impact of the government's actions on private economic behavior; (7) a crucial component in the effective implementation of accrual-based budgeting for federal insurance programs is the ability to generate reasonable, unbiased estimates of the risk assumed by the federal government; (8) GAO reviewed three different approaches to incorporating risk-assumed estimates into the budget: (a) under the supplemental approach, accrual-based cost measures would be included as supplemental information in the budget documents; (b) under the budget authority approach, accrual-based cost measures would be included in budget authority for the insurance program account and in the aggregate budget totals; and (c) under the outlay approach, accrual-based cost measures would be incorporated into both budget authority and net outlays for the insurance program account and in the budget totals; and (9) the complexity of the issues involved and the need to build agency capacity to generate such estimates suggest that it is not feasible to integrate accrual-based costs directly into the budget at this time.
In 1984, a catastrophic accident caused the release of methyl isocyanate—a toxic chemical used to make pesticides—at a Union Carbide plant in Bhopal, India, killing thousands of people, injuring many others, and displacing many more from their homes and businesses. One month later, it was disclosed that the same chemical had leaked at least 28 times from a similar Union Carbide facility in Institute, West Virginia. Eight months later, 3,800 pounds of chemicals again leaked from the West Virginia facility, sending dozens of injured people to local hospitals. In the wake of these events, Congress passed the Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA). Among other things, EPCRA provides access by individuals and communities to information regarding hazardous materials in their communities. Section 313 of EPCRA generally requires certain facilities that manufacture, process, or otherwise use any of 581 individual chemicals and 30 additional chemical categories to annually report the amount of those chemicals that they released to the environment, including information about where they released those chemicals. EPCRA also requires the Environmental Protection Agency (EPA) to make this information available to the public, which the agency does in a national database known as the Toxics Release Inventory (TRI). The public may access TRI data on EPA’s website and aggregate it by zip code, county, state, industry, and chemical. EPA also publishes an annual report that summarizes national, state, and industry data. Figure 1 illustrates TRI reporting using a typical, large coal-fired electric power plant as an example. The figure notes the chemicals that the facility may have to report to the TRI. The primary input to this facility is coal that contains small amounts of a number of toxic chemicals such as arsenic, chromium, and lead. The facility pulverizes coal and burns it to generate electricity. As part of its standard operations, the facility releases TRI chemicals such as hydrochloric acid and sulfuric acid to the air through its stack. The facility may also send ash from the burning process to an ash pond or landfill, including TRI chemicals such as arsenic, lead, and zinc. In addition, the facility may release chemicals in the water it uses for cooling. The facility will have to complete a TRI report for air, land, and water releases of each chemical it uses above a certain threshold. Owners of facilities subject to EPCRA comply with its reporting requirements by submitting an annual Form R report to EPA, and their respective state, for each TRI-listed chemical that they release in excess of certain thresholds. Form R captures information about facility identity, such as address, parent company, industry type, latitude, and longitude, and detailed information about the toxic chemical, such as quantity of the chemical disposed or released onsite to air, water, land, and underground injection or transferred for disposal or release off-site. This information is labeled as “Disposal or Other Releases” on the left side of figure 2. In the Pollution Prevention Act (PPA), Congress declared that pollution should be prevented or reduced at the source whenever feasible; pollution that cannot be prevented should be recycled in an environmentally safe manner, whenever feasible; pollution that cannot be prevented or recycled should be treated in an environmentally safe manner whenever feasible; and disposal or other release into the environment should be employed only as a last resort and should be conducted in an environmentally safe manner.
Consequently, EPA expanded TRI by requiring facilities to report additional information about their efforts to reduce pollution at its source, including the quantities of TRI chemicals they manage in waste, both on- and off-site, such as amounts recycled, burned for energy recovery, or treated. EPA began capturing this information on Form R in 1991, as illustrated by “Other Waste Management” on the right side of figure 2. Beginning in 1995, EPA allowed facilities to use a 2-page Certification Statement (Form A) to certify that they are not subject to Form R reporting for a given chemical that is not persistent, bioaccumulative, and toxic (non-PBT), provided that they (1) did not release more than 500 total pounds and (2) did not manufacture, process, or otherwise use more than one million total pounds of the chemical. Form A contains the facility identification information found on Form R and basic information about the identity of the chemical being reported. However, Form A does not contain any of the Form R details about quantities of chemicals released or otherwise managed as waste. Beginning with Reporting Year 2001, EPA has provided the Toxics Release Inventory–Made Easy software (TRI-ME) to assist facilities with their TRI reporting. TRI-ME leads prospective reporters interactively through a series of questions that eliminate much of the analysis required to determine whether a facility needs to comply with the TRI reporting requirements, including the threshold calculations needed to determine Form A eligibility. If TRI-ME determines that a facility is required to report, the software provides guidance for each of the data elements on the reporting forms. The software also provides detailed guidance for each step through an integrated assistance library. Prior to submission, TRI-ME performs a series of validation checks before the facility prints the forms for mailing, transfers the data to diskette, or submits the information electronically over the Internet. Each year, EPA compiles the reports and stores them in the TRI database. In 2004—the latest year for which data are publicly available—23,675 facilities filed a total of nearly 90,000 reports, including nearly 11,000 Form As. In total, facilities reported releasing 4.24 billion pounds of chemicals to the environment and handling 21.8 billion pounds of chemicals through other waste management activities. EPA recently embarked on a three-phase effort to streamline TRI reporting requirements and reduce the reporting burden on industry. During the first phase, EPA removed some data elements from Form A and Form R that could be obtained from other EPA information collection databases to simplify reporting. As part of the second phase, EPA issued the TRI Burden Reduction Proposed Rule, which would have allowed a reporting facility to use Form A (a) for non-PBT chemicals, so long as its releases or other disposal were not greater than 5,000 pounds, and (b) for PBT chemicals when there are no releases or other disposal and no more than 500 pounds of other waste management (e.g., recycling or treatment). The phase III changes that EPA was considering proposing would have allowed alternate-year reporting, rather than yearly reporting. The phase II and III changes generated considerable public concern that they would negatively impact federal and state governments’ and the public’s access to important public health information.
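The original Form A eligibility test for non-PBT chemicals described above reduces to two threshold checks, sketched below. The default values mirror the 1995 rule; the function and parameter names are illustrative rather than EPA's.

```python
# Sketch of the 1995 Form A eligibility test for a non-PBT chemical:
# no more than 500 pounds released and no more than one million pounds
# manufactured, processed, or otherwise used. Names are illustrative,
# not EPA's.

def form_a_eligible(released_lbs, activity_lbs,
                    release_limit=500, activity_limit=1_000_000):
    """True if the facility may certify on Form A for this chemical."""
    return released_lbs <= release_limit and activity_lbs <= activity_limit

print(form_a_eligible(450, 800_000))    # True  -> 2-page Form A suffices
print(form_a_eligible(1_800, 800_000))  # False -> detailed Form R required

# Under the 2,000-pound release threshold EPA ultimately adopted
# (discussed below), the second facility would also qualify:
print(form_a_eligible(1_800, 800_000, release_limit=2_000))  # True
```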
Although we have not yet completed our review, our preliminary observations are that EPA does not appear to have followed its own rulemaking guidelines in all respects when developing the new TRI reporting requirements. Throughout the rule development process, senior EPA management generally has the discretion to depart from the guidelines, including by accelerating the development of the proposed regulations. Nevertheless, we discovered several significant differences between the guidelines and the process EPA followed in this case: (1) late in the rulemaking process, senior EPA management directed consideration of a burden reduction option that the TRI workgroup had considered but which had subsequently been dropped from consideration; (2) EPA developed this option on an expedited schedule that appears to have provided a limited amount of time for conducting various impact analyses; and (3) the expedited schedule afforded little, if any, time for internal stakeholders to provide input to senior EPA management about the impacts of the proposal during Final Agency Review. First, the TRI workgroup charged with identifying options to reduce reporting burdens on industry identified three possible options for senior management to consider. The first two options would have allowed facilities to use Form A in lieu of Form R for PBT chemicals, provided the facility had no releases to the environment. Specifically, the workgroup considered and analyzed options to allow facilities to: report PBT chemicals using Form A if they have zero releases and zero total other waste management activities; or report PBT chemicals using Form A if they have zero releases and no more than 500 pounds of other waste management activities. The third option was to create a form, in lieu of Form R, for facilities to report “no significant change” if their releases changed little from the previous year. According to a June 2005 briefing for the Administrator and interviews with senior EPA officials, the Office of Management and Budget (OMB) had suggested increasing the Form A eligibility threshold for non-PBT chemicals from 500 to 5,000 pounds as a possible burden reduction option. However, the TRI workgroup had previously dropped that option from consideration. In fact, EPA’s economic analysis—dated July 2005—did not evaluate the impact of raising the Form A reporting threshold because the TRI workgroup pursued the “no significant change” option. Nonetheless, by the time the TRI burden reduction proposed rule was published in October 2005, it included the option to increase Form A reporting eligibility from 500 to 5,000 pounds. Second, although we could not determine from the documents EPA provided or the discussions we held with EPA officials what actions the agency took between the June 2005 briefing for the Administrator and the October 2005 publication of the TRI proposal in the Federal Register, the Administrator provided direction after the briefing to expedite the process in order to meet a commitment to OMB to provide burden reduction by the end of December 2006. Subsequently, EPA staff worked to revise the economic analysis to consider the impact of raising the Form A reporting threshold. However, that analysis was not completed before EPA sent the proposed rule to OMB for review and was only completed just prior to the proposal being signed by the Administrator on September 21, 2005, and ultimately published in the Federal Register for public comment on October 4, 2005.
Third, it appears that EPA management received limited input from internal stakeholders, including the TRI workgroup, after directing that the proposed rule include the option to increase the Form A reporting threshold from 500 to 5,000 pounds. EPA conducted a Final Agency Review of the burden reduction proposal, as provided for in the internal rulemaking guidelines. Final Agency Review is the step where EPA’s internal and regional offices would have discussed with senior management whether they concurred, concurred with comment, or did not concur with the final proposal. It appears that the review pertained to the “no significant change” option rather than the increased threshold option. As a result, the EPA Administrator or EPA Assistant Administrator for Environmental Information likely received limited input from internal stakeholders about the increased Form A threshold prior to sending the TRI Burden Reduction Proposed Rule to OMB for review and publication in the Federal Register for public comment. Finally, in response to the public comments to the proposed rule, nearly all of which were negative, EPA considered alternative options and revised the rule to allow facilities to report releases of up to 2,000 pounds on Form A. We continue to review EPA documents and meet with EPA officials to understand the process EPA followed in developing the TRI burden reduction proposal. We expect to have a more complete picture for our report in June. We believe that EPA’s changes to the TRI reporting requirements will likely have a significant impact on the environmental information available to the public. While our analysis confirms EPA’s estimate that the TRI reporting changes could result in less than 1 percent of total pounds of chemical releases no longer being included in the TRI database, the impact on information available to some communities is likely to be more significant than this national aggregate total indicates. EPA estimated that the converted reports would amount to 5.7 million pounds of releases not being reported to the TRI (only 0.14 percent of all TRI release pounds) and an additional 10.5 million pounds of waste management activities (0.06 percent of total waste management pounds). To understand the potential impact of EPA’s changes to TRI reporting requirements at the local level, we used 2005 TRI data to estimate the number of detailed Form R reports that would no longer have to be submitted in each state and the impact this could have on data about specific chemicals and facilities. We provide a summary of our methodology and estimates of these impacts, by state, in Appendix I. In addition, preliminary results from our January 2007 survey of state TRI coordinators indicate that they believe EPA’s changes to TRI reporting requirements will have, on balance, a negative impact on various aspects of TRI, including environmental information available to the public. We estimated that a total of nearly 22,200 Form R reports could convert to Form A if all eligible facilities choose to take advantage of the opportunity to report under the new Form A thresholds. The number ranges by state from 25 Form Rs in Vermont (27.2 percent of Form Rs in the state) to 2,196 Form Rs in Texas (30.6 percent of Form Rs in the state).
As figure 3 shows, Arkansas, Idaho, Nevada, North Dakota, and South Dakota could lose less than 20 percent of the detailed forms, while Alaska, California, Connecticut, Georgia, Hawaii, Illinois, Maryland, Massachusetts, New Jersey, New York, North Carolina, Rhode Island, and Texas could lose at least 30 percent of Form R reports. For each facility that chooses to file a Form A instead of Form R, the public would no longer receive detailed information about the facility’s releases and waste management practices for a specific chemical that the facility manufactured, processed, or otherwise used. While both Form R and Form A capture information about a facility’s identity, such as mailing address and parent company, and information about a chemical’s identity, such as its generic name, only Form R captures detailed information about the chemical, such as quantity disposed or released onsite to air, water, and land or injected underground, or transferred for disposal or release off-site. Form R also provides information about the facility’s efforts to reduce pollution at its source, including the quantities managed in waste, both on- and off-site, such as amounts recycled, burned for energy recovery, or treated. We provide a detailed comparison of the TRI data on Form R and Form A in Appendix II. One way to characterize the impact of the TRI reporting changes on publicly available data is in terms of information about specific chemicals at the state level. The number of chemicals for which no information is likely to be reported under the new rule ranges from 3 chemicals in South Dakota to 60 chemicals in Georgia. That means that all quantitative information currently reported about those chemicals could no longer appear in the TRI database. Figure 4 shows that 13 states—Delaware, Georgia, Hawaii, Iowa, Maryland, Massachusetts, Missouri, North Carolina, Oklahoma, Tennessee, Vermont, West Virginia, and Wisconsin—could no longer have quantitative information for at least 20 percent of all reported chemicals in the state. The impact of the loss of information from these Form R reports can also be understood in terms of the number of facilities that could be affected. We estimated that 6,620 facilities nationwide could choose to convert at least one Form R to a Form A, and about 54 percent of those would be eligible to convert all their Form Rs to Form A. That means that approximately 3,565 facilities would not have to report any quantitative information about their chemical releases and other waste management practices to the TRI, according to our estimates. The number of facilities ranges from 5 in Alaska to 302 in California. As an example, one of these facilities is ATSC Marine Terminal—a bulk petroleum storage facility in Los Angeles County, California. In 2005, it reported releases of 13 different chemicals—including highly toxic benzene, toluene, and xylene—to the air. Although the facility’s releases totaled about 5,000 pounds, it released less than 2,000 pounds of each chemical. As figure 5 shows, more than 10 percent of facilities in each state except Idaho would no longer have to report any quantitative information to the TRI. The most affected states are Colorado, Connecticut, the District of Columbia, Hawaii, Massachusetts, and Rhode Island, where more than 20 percent of facilities could choose to not disclose the details of their chemical releases and other waste management practices.
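Our estimates of convertible Form Rs and fully convertible facilities came from applying the new threshold to per-facility, per-chemical release totals; a simplified sketch of that kind of screen follows. The record layout and figures are invented for illustration, and the real analysis involves additional eligibility conditions.

```python
# Simplified sketch of the screening analysis described above: flag
# Form R reports that could convert to Form A under a 2,000-pound
# release threshold, and identify facilities whose every report could
# convert. Records are invented for illustration.

from collections import defaultdict

reports = [  # (facility, chemical, pounds released in the year)
    ("Terminal A", "benzene", 1_200),
    ("Terminal A", "toluene",   900),
    ("Plant B",    "lead",    7_500),
    ("Plant B",    "zinc",    1_100),
]

THRESHOLD = 2_000
convertible = [r for r in reports if r[2] <= THRESHOLD]

flags_by_facility = defaultdict(list)
for facility, _chemical, lbs in reports:
    flags_by_facility[facility].append(lbs <= THRESHOLD)

fully_convertible = [f for f, flags in flags_by_facility.items() if all(flags)]

print(f"{len(convertible)} of {len(reports)} Form Rs could convert to Form A")
print(f"facilities that could file no Form Rs at all: {fully_convertible}")
```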
Furthermore, our analysis found that citizens living in 75 counties in the United States—including 11 in Texas, 10 in Virginia, and 6 in Georgia—could have no quantitative TRI information about local toxic pollution. The Emergency Planning and Community Right-to-Know Act requires that facilities submit their annual TRI data directly to their respective state, as well as to EPA. Last month, we surveyed the TRI program contacts in the 50 states and the District of Columbia to gain their perspective on the TRI, including an understanding of how TRI is used by the states. We also asked for their beliefs about how EPA’s increase in the Form A eligibility threshold would affect TRI-related aspects in their state, such as information available to the public, efforts to protect the environment, emergency planning and preparedness, and costs to facilities for TRI reporting. Although our analysis of the survey is not final, preliminary results from 49 states and the District of Columbia show that the states generally believe that the change will have a negative impact on various aspects of TRI in their states. Very few states reported that the change will have a positive impact. The states reported that the TRI changes will have a negative impact on such TRI aspects as information available to the public and efforts to protect the environment. Specifically, 23 states—including California, Maryland, New York, and Oklahoma—responded that the changes will negatively impact information available to the public, 14 states—including Louisiana, Ohio, and Wyoming—reported no impact, and one state, Virginia, reported a generally positive impact. Similarly, 22 states responded that the change will negatively impact efforts to protect the environment, 11 reported no impact, and 5 said it will have a positive impact. States also responded that raising the eligibility threshold will have no impact on TRI aspects such as emergency planning and preparedness efforts and the cost to facilities for TRI reporting. For example, 22 states responded that the change will have no impact on the cost to facilities for TRI reporting, 12 said it will have a positive impact, and no states said it will have a negative impact. The totals do not always sum to 50 because some states responded that they were uncertain of the impact on some aspects of TRI. Finally, we evaluated EPA’s estimates of the burden reduction impacts that the new TRI reporting rules would likely have on industry’s reporting costs, the primary rationale for the rule changes. EPA estimated that the TRI reporting changes will result in an annual cost savings of approximately $5.9 million. (See table 1.) This amounts to about 4 percent of the $147.8 million total annual cost to industry, according to our calculations, or an average savings of less than $900 annually for each facility. EPA also projected that not all eligible facilities will choose to use Form A, based on the agency’s experience from previous years. Furthermore, according to industry groups, much of the reporting burden comes from the calculations required to determine and substantiate Form A eligibility, rather than from the amount of time required to complete the forms. As a result, EPA’s estimate of nearly $6 million likely overestimates the total cost savings (i.e., burden reduction) that will be realized by reporting facilities.
We are continuing to review EPA documentation and meet with EPA officials to understand the process they followed in developing the TRI burden reduction proposal. We expect to have a more complete picture for our report later this year. Perchlorate is a salt that is easily dissolved and transported in water and has been found in groundwater, surface water, drinking water, soil, and food products such as milk and lettuce across the country. Health studies have shown that perchlorate can affect the thyroid gland and may cause developmental delays during pregnancy and early infancy. In February 2005, EPA established a new safe exposure level, or reference dose, for perchlorate, equivalent to 24.5 parts per billion in drinking water. However, EPA has not established a national drinking water standard, citing the need for more research on health effects. As a result, perchlorate, like other unregulated contaminants, is not subject to TRI reporting. In May 2005 we issued a report that identified (1) the estimated extent of perchlorate found in the United States; (2) what actions the federal government, state governments, and responsible parties have taken to clean up or eliminate the source of perchlorate; and (3) what studies of the potential health risks from perchlorate have been conducted and, where presented, the authors’ conclusions or findings on the health effects of perchlorate. Perchlorate has been found by federal and state agencies in groundwater, surface water, soil, or public drinking water at almost 400 sites in the United States. However, because there is not a standardized approach for reporting perchlorate data nationwide, more sites may exist than we identified. Perchlorate has been found in 35 states, the District of Columbia, and 2 commonwealths of the United States, where the highest concentrations ranged from 4 parts per billion to more than 3.7 million parts per billion. (At some sites, federal and state agencies detected perchlorate concentrations as low as 1 part per billion or less, yet 4 parts per billion is the minimum reporting level of the analysis method most often used.) More than 50 percent of all sites were found in California and Texas, and sites in Arkansas, California, Texas, Nevada, and Utah had some of the highest concentration levels. However, roughly two-thirds of sites had concentration levels at or below 18 parts per billion, the upper limit of EPA’s provisional cleanup guidance, and almost 70 percent of sites had perchlorate concentrations less than 24.5 parts per billion, the drinking water concentration calculated on the basis of EPA’s recently established reference dose (see fig. 6). At more than one-quarter of the sites, propellant manufacturing, rocket motor testing, and explosives disposal were the most likely sources of perchlorate. Public drinking water systems accounted for more than one-third of the sites where perchlorate was found. EPA sampled more than 3,700 public drinking water systems and found perchlorate in 153 systems across 26 states and 2 commonwealths of the United States. Perchlorate concentration levels found at public drinking water systems ranged from 4 to 420 parts per billion. However, only 14 of the 153 public drinking water systems had concentration levels above 24.5 parts per billion.
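The 24.5 parts per billion figure used throughout this discussion follows from the reference dose by a standard unit conversion. A worked check, assuming the conventional default exposure values of a 70-kilogram adult drinking 2 liters of water per day (assumptions not stated in this testimony):

```python
# Worked check of the conversion from the perchlorate reference dose
# (0.0007 mg per kg of body weight per day, cited later in this
# statement) to its drinking water equivalent. The 70 kg body weight
# and 2 L/day intake are conventional default assumptions, not figures
# given in this testimony.

rfd_mg_per_kg_day = 0.0007
body_weight_kg = 70.0
water_intake_l_per_day = 2.0

concentration_mg_per_l = (rfd_mg_per_kg_day * body_weight_kg
                          / water_intake_l_per_day)
concentration_ppb = concentration_mg_per_l * 1_000.0  # 1 mg/L = 1,000 ppb

print(f"{concentration_ppb:.1f} parts per billion")  # 24.5
```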
EPA and state officials told us they had not cleaned up these public drinking water systems, principally because there was no federal drinking water standard or specific federal requirement to clean up perchlorate. Further, EPA currently does not centrally track or monitor perchlorate detections or the status of cleanup activities. In fact, several EPA regional officials told us they did not always know when states had found perchlorate, at what levels, or what actions were taken. As a result, it is difficult to determine the extent of perchlorate in the United States or the status of cleanup actions, if any.

Although there is no specific federal requirement to clean up perchlorate or a specific perchlorate cleanup standard, EPA and state environmental agencies have investigated, sampled, and cleaned up unregulated contaminants, such as perchlorate, under various federal environmental laws and regulations. EPA and state agency officials have used their authorities under these laws and regulations, as well as under state laws and action levels, to sample and clean up perchlorate or to require responsible parties to do so. For example, according to EPA and state officials, at least 9 states have established non-regulatory action levels or advisories, ranging from under 1 part per billion to 18 parts per billion. Where these action levels or advisories are in effect, responsible parties have been required to sample and clean up perchlorate. Further, certain environmental laws and programs require private companies to sample for contaminants, which can include unregulated substances such as perchlorate, and report to environmental agencies. According to EPA and state officials, private industry and public water suppliers have generally complied with regulations requiring sampling for contaminants and with agency requests to sample or clean up perchlorate. DOD has sampled and cleaned up when required by specific environmental laws and regulations but has been reluctant to sample on or near active installations unless a perchlorate release due to DOD activities is suspected and a complete human exposure pathway is likely to exist.

Finally, EPA, state agencies, and/or responsible parties are currently cleaning up or planning cleanup at 51 of the almost 400 sites where perchlorate has been found. The remaining sites are not being cleaned up for a variety of reasons. The reason most often cited by EPA and state officials was that they were waiting for a federal requirement to do so.

We identified and summarized 90 studies of perchlorate health risks published since 1998. EPA and DOD sponsored the majority of these studies, which used experimental, field study, and data analysis methodologies. For 26 of the 90 studies, the findings indicated that perchlorate had an adverse effect. Eighteen of these studies found adverse effects on fetal or child development resulting from maternal exposure to perchlorate. Although the studies we reviewed examined whether and how perchlorate affected the thyroid, most of the studies of adult populations were unable to determine whether the thyroid was adversely affected. Adverse effects of perchlorate on the adult thyroid are difficult to evaluate because they may occur over longer time periods than can be observed in a research study. However, adverse effects of perchlorate on fetal or child development can be studied and measured within study time frames.
We also found that some studies considered the same perchlorate dose amount but identified different effects. The precise cause of these differences remains unresolved but may be attributable to an individual study's design or the physical condition of the subjects, such as their age. Such unresolved questions are one of the bases for the differing conclusions among EPA, DOD, and academic studies on perchlorate dose amounts and effects.

In January 2005, NAS issued its report on the potential health effects of perchlorate. The NAS report evaluated many of the same health risk studies included in our review. NAS reported that certain levels of exposure may not adversely affect healthy adults but recommended that more studies be conducted on the effects of perchlorate exposure in children and pregnant women. NAS also recommended a perchlorate reference dose, which is an estimated daily exposure level from all sources that is expected not to cause adverse effects in humans, including the most sensitive populations. The reference dose of 0.0007 milligrams per kilogram of body weight per day is equivalent to a drinking water exposure level of 24.5 parts per billion, if all exposure comes from drinking water. (Using EPA's standard adult exposure assumptions of 70 kilograms of body weight and 2 liters of water consumed per day, 0.0007 mg/kg-day × 70 kg ÷ 2 L/day ≈ 0.0245 mg/L, or 24.5 parts per billion.) In January 2006, EPA issued guidance stating that this exposure level is a preliminary cleanup goal for environmental cleanups involving perchlorate.

We concluded that EPA needed more reliable information on the extent of sites contaminated with perchlorate and the status of cleanup efforts, and we recommended that EPA work with the Department of Defense, other federal agencies, and the states to establish a formal structure for better tracking perchlorate information. In December 2006, EPA reiterated its disagreement with the recommendation, stating that perchlorate information already exists from a variety of other sources. However, we found that the states and federal agencies do not always report perchlorate detections to EPA, and as a result EPA and the states do not have the most current and complete accounting of perchlorate as an emerging contaminant of concern. We continue to believe that the inconsistency and omissions in the available data that we found during the course of our study underscore the need for a more structured and formal system, and that such a system would better inform the public and others about the locations of perchlorate releases and the status of cleanups.

Contrary to EPA's assertions, in our view EPA's recent changes to the Toxics Release Inventory significantly reduce the amount of information available to the public about toxic chemicals in their communities. EPA's portrayal of the potential impacts of the TRI reporting rule changes in terms of a national amount of pollution is misleading and runs contrary to the legislative intent of EPCRA and the principles of the public's right to know. TRI is designed to provide states and public citizens with information about the releases of toxic chemicals by facilities in their local communities. Citizens drink water from local sources, spend much of their time on land near their homes and places of business, and breathe the air over their local communities. We believe that the likely reduction in publicly available data about specific chemicals and facilities in local communities should be weighed against the relatively small cost savings to industry afforded by the TRI reporting changes.

Madam Chairman, this concludes my prepared statement.
I would be happy to respond to any questions that you and Members of the Committee may have.

For further information about this testimony, please contact me, John Stephenson, at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include J. Erin Lansburgh, Assistant Director, and Terrance Horner, Senior Analyst; Mark Braza, John Delicath, Karen Febey, Edward Kratzer, Richard Johnson, and Jennifer Popovic also made key contributions.

We analyzed 2005 TRI data provided by EPA to estimate the number of Form Rs that could no longer be reported in each state and determine the possible impacts that this could have on data about specific chemicals and facilities. Table 2 provides our estimates of the total number of Form Rs eligible to convert to Form A, including the percent of total Form Rs submitted by facilities in each state. The table also provides our estimates of the number of unique chemicals for which no quantitative information would have to be reported in each state, including the percent of total chemicals reported in each state. The last two columns provide our estimates for the number of facilities that would no longer have to provide quantitative information about their chemical releases and waste management practices, including the percent of total facilities reporting in each state.

Form R and Form A both collect trade secret information (if claiming that a toxic chemical is a trade secret) and parent company information (name, Dun & Bradstreet number). The following data elements, however, are reported only on Form R and not on Form A:

releases to air as fugitive or non-point emissions;
releases to surface water as discharges to receiving streams or water bodies (including the names of streams or water bodies);
releases to land, including RCRA Subtitle C landfills, other landfills, land treatment/application farming, RCRA Subtitle C surface impoundments, other surface impoundments, and other land disposal;
the basis for estimates of releases (i.e., monitoring data or measurements, mass balance calculations, emissions factors, or other approaches);
recycling processes (e.g., metal recovery by smelting, solvent recovery);
energy recovery methods (e.g., kiln, furnace, boiler);
waste treatment methods (e.g., scrubber, electrostatic precipitator) for each waste stream (e.g., gaseous, aqueous, liquid non-aqueous, solids);
estimated totals for the following and second following years for on-site disposal to underground injection wells, RCRA Subtitle C landfills, and other landfills; other on-site disposal or other releases; off-site transfer to underground injection wells, RCRA Subtitle C landfills, and other landfills; and other off-site disposal or other releases; and
source reduction activities the facility engaged in during the reporting year (e.g., inventory control, spill/leak prevention, product modifications).
U.S. industry uses billions of pounds of chemicals to produce the nation's goods and services. Releases of these chemicals during use or disposal can harm human health and the environment. The Emergency Planning and Community Right-to-Know Act of 1986 requires facilities that manufacture, process, or otherwise use more than specified amounts of nearly 650 toxic chemicals to report their releases to water, air, and land. The Environmental Protection Agency (EPA) makes these data available to the public in the Toxics Release Inventory (TRI). Since 1995, facilities have been allowed to submit a brief certification statement (Form A), in lieu of the detailed Form R report, if their releases of specific chemicals do not exceed 500 pounds a year. In January 2007, EPA finalized a proposal to increase that threshold to 2,000 pounds, quadrupling what facilities can release before they must disclose their releases and other waste management practices.

Today's testimony addresses (1) EPA's development of the proposal to change the TRI Form A threshold from 500 to 2,000 pounds and (2) the impact these changes may have on data available to the public. It also provides an update on our 2005 report recommendations on perchlorate. GAO's preliminary observations on TRI are based on ongoing work performed from June 2006 through January 2007.

Although we have not yet completed our evaluation, our preliminary observations indicate that EPA did not adhere to its own rulemaking guidelines in all respects when developing the proposal to change TRI reporting requirements. We have identified several significant differences between the guidelines and the process EPA followed. First, late in the process, senior EPA management directed the inclusion of a burden reduction option that raised the Form R reporting threshold, an option that the TRI workgroup charged with analyzing potential options had dropped from consideration early in the process. Second, EPA developed this option on an expedited schedule that appears to have provided a limited amount of time for conducting various impact analyses. Third, the decision to expedite final agency review, when EPA's internal and regional offices determine whether they concur with the final proposal, appears to have limited the amount of input they could provide to senior EPA management.

We believe that the TRI reporting changes will likely have a significant impact on information available to the public about dozens of toxic chemicals from thousands of facilities in states and communities across the country. First, we estimate that detailed information from more than 22,000 Form Rs could no longer be reported to the TRI if all eligible facilities choose to use Form A, affecting more than 33 percent of reports in California, Massachusetts, and New Jersey. Second, we estimate that states could lose all quantitative information about releases of some chemicals, ranging from 3 chemicals in South Dakota to 60 in Georgia. Third, we estimate that 3,565 facilities—including 50 in Oklahoma, 101 in New Jersey, and 302 in California—would no longer have to report any quantitative information to the TRI. In addition, preliminary results from our survey of state TRI coordinators indicate that many believe the changes will negatively impact information available to the public and efforts to protect the environment. Finally, EPA estimates facilities could save a total of $5.9 million as a result of the increased Form A eligibility—about 4 percent of the total annual cost of TRI reporting.
According to our estimates, facilities will save less than $900 a year, on average. Because not all eligible facilities are expected to use Form A, actual savings to industry are likely to be even lower.

In our May 2005 perchlorate report, we identified almost 400 sites in 35 states where perchlorate has been found in concentrations ranging from 4 parts per billion to more than 3.7 million parts per billion. We concluded that EPA needed more reliable information on the extent of contaminated sites and the status of cleanup efforts, and we recommended that EPA work with the Department of Defense and the states to establish a way to track perchlorate information. In December 2006, EPA reiterated its disagreement with our recommendation. We continue to believe that the inconsistency and omissions in available perchlorate data underscore the need for a tracking system to better inform the public and others about the locations of perchlorate releases and the status of cleanups.
The federal government funds a wide array of programs intended to provide benefits, services, or both to individuals, families, and households needing financial assistance or other social supports. Representing a range of programs available through federal and state partnerships, the seven programs in this review have different goals and purposes and thus provide a range of benefits and services to specific target populations. For example, the Food Stamp Program provides nutrition assistance to low-income individuals, while the CSE program helps custodial parents, regardless of income, collect child support payments from noncustodial parents. Table 1 provides a brief description of each of the programs covered in this report.

The programs included in this review also represent a wide range of spending, from $2.9 billion for Adoption Assistance to $37 billion for UI. For fiscal year 2004, total spending, including administrative and all other spending, for the seven programs was $119 billion. Additionally, each of the seven selected programs is administered or overseen by one of three different federal departments. Table 2 shows the agency responsible for each program and total program expenditures for fiscal year 2004, the most current year available.

The programs covered by this review have varying federal-state relationships for administering and funding the programs. The level of government with responsibility for designing the rules, services, and processes varies by program. Some programs have federally standardized designs, while other programs provide states with flexibility to develop their own eligibility criteria, benefit levels, and program rules. All of the programs are funded through some form of federal-state partnership; however, the rules governing funding responsibility vary widely by program. Table 3 summarizes the level of government with which responsibility for the design and funding resides for each of the seven programs.

A low-income family is likely to be eligible for, and to participate in, several human service programs. For example, in 2001, 88 percent of families receiving TANF also received food stamp benefits and 98 percent received Medicaid. The programs are typically administered out of a local assistance office that offers benefits from several programs. Depending on the administrative structure of the local assistance office, a family or individual might provide necessary information to only one caseworker who determines eligibility and benefits for multiple programs, or they might work with several caseworkers who administer benefits for different programs.

Because eligibility determination and other activities are often conducted jointly for multiple programs, some programs require states to have a process to ensure that costs are appropriately charged to the correct federal programs for federal reimbursement purposes. Cost allocation is the formal process for sharing the costs of activities that are performed jointly for more than one program. Formal "cost allocation plans" specify how costs for those activities are to be covered by the various programs. For example, when a local eligibility worker determines that an applicant is eligible for TANF, food stamp benefits, and CCDF, the cost of his or her time is allocated across these programs in accordance with the state's approved cost allocation plan.
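To illustrate how an approved plan might operate in practice, the following sketch splits a shared cost in proportion to staff time. The proportional-to-time rule and the figures are hypothetical; they are not drawn from any state's actual cost allocation plan:

def allocate_shared_cost(cost, minutes_by_program):
    """Split a jointly incurred cost across programs in proportion to staff time."""
    total_minutes = sum(minutes_by_program.values())
    return {program: cost * minutes / total_minutes
            for program, minutes in minutes_by_program.items()}

# One hour of a caseworker's time (say $60 of salary and benefits) spent on a
# joint eligibility interview covering three programs:
print(allocate_shared_cost(60.0, {"TANF": 30, "Food Stamps": 20, "CCDF": 10}))
# {'TANF': 30.0, 'Food Stamps': 20.0, 'CCDF': 10.0}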
These costs are then reported as administrative or programmatic costs, depending on the laws and regulations governing funding for each of the programs. Across the seven programs in our review, the legal definitions of administrative costs and the federal funding rules for those costs vary. The statutes and regulations for each program define administrative costs differently, even though many of the same activities are performed to administer the programs. The laws for each program also include different mechanisms for state and federal participation in funding administrative costs, including specific matching rates, block grants, and spending caps, which can affect state decisions on administrative spending.

Although many of the programs we reviewed conduct similar activities to administer the program, not all of the activities are defined in laws and regulations as administrative costs for financial reporting purposes. Based on our analysis of information collected from state and local officials, we identified several categories of administrative activities that are common across many of these programs. In particular, we found that the administration of each program involves at least (1) determining who is eligible to participate in the program or processing applications for new participants, (2) monitoring program quality, (3) conducting general program management and planning, (4) developing and maintaining IT systems, and (5) training employees, either at the state or local level. All of the programs also issue benefits or provide services to eligible participants, with the exception of CSE, which generally collects and distributes payments from noncustodial parents to custodial families. Additional activities, such as case management and outreach, are performed in only some of the programs or some states.

However, the statutes and regulations for each program differ on which of these activities are defined as administrative costs and which are not. For example, the TANF regulations and the CCDF statute defining administrative costs specifically exclude costs associated with providing direct program services, while the Food Stamp statute specifically includes the costs of providing direct program services, such as certifying applicant households and issuing food stamp benefits, as administrative costs. In addition, some statutes and regulations are more comprehensive in identifying which activities or items are specifically included in or excluded from the definition of administrative costs. For example, while UI legislation allows for amounts "necessary for the proper and efficient administration" of state programs with few other qualifiers, the Food Stamp legislation and regulations list dozens of specific costs, including such items as audit services, advisory councils, building lease management, and certain advertising costs. Appendix II identifies the activities and items that are specifically included in the definitions of administrative costs in the statutes and regulations for each program. Nonetheless, most of the lists of activities in program statutes and regulations are only illustrative, not exhaustive. Phrases such as "these activities may include but are not limited to…" are commonplace and leave the exact definitions of administrative costs somewhat ambiguous. Such ambiguity may lead to inconsistent interpretation of the definitions of administrative costs.
Our prior work on administrative costs in the Adoption Assistance and Foster Care programs found that state program officials and HHS regional offices make different decisions as to what costs are appropriate to claim as administrative.

The statutes and regulations defining administrative costs differ across the programs in part because these programs have evolved separately over time and have different missions, priorities, services, and clients. The CSE program, in particular, differs from the other programs in our review in that CSE does not provide public financial benefits to its participants; rather, CSE services include collecting and distributing payments from noncustodial parents to custodial families, other states, and federal agencies. In addition, although the programs conduct similar activities, differences in missions and priorities may add to differences in spending on particular activities. For example, the Food Stamp Program's extensive requirements for monitoring program quality may result in more spending on this activity than for a program with few quality control requirements.

The number of congressional committees and federal agencies involved in developing laws and regulations for these programs has also contributed to differences in the definitions of administrative costs and can make coordination across programs difficult. The division of legislative and executive responsibility allows multiple points of access for Members of Congress, interest groups, and the affected public, but the various legislative committees and executive agencies do not necessarily collaborate with each other to develop consistent laws and regulations across programs. Federal legislation for all of the programs in our review, except the Food Stamp Program, is under the jurisdiction of the Senate Finance Committee and the House Ways and Means Committee, although some aspects of these programs are under the jurisdiction of other congressional committees. Federal regulations for Adoption Assistance and Foster Care, CCDF, CSE, and TANF are developed by various offices within the HHS Administration for Children and Families; Food Stamp Program regulations are developed by the U.S. Department of Agriculture (USDA) Food and Nutrition Service; and UI regulations are developed by the DOL Employment and Training Administration.

Federal and state officials we interviewed disagreed on whether it was problematic to have different definitions of administrative costs across programs. According to officials from OMB, whose role is to improve administrative management of federal programs, differences in legal definitions of administrative costs across programs are not a barrier to program management. OMB officials stated that it is important to accept that programs are different and that it may not be possible to compare costs across programs. A number of state budget officials responsible for financial reporting, however, described how the variation in definitions of administrative costs creates difficulties. For example, one budget official stated that it can be difficult to develop coding for accounting and budgeting that can be used across programs and, as a result, it can be difficult to monitor costs accurately. A budget official in another state similarly argued that having consistent definitions of administrative costs and consistent caps on administrative spending would help to simplify the process for allocating costs across programs and, therefore, might reduce costs.
On the other hand, state officials responsible for developing program policies and overseeing local implementation of the programs reported fewer difficulties with the differences in administrative cost definitions. Several of these officials reported that they pay little attention to which aspects of their jobs are defined as administrative activities and which are not.

Federal and state participation in funding the administrative costs of human service programs is governed by federal laws that establish matching rates, block grants, spending caps, and other funding mechanisms. These funding mechanisms, described below, play an important role in managing the federal government's risk and can affect states' spending behavior by producing financial incentives and funding restrictions.

Matching rates—In programs funded through federal matching rates, the federal government covers a portion of states' spending on program administration. For example, if a program has a 50-percent matching rate, the federal government is obligated to reimburse states for 50 percent of their spending on administration, as defined in law. Funding of Foster Care, Adoption Assistance, CSE, the Food Stamp Program, and a portion of CCDF includes matching rates.

Block grants—Block grants provide states with a statutorily fixed amount of funding. TANF and a portion of CCDF are funded through block grants. The TANF block grant does not change when caseloads change, nor is it adjusted for inflation. In both TANF and CCDF, states are required to spend a certain amount of their own funds to be eligible to receive the full amount of federal funds.

Spending caps—Spending caps limit the amount or percent of state or federal funds that can be spent for particular purposes. For example, the TANF statute prohibits states from spending more than 15 percent of federal funds received on administrative costs, while the CCDF statute prohibits states from spending more than 5 percent of aggregate program funds on administrative costs.

Other funding rules—The legislation governing the funding of administrative costs for the UI program gives responsibility for administrative funding to the federal government. DOL uses information gathered from the states to determine how much of the available funds each state will receive. While states are not required to spend their own funds on administrative costs, over 40 states chose to provide additional state funds to cover some administrative costs of the UI program in 2004.

Table 4 summarizes the rules governing state and federal funding of administrative costs. The table identifies for each program the federal funding mechanism and any federal matching rates, caps on administrative expenditures, and other rules regarding funding of administrative costs.

Administrative funding mechanisms can create financial incentives that affect state spending behavior; however, state responses to these incentives vary, according to the federal and state officials we interviewed. In some cases, matching rates can encourage states to spend more money on a program because for each dollar of its own resources the state invests, the state receives additional federal funding for the program. For example, the grants manager in one of the states we visited said that the federal matching rate gives the state an incentive to maintain its funding and to provide more services.
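A rough sketch of the arithmetic behind that incentive, using the statutory match rates cited later in this report; the $10 million spending figure is invented for illustration:

total_spending = 10e6   # hypothetical allowable administrative spending

for program, federal_share in [("Food Stamp", 0.50), ("CSE", 0.66)]:
    federal = total_spending * federal_share   # federal reimbursement at the match rate
    state = total_spending - federal           # the state's own share
    # Federal dollars drawn for each state dollar spent:
    print(program, round(federal / state, 2))  # Food Stamp 1.0; CSE 1.94

At a 50-percent match, each state dollar draws one federal dollar; at CSE's 66-percent match, each state dollar draws nearly two.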
In other cases, however, state officials reported that they limit their use of federal matching funds because they have limited state resources to invest in the program. For example, a budget official in one of the states we visited reported that when a new expenditure could be charged to either the Food Stamp Program, which has a matching rate, or the TANF block grant, the state or county might decide to use the TANF funds to avoid the need for the state to provide additional funding to meet its share of the matching funds.

Block grants, for their part, create general incentives for states to meet demand for services with limited spending because the federal funding amount is fixed. CCDF officials in Michigan stated that because they receive a fixed amount of funding, running the program efficiently is always at the front of their minds. Spending caps on the percentage of a block grant that can be spent on administrative costs are, by definition, designed to limit spending. However, officials in four of the five states we visited said that the CCDF and TANF caps on administrative spending were not a major factor in their administrative spending decisions. TANF administrative spending in the states we visited was well below the 15-percent cap. Nationally, state spending on administration was 7.7 percent for TANF and 2.3 percent for CCDF in fiscal year 2004. Some CCDF officials reported that their administrative spending decisions were influenced more by state limits on administrative spending than by the federal spending cap. For example, the California Department of Education estimates that for fiscal year 2006-2007, only 1 percent of program funds will be available for program administration due to current state budget constraints. This amount is well below the federal 5-percent cap because, according to CCDF program officials in California, the state legislature wanted to put every possible dollar into additional child care vouchers.

In addition, the funding allocation method for the UI program is designed to encourage states to administer their programs efficiently. Total funding appropriated for the UI program is less than the amount the states report needing to administer their UI programs. To promote efficiency, DOL reduces the requests of states with higher costs for certain "controllable" aspects of the budget by greater percentages than those of lower-cost states. For example, the longer it takes a state to process claims, the greater its reduction in the allocation process. Federal UI officials we interviewed argued that this process provides states with an incentive to increase efficiency. However, some state officials argued that the funding process creates disincentives for states to improve efficiency and reduce administrative spending. For example, they argued that if they invest in technologies that improve their efficiency in administering the program, they do not get to keep the savings they gain; rather, spending less in one year could result in less federal funding the next year.

While funding mechanisms can create incentives for states to limit administrative spending, officials in each of the states we visited cautioned that if administrative spending is reduced too far, it can negatively affect client services. Several officials described how reduced administrative spending due to state budget cuts had already affected the quality of their services.
For example, state human service officials in Maryland stated that a hiring freeze has resulted in a slower rate of application processing and an increase in Food Stamp administrative errors, such as eligible applicants being denied benefits. Local human service officials in Michigan reported that budget cuts had resulted in increased office waiting times for applicants and the elimination of services such as home visits and prevention services.

Administrative spending for the seven programs combined, as defined for financial reporting purposes by program statutes and regulations, made up about 18 percent of total program spending in fiscal year 2004. However, amounts varied widely across the programs and states. Between fiscal years 2000 and 2004, administrative spending increased in five of the seven programs, but generally increased at a lower rate than total program spending. Officials in the five states we visited reported that staff and technology made up a large portion of the administrative spending in their programs.

In fiscal year 2004, administrative spending, as defined for financial reporting purposes by program statutes and regulations, amounted to about 18 percent—or $21 billion—of the $119 billion in total program spending for the seven programs combined; however, there were large differences in the amounts spent by programs and states. As shown in figure 1, the amount spent on administration varied widely among the seven programs, ranging from $200 million in CCDF to $5.2 billion in the Food Stamp Program and $5.3 billion in CSE. As a percentage of total program spending, administrative spending ranged from 2 percent in CCDF to 58 percent in the Foster Care program, with the exception of CSE, in which all program spending is considered administrative. While administrative spending amounts varied significantly across the seven programs, this variation does not necessarily indicate that certain programs are more efficiently administered. Instead, differences in spending largely reflect the differences in how each program's laws and regulations define what counts as an administrative cost. As a result, comparing spending across programs is not a useful means for determining efficiency.

Each of the seven programs in our review is funded through a combination of federal and state contributions. For the seven programs combined, federal funds made up roughly 60 percent—or $13 billion—of the $21 billion spent on administration in fiscal year 2004. Federal spending accounted for roughly half or more of the total amount spent to administer each of the seven programs. Figure 2 shows the federal and state shares of administrative spending for each program. These shares largely reflect the different funding requirements set in law for each program, as described earlier in the report. For example, the federal government matches state administrative spending at specified rates in four of the seven programs. The federal match rate set out by law for administrative spending in the CSE program is 66 percent, while the match rate for the Adoption Assistance, Food Stamp, and Foster Care programs is 50 percent. (See table 4 for a description of the matching rates and other funding rules that govern state and federal spending in each program.)

As with spending across programs, in fiscal year 2004, the combined federal and state amount spent on administration also varied greatly by state within programs, as shown in figure 3. In some programs, this variation is considerable.
For example, in the Foster Care program, the percentage of total program spending on administration in fiscal year 2004 ranged from 21 percent to 86 percent. Such variation may suggest opportunities for improved administrative efficiency in some states; however, other factors also may account for the wide ranges in the percent spent on administration. Specifically, in the Foster Care and Adoption Assistance programs, our prior work cited differences in states' claiming practices, as well as differences in oversight among HHS regional offices, that may contribute to differences in state administrative spending. In addition, federal officials we interviewed said that, given high fixed costs, a small state might expend a higher percentage of its total program budget on administration than a larger state that serves more people with the same fixed costs.

Our recent work on administrative costs in CSE suggests that states' structures for administering their child support programs may also contribute to the cost of running the programs. Specifically, we reported that from fiscal year 2000 to fiscal year 2004, the median net federal expenditure for CSE agencies with state-operated programs decreased about 4 percent, while the median net federal expenditure for county-operated programs increased about 11 percent. A few officials we interviewed said that states with county-administered programs required more administrative spending due to the duplication of effort at the county and state levels. However, in Ohio—a state with a county-administered structure—officials reported that while the county-administered system may contribute to some inefficiencies, moving to a state-administered system would require the state to equalize pay scales and building costs around the state, which would likely increase administrative spending.

From fiscal years 2000 to 2004, administrative spending increased in most of the seven programs covered in this review, but at a lower rate than total program spending. As shown in figure 4, from fiscal years 2000 to 2004, combined federal and state administrative spending rose in five of the seven programs: Adoption Assistance, CSE, Food Stamp, Foster Care, and UI. In the remaining two programs, CCDF and TANF, administrative spending declined. CCDF administrative spending hovered just above $200 million, declining slightly, while TANF administrative spending declined by $300 million over the 5 years. In each of the five programs in which administrative spending rose, it increased by between about 17 and 19 percent over the 5 years. Administrative spending declined by 3 percent in CCDF and by 12 percent in TANF. Figure 5 shows the percent change in administrative spending during this time period for each of the seven programs. Over the same period, the rate of price inflation was 9 percent. Therefore, as illustrated in the figure, in the five programs in which administrative spending increased between fiscal years 2000 and 2004, the increase was much smaller when adjusted for inflation, shrinking to an increase of less than 10 percent in each program (for example, a 19 percent nominal increase deflated by 9 percent inflation is a real increase of about 9 percent, since 1.19/1.09 ≈ 1.09).

In the five states we visited, officials reported that staff salaries and benefits were among the largest costs associated with running their programs. According to DOL's wage index, average salaries and benefits for state and local government workers increased by 16 percent between 2000 and 2004.
The percent change in administrative spending for the majority of the programs in this review was slightly higher than this average, ranging between about 17 percent and 19 percent, as previously stated. In two of the programs, CCDF and TANF, the percent change fell below this average. While administrative spending may include several other types of spending beyond staff salaries and benefits, such as overhead and IT, rising salaries and benefits may explain some of the increase in spending among the programs in this review.

Although administrative spending increased between fiscal years 2000 and 2004 in the Adoption Assistance, Food Stamp, and UI programs, it increased at a lower rate than total program spending. As a result, in these three programs, as well as in CCDF and TANF, administrative spending declined relative to total program spending between fiscal years 2000 and 2004, as shown in figure 6, indicating that the amount spent on direct benefits and services was rising faster than the amount spent on administering those benefits and services. Administrative spending increased relative to total program spending in one program, Foster Care.

Officials in the five states we visited reported that staff and IT account for substantial portions of the spending related to operating their programs. In all five states we visited, officials reported that spending on staff, including salaries and benefits, was among the largest costs associated with running their programs, in part because certain program rules are complicated and require a considerable amount of staff time. To the extent that these costs are included in programs' definitions of administrative costs, they will affect the programs' reported administrative spending. As we have reported in our prior work, eligibility determination activities make up a substantial portion of administrative spending for some programs. Policy experts and researchers have found that the complexity of and variation in eligibility rules have substantially increased the staff resources needed to determine eligibility and benefit levels and have thereby increased the costs of administering programs.

Some of the officials in the states we visited said that multiple or outdated IT systems also require a great deal of staff time. For example, front-line staff and officials we interviewed reported that, in order to determine eligibility, staff must work manually outside the computer systems to get around problems in those systems. In addition, county officials we interviewed in one state said that the same client information must be entered into three separate systems. While outdated IT systems increase the staff resources needed at the local level, officials in four of the five states we visited reported that developing and maintaining IT systems also requires a significant amount of administrative dollars. For example, in California, county officials we interviewed reported that two new case management systems have been expensive to develop and implement. They said that initial system problems and training for staff to learn the new systems added to the costs. However, officials in a few states said they believed that their new systems would eventually reduce administrative effort, and they expected administrative costs to decrease as a result.
The federal government, including both Congress and the executive agencies, may help balance long-term administrative cost savings with program effectiveness and integrity by simplifying policies and facilitating technology improvements. Simplifying policies—especially those related to eligibility determination processes and federal funding structures—could save resources, improve productivity, and help staff focus more time on performing essential program activities, such as providing quality services and accurate benefits to recipients. In addition, by helping states facilitate technology enhancements across programs, the federal government can help streamline processes and potentially reduce long-term costs. Together, simplified policies and improved technology could streamline administrative processes and potentially reduce administrative costs.

We acknowledge that all levels of government have attempted to streamline processes across human service programs for the past 20 years. However, many of these efforts have had limited success due, in part, to the considerable challenges that streamlining program processes entails, such as the challenge of achieving consensus among the numerous congressional committees and federal agencies involved in shaping human service program policies, and a lack of information on how program changes would affect particular populations. We believe one challenge in particular—the lack of information on the effect streamlining efforts might have on program and administrative costs—is thwarting progress in this area.

Our current and previous reviews indicate that simplifying policies is particularly important for those programs that are administered jointly at the local level. In many localities, single offices administer TANF, food stamp benefits, CCDF, Medicaid, and SCHIP, and make referrals to or have some interaction with CSE, Adoption Assistance, Foster Care, UI, LIHEAP, and housing programs. Even though the programs are administered jointly, each has its own funding structure and eligibility rules, which can be cumbersome and require duplicative effort from staff. For example, when a family applies for TANF and food stamp benefits, the caseworker applies different rules and tests to determine who is eligible for benefits from either or both programs. This determination can be complicated, as most programs have different definitions of who is part of an eligible unit. In the Food Stamp Program, an eligible unit, or household, generally consists of all the persons who purchase food and prepare meals together. In TANF, the eligible unit (which states define in accordance with certain federal requirements) often includes only dependent children, their siblings, and the parents or other caretaker relatives. Consequently, a family member may be eligible for benefits in one program and ineligible for benefits in another program (a simplified sketch below makes this divergence concrete).

To ensure that time is divided among and allocated to the correct programs, most of the local staff we spoke with track the amount of time they spend working on different programs and report this information to financial managers.
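The divergence in eligible units described above can be made concrete with a deliberately oversimplified sketch; the rules below are caricatures of the statutory definitions, not an actual eligibility engine:

# Each household member is described by the attributes relevant to the two tests.
family = [
    {"name": "grandmother", "is_child": False, "is_caretaker": True,  "shares_meals": True},
    {"name": "adult uncle", "is_child": False, "is_caretaker": False, "shares_meals": True},
    {"name": "child 1",     "is_child": True,  "is_caretaker": False, "shares_meals": True},
    {"name": "child 2",     "is_child": True,  "is_caretaker": False, "shares_meals": True},
]

def food_stamp_household(members):
    # Food Stamps: generally everyone who purchases food and prepares meals together.
    return [m["name"] for m in members if m["shares_meals"]]

def tanf_unit(members):
    # TANF (simplified): dependent children, their siblings, and caretaker relatives.
    return [m["name"] for m in members if m["is_child"] or m["is_caretaker"]]

print(food_stamp_household(family))   # all four members
print(tanf_unit(family))              # excludes the adult uncle

Here the adult uncle belongs to the food stamp household but falls outside the TANF unit, which is exactly the kind of divergence a caseworker must sort through for each applicant family.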
Local financial managers then determine what portion of staff's time is defined as administrative costs in each of the programs and charge the programs appropriately. Our current and previous reviews show that these involved procedures stem, in part, from programs having different target populations, different federal funding silos, and different statutory and regulatory requirements. Excessive time spent working through complex procedures can consume resources and diminish staff's ability to focus on other activities that preserve program effectiveness and integrity. Many of the state and local officials we visited emphasized that one of the best ways to balance cost savings with program integrity and effectiveness is to simplify program eligibility determination processes and funding structures across programs:

Simplify Eligibility Determination Processes: According to state and local officials, the complexity of and variation in financial eligibility rules have contributed to time-consuming and duplicative administrative processes. The administrative processes can be particularly time-consuming when caseworkers determine eligibility for more than one program at a time, a common practice when applicants may be eligible for multiple programs. The issues raised during our site visits echo what we reported to Congress in 2001. In that report, we identified federal statutes and regulations as a source of some of this variation, although states do have considerable flexibility, especially in programs such as TANF, in setting eligibility rules. Some states have taken advantage of recent changes and additional flexibility granted by the federal government to simplify eligibility determination processes across programs. For example, states may automatically extend eligibility to food stamp applicants based on their participation in the TANF cash assistance program—a provision referred to as "categorical eligibility." Another way states have simplified eligibility processes is by aligning program rules. For example, officials in Maryland told us that they took advantage of the flexibility offered by the Farm Security and Rural Investment Act of 2002 (the "Farm Bill") and matched the Food Stamp Program rules for counting income and assets to TANF and Medicaid rules. This allows them, for example, to determine the value of a car the same way across programs. Maryland officials believe that this change has helped them to provide benefits and services more quickly and accurately. While some states have taken advantage of such flexibility, others have not. In a 2004 report on state implementation of the Farm Bill's options, we concluded that although federal law and program rules allowed states to align food stamp reporting rules with those of other assistance programs, state officials in most states had not made the broad changes that would result in greater consistency among programs. State decisions to increase program alignment may have been hindered by concerns about the cost of programming the changes into state computer systems and the possibility that benefit costs would increase. On the other hand, savings could result from reducing the administrative burden on caseworkers. Ultimately, it is not clear whether costs would rise or savings would be realized.

Simplify Funding Structures: Because the programs are financially supported through different federal funding streams and mechanisms, state officials argue that serving the needs of families comprehensively and efficiently is difficult.
As with the variation in eligibility rules, program administrators face an array of different funding sources associated with different federal programs, each with its own financial reporting requirements, time frames, and other rules. Often, to meet individuals' or families' needs, states fund a range of services drawn from multiple programs and funding sources. For example, to provide child care subsidies, some states use funding from both TANF and CCDF to assist families, but very different rules apply to reporting requirements and funding restrictions, complicating program administration. Many believe that being able to draw funds from more than one federal assistance program, simplifying the administrative requirements for managing those funds, consolidating small grants, or standardizing administrative spending caps across programs would ease states' administrative workload and reduce administrative spending. To experiment with simplifying funding structures, Ohio's child welfare department officials told us they received a waiver in 1997 to implement a flexible-funding demonstration project. Participating counties received a monthly allotment to fund any child services free of any eligibility and allocation restrictions. According to Ohio state officials, during the first 6 years of the demonstration, 11 of the 14 counties operated below average costs, resulting in a total savings of $33 million.

The need to simplify program policies, including those related to eligibility determination processes and funding structures, has been voiced recurrently for the past several decades. As far back as the 1960s, studies and reports have called for changes to human service programs, and we issued several reports during the 1980s that focused on welfare simplification. In the early 1990s, a national commission as well as a congressionally created advisory commission suggested ways to simplify policies and procedures, including steps such as developing a common framework for streamlining eligibility requirements, formulating standard definitions, and easing administrative and documentation requirements.

To address these issues, Congress has acted in the past to simplify the federal grant system. For example, the Omnibus Budget Reconciliation Act of 1981 consolidated a number of human service programs into several block grants that allowed for greater state and local autonomy and flexibility in designing strategies to address federal objectives. More recently, in 1996, Congress replaced the previous welfare program with the TANF block grant and consolidated several child care programs into one program, providing states with additional flexibility to design and operate programs. In addition, numerous pilot and demonstration projects have given particular states and localities flexibility to test approaches to integrating and coordinating services across a range of human service programs.

While the need for simplification of program policies has been widely acknowledged, there has also been a general recognition that achieving substantial improvements in this area is exceptionally difficult. For example, implementing systematic changes to the federal rules for human service programs can be challenging because of differences among federal program goals and purposes and because jurisdiction for these programs is spread among numerous congressional committees and federal agencies.
An additional challenge to systematic policy simplification efforts is the lack of information on the costs and effects of these efforts. In particular, the lack of information on the potential cost to the federal government of streamlining policies has been a limiting factor in moving forward in this area. In our 2001 report, we concluded that determining eligibility across human service programs is cumbersome and can be simplified; however, we said that additional information was needed about the effects these simplification efforts would have on both program and administrative costs. Similarly, a Congressional Research Service review of pilot and demonstration projects on service integration strategies—one way to simplify policies—found that there was little information on the cost-effectiveness of these strategies. Information is also lacking on the potential effects that streamlining policies would have on various populations. Streamlining policies could expand client access and increase caseloads, but it could also limit access for particular populations, depending on which policies were adopted. In addition, no definitive information exists to demonstrate the type and extent of changes that might result in reduced administrative costs or to demonstrate how strategies might work differently in different communities. To help address this issue, we asked Congress in our 2001 report to consider authorizing state and local demonstration projects designed to simplify policies.

Recent legislative proposals in both the House and the Senate have sought to increase states' abilities to waive federal program rules to address program simplification issues, although some provisions have been criticized by policy makers and others for allowing states too much latitude to set aside federal rules considered important to protecting program integrity and services to people in need. One legislative proposal, included as part of broader welfare legislation, passed in the House but has not been enacted into law.

Our previous and current work indicates that the federal government can help streamline processes and potentially reduce long-term costs by facilitating technology enhancements across programs. Technology plays a central role in the management of human service programs, and keeping up with technological advancements offers opportunities for improving the administration of human services. Recognizing the importance of automated systems in state-administered federal human service programs, Congress has for more than 2 decades provided varying levels of federal funding to encourage states to implement certain systems to improve the efficiency of some programs. The federal agencies have also played a role in helping states implement IT systems to streamline their processes. For example, agencies responsible for child welfare, CSE, and the Food Stamp Program must review and approve states' IT planning documents before federal technology funds are passed down to states. In contrast, no specific federal regulations guide states' use of federal TANF or CCDF funds for IT systems. DOL provides some technical assistance to states under the UI program.

With congressional and federal support, states have increasingly relied on technology to streamline their program processes. Having modern systems that support multiple human service programs is important for streamlining eligibility processes and providing timely and accurate services.
For example, the counties we visited in California developed a single IT system, known as CalWIN. This system—like others around the country—replaced several separate IT systems and automated the eligibility determination processes across multiple complex programs, such as TANF, the Food Stamp Program, and Medicaid. According to some officials, while the new system has experienced some problems, it has already improved program integrity and is intended to reduce administrative costs. Additionally, many believe that sharing data across programs and agencies can further streamline processes. Data-sharing arrangements allow programs to share client information that they otherwise would each collect and verify separately, thus reducing duplicative effort, saving money, and improving integrity. For example, by receiving verified electronic data from SSA, state human service offices are able to determine SSI recipients' eligibility for food stamp benefits without having to separately collect and verify applicant information. According to South Carolina officials we interviewed, this arrangement saves administrative dollars and reduces duplicative effort across programs.

While most agree that IT can help streamline processes, our previous and current work shows that technology projects are difficult, and many fall short of expectations, creating additional work for staff or compromising program integrity. Although many states' computer systems for TANF, the Food Stamp Program, and Medicaid are already integrated, we found that states are often using IT systems that are outdated and error-prone and that do not effectively share information with other human service programs. This compounds the challenges of updating technology in a human services environment that increasingly requires coordination across programs. For example, the Michigan Department of Human Services is in the process of implementing a new integrated IT system that is intended to replace the three systems that staff currently use to process eligibility for several programs. Michigan officials explained that the third system was initially intended to replace the other two systems. However, due to political and financial reasons and a lack of commitment, only the first phase of the project was implemented. As a result, the system failed to replace the other systems and created an additional step in the enrollment process.

The ability to share data across programs also may be limited by laws that have been established to protect individuals' privacy. For example, while state CSE programs sometimes utilize information from other federal programs, they are often prohibited by law from sharing information about their own clients. Michigan CSE officials noted that the CSE program must protect its clients' information because it handles money from private citizens rather than providing government benefits. Another concern regarding efforts to further enhance technology is that there is limited information available on the cost-effectiveness of some of these systems.

Finally, our previous collaborative work with other organizations highlighted challenges related to obtaining federal funding for information systems. To the extent that state IT systems support more than one human services program, state cost allocation plans for systems development and acquisition projects must be approved by each federal agency expected to provide funding, in addition to the regular approval process.
To address concerns about IT systems funding and to identify other ways that the federal government may facilitate states' technology improvements, we recommended in April 2000 that a multiagency federal working group be created. In response to this recommendation, as well as state complaints about the approval process, agencies within HHS and USDA convened a working group to improve the federal approval process. This group made some progress in identifying needed changes to the federal process. However, after about a year of work, the progress stalled, due to changes in leadership at the agencies involved and a lack of consensus among the federal partners about the direction to take in improving the federal process. This helps to highlight the challenges involved in identifying and reaching agreement on needed improvements to existing processes, particularly when multiple programs and agencies are involved. More information on specific barriers that states face when attempting to make technology improvements when federal funds are involved could help facilitate progress in this area.

Progress on technology improvements could be further facilitated through greater collaboration across program agencies and levels of government. At the time of our visit, officials in Ohio said that in their efforts to replace their outdated IT system for TANF and the Food Stamp Program, they would have appreciated more information about what other states were doing to implement more efficient and economical IT systems. Officials stated that while they had talked to other states about their experiences developing IT systems, more comprehensive information on best practices would save states time and money. Some agencies are successfully collaborating and sharing best practices. For example, counties that we visited in California successfully shared technology information that helped to save resources. Officials in San Mateo County, California, reported that by automating the case management process for the Medicaid and Food Stamp programs through call centers, they avoided spending additional dollars on personnel costs associated with rising caseloads. According to county officials, this strategy has helped them reduce staff's workload, avoid increased administrative costs, and increase integrity across programs. Officials in nearby Santa Cruz County reported that they adopted this strategy after learning of its effectiveness in San Mateo County. Michigan UI officials reported that DOL has a strong partnership with state UI agencies. DOL sponsors a central online forum for sharing technology information, and Michigan UI officials noted that this effort provides a venue for exchanging ideas and learning from the mistakes of others. Through this initiative, DOL has helped states develop call centers and has developed sample computer programming code for Internet claims systems, which it shared with states. Further sharing of technology strategies like these across programs, states, and agencies potentially could yield more cost savings elsewhere.

To use taxpayer dollars responsibly, federal programs must be administered in a cost-efficient manner. Administrative costs are an important component of the total cost of providing supports to vulnerable people. This report shows slow but steady increases in administrative spending among many of the human service programs included in this review, although administrative spending increased at a lower rate than total program spending.
Spending data cannot be compared across programs due to programmatic differences, and little information is available regarding what level of administrative spending for human service programs is appropriate. Even so, there are opportunities available to the federal government to assist state and local governments in better identifying and implementing cost-saving initiatives that also ensure accurate and timely provision of benefits and services. However, minimal information is available on which opportunities are most effective and what any actual cost savings might be. The costs associated with running human service programs have been a long-standing concern for policy makers interested in maximizing the dollars that go directly to helping vulnerable people. While all levels of government have made efforts to reduce the time and money required to run these programs, the extent to which these efforts have actually reduced costs is unclear. This is especially true with efforts to streamline processes across programs by simplifying program rules and facilitating technology enhancements. Simplifying policies across programs may increase or decrease the number of eligible individuals, which in turn may affect program costs. Technology enhancements likely come with start-up costs and may initially create additional work for staff. Because of the complexities of such strategies for streamlining processes, there are no easy solutions for reducing administrative costs. However, it is appropriate to move forward to test the cost-effectiveness of various strategies. Only then can more systematic approaches be taken to maximize the dollars that are spent to run human service programs.

Our previous work recommended that Congress consider authorizing state and local demonstration projects designed to simplify and coordinate eligibility determination. Maintaining the status quo of stovepiped program rules and policies related to eligibility determination and other processes will continue to result in program rules that are complex and vary across programs and in processes that are duplicative and cumbersome to administer. Providing states with demonstration opportunities would allow them to challenge the current stovepipes and open the door to new cost-efficient approaches for administering human service programs. Demonstration projects would allow for testing and evaluating new approaches that aim to balance cost savings with program effectiveness and integrity. The information from these evaluations would help the federal government determine which strategies are most effective without investing time and resources in unproven strategies. Congress can allow such approaches to thrive not only by giving states opportunities to test them but also by following up to identify and implement successful strategies. While it may be difficult to fully determine the extent to which observed changes are the result of the demonstration projects, such projects would be useful to identify lessons learned. Members of Congress have recognized the usefulness of demonstration projects, and both the House and Senate have considered proposals to authorize such demonstration projects, although legislation has not been enacted. Therefore, continued efforts are needed to move forward.
As suggested in our prior work (GAO-02-58), we continue to believe that Congress should consider authorizing state and local demonstration projects designed to streamline and coordinate eligibility determination and other processes for federal human service programs. Such projects would provide states and localities with opportunities to test the cost-effectiveness of changes designed to simplify or align program rules, expand data sharing across agencies, or enhance information technology systems to facilitate eligibility determinations and other processes. Once authorized, states, localities, or both could submit proposals for demonstration projects, and relevant federal agencies, working in a coordinated manner, could review them, suggest modifications as needed, and make final approval decisions. Federal agencies should consider certain criteria for the demonstration projects, including oversight and internal controls to help ensure that effectiveness and integrity are preserved and vulnerable populations are protected. Demonstration projects would include waivers of federal statutes and regulations as needed and deemed appropriate. While our review covered seven federal support programs, we are not suggesting that the demonstration projects must include all of these programs or exclude others. States should be given the opportunity to try various approaches aimed at streamlining processes that consider all feasible programs. Projects must be given sufficient time to be fully implemented and must include an evaluation component. Cost neutrality of both administrative and program costs would be most desirable for federal approval of these projects. However, projects should not be rejected solely because they are unable to guarantee cost neutrality over the short run. It would be expected that, over a period of time, state and federal efforts to streamline processes would create administrative cost savings that could help offset any increased program costs. Evaluations of the projects should include an analysis of whether administrative cost savings were indeed achieved in the long run, which specific laws or regulations were waived to facilitate the project, and whether the effectiveness and integrity of program services were maintained. To enhance the information from each of the projects, Congress should consider authorizing a capping report that would compile information from each of the individual demonstration projects and identify lessons learned.

We shared a draft of this report with HHS, USDA, and DOL for comment. HHS agreed with the report's emphasis on the need for cost-effective administration of federal programs and noted that HHS has taken steps to increase cost-effectiveness in a number of the programs it oversees. HHS also provided a number of specific examples of Child Care Bureau efforts. HHS's written comments appear in appendix III. In their comments, officials from the USDA Food and Nutrition Service suggested that, in order to acknowledge the complexity of the Food Stamp Program, we add more detailed information to the report on several topics, including differences in administrative cost definitions, how programmatic requirements may affect costs, state-by-state cost comparisons, program-level impact analyses of past proposed changes to eligibility rules, and strategies for facilitating technology.
We added more information where appropriate, although our focus in this report remains on a national perspective across programs rather than on in-depth, program-specific or state-level analyses. In addition, the officials questioned the use of the GDP to adjust for inflation and stated that staff salaries and benefits constitute a large proportion of total costs. As we state in the report, we used nominal dollars to discuss historical administrative spending. In addition to nominal dollars, we used the GDP to discuss the percent change in spending over time. Recognizing that staff salaries and benefits make up a large portion of spending, we also used DOL's Employment Cost Index to discuss how average salaries and benefits for state and local government workers changed over time. DOL, as well as HHS, provided technical comments, which we incorporated in the report where appropriate. None of the agencies commented directly on the matter for congressional consideration.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days after its issue date. At that time, we will send copies of this report to the Secretaries of the Departments of Agriculture, Health and Human Services, and Labor; relevant congressional committees; and others who are interested. Copies will be made available to others upon request, and this report will also be available on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Additional GAO contacts and acknowledgments are listed in appendix IV.

We designed our study to provide information on (1) how administrative costs are defined in selected programs and what rules govern federal and state participation in funding these costs; (2) what is known about the amounts of federal and state administrative spending for selected programs and how they have changed over time; and (3) what opportunities exist at the federal level to help states balance cost savings with program effectiveness and integrity. To obtain information on these issues, we compiled expenditure data for each of the programs covered in this review, conducted state and local site visits, interviewed federal program officials, and reviewed relevant laws, regulations, and reports. We focused our study on seven key programs: Adoption Assistance, Child Care & Development Fund (CCDF), Child Support Enforcement (CSE), the Food Stamp Program, Foster Care, Temporary Assistance for Needy Families (TANF), and Unemployment Insurance (UI). We issued two related reports in June and July 2006 that focused on the administrative costs of the Adoption Assistance and Foster Care programs (GAO-06-649, Foster Care and Adoption Assistance: Federal Oversight Needed to Safeguard Funds and Ensure Consistent Support for States' Administrative Costs, June 2006) and the Child Support Enforcement program (GAO-06-491, Child Support Enforcement: More Focus on Labor Costs and Administrative Cost Audits Could Help Reduce Federal Expenditures, July 2006). We coordinated our data collection efforts for all three reports, so some of the information on the CSE, Adoption Assistance, and Foster Care programs in this report is drawn from work conducted for the earlier reports.
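The inflation-adjustment approach discussed above, in which nominal spending changes are compared against changes measured with a broad price index, can be illustrated with a minimal sketch. The spending amounts and index values below are invented placeholders, not data from this report.

```python
# Hypothetical nominal administrative spending (in millions) and a price index
# (e.g., a GDP-based deflator; values here are illustrative, not actual data).
nominal_spending = {2000: 100.0, 2004: 120.0}
price_index = {2000: 1.00, 2004: 1.10}  # 10 percent cumulative inflation

def percent_change(start: float, end: float) -> float:
    return (end - start) / start * 100.0

# Nominal change overstates real growth when prices are rising.
nominal_growth = percent_change(nominal_spending[2000], nominal_spending[2004])

# Deflate each year to constant (2000) dollars before comparing.
real_spending = {yr: amt / price_index[yr] for yr, amt in nominal_spending.items()}
real_growth = percent_change(real_spending[2000], real_spending[2004])

print(f"Nominal growth: {nominal_growth:.1f}%")  # 20.0%
print(f"Real growth:    {real_growth:.1f}%")     # about 9.1%
```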
As part of coordinating the three reports, for example, we supplemented our data collection efforts for this report with spending data collected for the other two reports as well as with information collected from interviews conducted for those reports. We also coordinated efforts to assess the reliability of the administrative spending data used in the three reports. We conducted our work between July 2005 and August 2006 in accordance with generally accepted government auditing standards.

We obtained spending data for each of the seven programs from the departments of Agriculture, Health and Human Services, and Labor. We analyzed administrative spending data for each program, as defined for financial reporting purposes by program laws and regulations, for fiscal years 2000-2004, including federal and state shares of spending. Fiscal year 2004 data were the most current data available at the time of our review. We also analyzed 2004 state spending data to learn about the variations in spending across states. We assessed the reliability of the administrative spending data by interviewing (1) agency officials knowledgeable about the data and (2) state officials in the five states we visited knowledgeable about the data as reported to the federal government. We also reviewed state single audit reports and talked to state auditors in the five states we visited to identify any known problems with the administrative spending data or the systems that store the data. Our reviews and discussions did not identify significant problems with the data. We determined that these data were sufficiently reliable for the purposes of this report.

We visited state agencies and county offices in five states—California, Maryland, Michigan, Ohio, and South Carolina. The counties we visited were San Mateo and Santa Cruz Counties in California, Wicomico County in Maryland, Wayne County in Michigan, Butler and Licking Counties in Ohio, and Newberry County in South Carolina. We selected the states to provide a range of total program spending and share of spending on administration, as well as a mixture of state and county administrative structures, urban and rural demographics, and geography. Although our selection includes a range of states, our findings are not generalizable beyond the states included in our study. In each of the five states, we visited at least one county office to talk to county program officials and local staff. We developed a questionnaire to capture the types of administrative activities that occur in each program at the state and local levels. We asked the state and local program officials we visited to fill out the questionnaire, and we analyzed the results to learn how administrative activities compared across the programs. We interviewed state and local program officials and staff about administrative activities, costs, options for reducing costs while preserving services, and challenges to and consequences of these options. During the interviews, we also inquired about any interactions between our key programs and other programs that support vulnerable people, including Medicaid, the State Children's Health Insurance Program (SCHIP), the Low-Income Home Energy Assistance Program (LIHEAP), and housing programs. We interviewed federal program officials at the departments of Agriculture, Health and Human Services, and Labor and at the Office of Management and Budget about administrative costs, options for reducing costs while preserving services, and challenges to and consequences of these options.
In addition, we conducted phone interviews with state audit officials from the five states about any similar work they had conducted. We also discussed our objectives with representatives from the American Public Human Services Association, Center for Law and Social Policy, and National Governors Association. These discussions covered each of the objectives, and the participants shared their views and insights. We reviewed laws and regulations on definitions of administrative costs and federal/state participation in funding these costs for the selected programs. We also reviewed relevant circulars issued by the Office of Management and Budget. We obtained and reviewed A-133 state single audit reports for the states we visited. In addition, we reviewed documents and reports prepared by the Center for Law and Social Policy, Congressional Research Service, and other research organizations as well as several prior GAO reports.

Examples of costs defined as administrative under selected programs include the following:
- Adoption Assistance and Foster Care: all expenditures of a state to plan, design, develop, install, and operate the statewide automated child welfare information system (without regard to whether the system may be used with respect to foster or adoptive children other than those on behalf of whom foster care maintenance or adoption assistance payments may be made).
- TANF: implementing and operating the immigration status verification system; activities related to the tracking and monitoring of TANF requirements (e.g., for a personnel and payroll system for state staff); and coordination of programs, including contract costs and all indirect or overhead costs.
- UI: the cost of mailing unemployment compensation statements, even if information about the earned income credit is mailed along with it (except that a portion of the mailing costs may be counted as a non-administrative cost if the inclusion of materials related to the tax credit increases the postage required to mail the information).
- Costs for the goods and services required for administration of the program, such as the costs for supplies, equipment, travel, postage, utilities, and rental of office space and maintenance of office space, provided that such costs are not excluded as a direct administrative cost for providing program services.
- Salaries and benefits of staff performing administrative and coordination functions (but not salaries and benefits for program staff).

The following staff members made major contributions to the report: Heather McCallum Hahn (Assistant Director), Cady S. Panetta (Analyst-in-Charge), David Bellis, William Colvin, Cheri Harrington, Gale Harris, Sheila McCoy, Luann Moy, and Tovah Rom.

Child Support Enforcement: More Focus on Labor Costs and Administrative Cost Audits Could Help Reduce Federal Expenditures. GAO-06-491. Washington, D.C.: July 6, 2006.
Foster Care and Adoption Assistance: Federal Oversight Needed to Safeguard Funds and Ensure Consistent Support for States' Administrative Costs. GAO-06-649. Washington, D.C.: June 15, 2006.
Means-Tested Programs: Information on Program Access Can Be an Important Management Tool. GAO-05-221. Washington, D.C.: March 11, 2005.
Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004.
TANF and Child Care Programs: HHS Lacks Adequate Information to Assess Risk and Assist States in Managing Improper Payments. GAO-04-723. Washington, D.C.: June 18, 2004.
Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004.
Human Services: Federal Approval and Funding Processes for States' Information Systems. GAO-02-347T. Washington, D.C.: July 9, 2002.
Welfare Reform: States Provide TANF-Funded Work Support Services to Many Low-Income Families Who Do Not Receive Cash Assistance. GAO-02-615T. Washington, D.C.: April 10, 2002.
Food Stamp Program: States' Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002.
Human Services Integration: Results of a GAO Cosponsored Conference on Modernizing Information Systems. GAO-02-121. Washington, D.C.: January 31, 2002.
Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001.
Food Stamp Program: Program Integrity and Participation Challenges. GAO-01-881T. Washington, D.C.: June 27, 2001.
Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity. GAO-01-272. Washington, D.C.: January 19, 2001.
Benefit and Loan Programs: Improved Data Sharing Could Enhance Program Integrity. GAO/HEHS-00-119. Washington, D.C.: September 13, 2000.
Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort. GAO/HEHS-00-48. Washington, D.C.: April 27, 2000.
Food Stamp Program: States Face Reduced Federal Reimbursements for Administrative Costs. GAO/RCED/AIMD-99-231. Washington, D.C.: July 23, 1999.
Food Stamp Program: Various Factors Have Led to Declining Participation. GAO/RCED-99-185. Washington, D.C.: July 2, 1999.
Welfare Reform: Few States Are Likely to Use the Simplified Food Stamp Program. GAO/RCED-99-43. Washington, D.C.: January 29, 1999.
Welfare Programs: Opportunities to Consolidate and Increase Program Efficiencies. GAO/HEHS-95-139. Washington, D.C.: May 31, 1995.
Means-Tested Programs: An Overview, Problems, and Issues. GAO/T-HEHS-95-76. Washington, D.C.: February 7, 1995.
Welfare Simplification: States' Views on Coordinating Services for Low-Income Families. GAO/HRD-87-110FS. Washington, D.C.: July 29, 1987.
Welfare Simplification: Thirty-Two States' Views on Coordinating Services for Low-Income Families. GAO/HRD-87-6FS. Washington, D.C.: October 30, 1986.
Welfare Simplification: Projects to Coordinate Services for Low-Income Families. GAO/HRD-86-124FS. Washington, D.C.: August 29, 1986.
Needs-Based Programs: Eligibility and Benefit Factors. GAO/HRD-86-107FS. Washington, D.C.: July 9, 1986.
The cost of administering human service programs has been a long-standing concern among policy makers interested in ensuring that federal programs are run in a cost-efficient manner so that federal funds go directly to helping vulnerable people. Little is known about how administrative costs compare among programs, or about opportunities to better manage these costs. GAO looked at (1) how administrative costs are defined and what rules govern federal and state participation in funding these costs; (2) what is known about the amounts of administrative spending and how they have changed over time; and (3) what opportunities exist at the federal level to help states balance cost savings with program effectiveness and integrity. GAO's review included seven programs: Adoption Assistance, Child Care and Development Fund (CCDF), Child Support Enforcement (CSE), food stamps, Foster Care, Temporary Assistance for Needy Families (TANF), and Unemployment Insurance (UI). To address the questions, GAO reviewed laws, analyzed spending data, and visited five states. The statutes and regulations for the seven programs define administrative costs differently, even though many of the same activities are performed to administer the programs. The laws for each program also include different mechanisms for state and federal participation in funding administrative costs, including matching rates, block grants, and spending caps. The seven programs combined spent $21 billion on administration, as defined in law, making up about 18 percent of total program spending in fiscal year 2004. However, amounts varied widely across the programs and states. Administrative spending varied from 2 percent in CCDF to 58 percent in Foster Care, with the exception of CSE, in which all program spending is considered administrative. Between fiscal years 2000 and 2004, administrative spending increased in five of the seven programs, generally at a lower rate than total program spending. The federal government may help balance administrative cost savings with program effectiveness and integrity by simplifying policies and facilitating technology improvements. Simplifying policies—especially those related to eligibility determination processes and federal funding structures—could save resources, improve productivity, and help staff focus more time on performing essential program activities. By helping states facilitate technology enhancements across programs, the federal government can help streamline processes and potentially reduce long-term costs. Over the past 20 years, many attempts to streamline processes across programs have had limited success due, in part, to the considerable challenges that streamlining program processes entails. GAO believes one challenge in particular—the lack of information on the effect streamlining efforts might have on program and administrative costs—is thwarting progress in this area.
Many of NASA's projects are one-time articles, meaning that there is little opportunity to apply knowledge gained to the production of a second, third, or future increment of spacecraft. In addition, NASA often works with domestic partners and other space-faring countries, including several European nations, Japan, and Argentina. These partnerships go a long way to foster international cooperation in space, but they also subject NASA projects to added risk, such as when partners do not meet their obligations or run into technical obstacles they cannot easily overcome. While space development programs are complex and difficult by nature, and most are one-time efforts, the nature of its work should not preclude NASA from achieving what it promises when requesting and receiving funds. We have reported that NASA would benefit from a more disciplined approach to its acquisitions. The development and execution of a knowledge-based business case for these projects can provide early recognition of challenges, allow managers to take corrective action, and place needed and justifiable projects in a better position to succeed.

Our studies of best practice organizations show the risks inherent in NASA's work can be mitigated by developing a solid, executable business case before committing resources to a new product development. In its simplest form, this is evidence that (1) the customer's needs are valid and can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources—that is, proven technologies, design knowledge, adequate funding, and adequate time to deliver the product when needed. A program should not go forward into product development unless a sound business case can be made. If the business case measures up, the organization commits to the development of the product, including making the financial investment. Our best practice work has shown that developing business cases based on matching requirements to resources before program start leads to more predictable program outcomes—that is, programs are more likely to be successfully completed within cost and schedule estimates and deliver anticipated system performance.

At the heart of a business case is a knowledge-based approach to product development that is a best practice among leading commercial firms. Those firms have created an environment and adopted practices that put their program managers in a good position to succeed in meeting expectations. A knowledge-based approach requires that managers demonstrate high levels of knowledge as the program proceeds from technology development to system development and, finally, production. In essence, knowledge supplants risk over time. This building of knowledge can be described over the course of a program as follows: When a project begins development, the customer's needs should match the developer's available resources—mature technologies, time, and funding. An indication of this match is the demonstrated maturity of the technologies needed to meet customer needs—referred to as critical technologies. If the project is relying on heritage—or pre-existing—technology, that technology must be in appropriate form, fit, and function to address the customer's needs within available resources. The project will normally enter development after completing the preliminary design review, at which time a business case should be in hand.
Then, about midway through the product's development, its design should be stable and demonstrate that it is capable of meeting performance requirements. The critical design review takes place at that point in time because it generally signifies when the program is ready to start building production-representative prototypes. If design stability is not achieved but product development continues, costly re-designs to address changes to project requirements and unforeseen challenges can occur. Finally, by the time of the production decision, the product must be shown to be producible within cost, schedule, and quality targets and have demonstrated its reliability, and the design must demonstrate that it performs as needed through realistic system-level testing. Lack of testing increases the possibility that project managers will not have information that could help avoid costly system failures in late stages of development or during system operations.

Our best practices work has identified numerous other actions that can be taken to increase the likelihood that a program can be successfully executed once that business case is established. These include ensuring cost estimates are complete, accurate, and updated regularly and holding suppliers accountable through such activities as regular supplier audits and performance evaluations of quality and delivery. Moreover, we have recommended using metrics and controls throughout the life cycle to gauge when the requisite level of knowledge has been attained and to direct decision makers to consider criteria before advancing a program to the next level and making additional investments.

The consequence of proceeding with system development without establishing and adhering to a sound business case is substantial. GAO and others have reported that NASA has experienced cost and schedule growth in several of its projects over the past decade, resulting from problems that include failing to adequately identify requirements and underestimating complexity and technology maturity. We have found that the need to meet schedule is one of the main reasons why programs cannot execute as planned. Shortcuts, such as developing technology while design work and construction are already underway and delaying or reducing tests, are taken to meet schedule. Ultimately, when a schedule is set that cannot accommodate the work that needs to be done, costs go up and capability is delayed. Delaying the delivery of these capabilities can also have a ripple effect throughout NASA projects, as staff must then stay on a given project longer than intended, thus increasing the project's costs and crippling other projects that had counted on using newly available staff to move forward.

In 2005, we reported that NASA's acquisition policies did not conform to best practices for product development because those policies lacked major decision reviews at several key points in the project life cycle that would allow decision makers to make informed decisions about whether a project should be authorized to proceed in the development life cycle. Based in part on our recommendations, NASA issued a revised policy in March 2007 that institutes several key decision points (KDP) in the development life cycle for space flight programs and projects. At each KDP, a decision authority is responsible for authorizing the transition to the next life-cycle phase for the project.
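A KDP review of this kind can be thought of informally as a gate check against knowledge criteria. The sketch below encodes two best-practice metrics discussed in this report, technology maturity by the preliminary design review and drawing release by the critical design review; the thresholds reflect the GAO criteria described later in this report, while the project data and the pass/fail logic are simplified illustrations, not NASA's actual review process.

```python
from dataclasses import dataclass

@dataclass
class ProjectStatus:
    name: str
    min_critical_technology_trl: int  # lowest TRL among critical technologies
    drawings_released_pct: float      # share of engineering drawings released

# Best-practice thresholds discussed in this report (GAO criteria, simplified):
TRL_REQUIRED_AT_PDR = 6        # fully integrated prototype in a relevant environment
DRAWINGS_REQUIRED_AT_CDR = 90  # percent of engineering drawings released

def gate_check(project: ProjectStatus, review: str) -> list[str]:
    """Return the knowledge shortfalls a decision authority would weigh at a review."""
    shortfalls = []
    if review == "PDR" and project.min_critical_technology_trl < TRL_REQUIRED_AT_PDR:
        shortfalls.append(
            f"critical technology at TRL {project.min_critical_technology_trl} "
            f"(< {TRL_REQUIRED_AT_PDR})")
    if review == "CDR" and project.drawings_released_pct < DRAWINGS_REQUIRED_AT_CDR:
        shortfalls.append(
            f"{project.drawings_released_pct:.0f}% of drawings released "
            f"(< {DRAWINGS_REQUIRED_AT_CDR}%)")
    return shortfalls

# Hypothetical project resembling the patterns described in this report.
example = ProjectStatus("example-project", min_critical_technology_trl=5,
                        drawings_released_pct=40.0)
for review in ("PDR", "CDR"):
    issues = gate_check(example, review)
    print(review, "->", "proceed" if not issues else f"knowledge gaps: {issues}")
```

In an actual review, a decision authority would weigh such shortfalls alongside many other criteria rather than applying a mechanical pass/fail rule.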
In addition, NASA's acquisition policies require that technologies be sufficiently mature at the preliminary design review before the project enters implementation, that the design be appropriate to support proceeding with full-scale fabrication, assembly, integration, and test at the critical design review, and that the system can be fabricated within cost, schedule, and performance specifications. These changes brought the policy more in line with best practices for product development. A more detailed discussion of NASA's acquisition policy and how it relates to best practices is provided in appendix III of this report.

Further, in response to GAO's designation of NASA acquisition management as a high risk area, NASA developed a corrective action plan to improve the effectiveness of NASA's program/project management. The approach focuses on how best to ensure the mitigation of potential issues in acquisition decisions and better monitor contractor performance. The plan identifies five areas for improvement—program/project management, cost reporting processes, cost estimating and analysis, standard business processes, and management of financial management systems—each of which contains targets and goals to measure improvement. As part of this initiative, NASA has taken a positive step to improve management oversight of project cost, schedule, and technical performance with the establishment of a baseline performance review reporting to NASA's senior management. Through monthly reviews, NASA intends to highlight projects that are predicted to exceed internal NASA cost and/or schedule baselines, which are set lower than the cost and schedule baselines submitted to Congress, so the agency can take preemptive actions to minimize the projects' potential cost overruns or schedule delays. During our data collection efforts, we reviewed several projects' monthly and quarterly status reports, which gave us insight into their status, risks, and issues. While this reporting structure might enable management to be aware of the issues projects are facing, it is too early to tell if the monthly reviews are having the intended impact of enabling NASA management to take preemptive cost-saving actions, such as delaying a design review or canceling a project.

As a part of the continuing effort to improve its acquisition processes, NASA has begun a new initiative—Joint Cost and Schedule Confidence Levels (JCL)—to help programs and projects with management, cost and schedule estimating, and maintenance of adequate levels of reserves. Under this new policy, cost, schedule, and risk are combined into a complete picture to help inform management of the likelihood of a project's success. Utilizing JCL, each project will receive a cost estimate with a corresponding confidence level—the percentage probability representing the likelihood of success at the specified funding level. NASA believes the application of this policy will help reduce cost and schedule growth in its portfolio, improve transparency, and increase the probability of meeting expectations. NASA's goal is for all projects that have entered the implementation phase to have a JCL established by spring 2010. While these efforts are positive steps, it is too early to assess their impact, and they will be limited if project officials are not held accountable for demonstrating the elements of a knowledge-based business case at key junctures in development.
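The JCL concept pairs a funding and schedule level with the probability of finishing within both. The following minimal Monte Carlo sketch illustrates that idea; the triangular distributions, the shared risk factor used to couple cost and schedule, and the dollar and month figures are all illustrative assumptions, not NASA's actual JCL model.

```python
import random

random.seed(1)

def simulate_project(n_trials: int = 100_000):
    """Draw joint cost/schedule outcomes; a shared risk factor couples the two."""
    outcomes = []
    for _ in range(n_trials):
        risk = random.random()  # common driver: technical trouble raises both
        cost = random.triangular(900, 1500, 1050) * (1 + 0.2 * risk)   # $M
        months = random.triangular(48, 84, 60) * (1 + 0.15 * risk)     # schedule
        outcomes.append((cost, months))
    return outcomes

def joint_confidence(outcomes, cost_cap: float, schedule_cap: float) -> float:
    """Fraction of trials finishing within BOTH the cost and schedule caps."""
    hits = sum(1 for c, m in outcomes if c <= cost_cap and m <= schedule_cap)
    return hits / len(outcomes)

outcomes = simulate_project()
level = joint_confidence(outcomes, cost_cap=1300.0, schedule_cap=75.0)
print(f"Joint confidence at $1,300M and 75 months: {level:.0%}")
```

Read this way, budgeting to a higher joint confidence level simply means choosing a cost cap further out in the simulated distribution, which is why the policy ties confidence levels to the maintenance of adequate reserves.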
For projects to have better outcomes, not only must they demonstrate a high level of knowledge at key junctures, but decision makers must also use this information to determine whether and how a project should proceed through the development life cycle. If done successfully, these measures should enable NASA to foster the expansion of a business-oriented culture, reduce persistent cost growth and schedule delays, and maximize investment dollars.

We assessed 19 large-scale NASA projects in this review. Four of these projects were in the formulation phase, where cost and schedule baselines have yet to be established, while 15 had entered implementation. Nine of the 15 projects experienced significant cost and/or schedule growth from their project baselines, while five of the remaining projects had just entered implementation and had their cost and schedule baselines established in fiscal year 2009. NASA provided cost and schedule data for 14 of the 15 projects in the implementation phase of the project life cycle. Although the Magnetospheric Multiscale (MMS) project is in implementation, NASA did not provide cost or schedule data for it. NASA will not formally release its baseline cost and schedule estimates for this project until the fiscal year 2011 budget submission to Congress, and late in our review process agency officials notified us that they will not provide project estimates to GAO until that time. NASA also did not provide formal cost and schedule information for the projects in formulation, citing that those estimates were still preliminary. See figure 1 for a summary of these projects.

Based on our analysis, development costs for projects in our review increased by an average of over 13 percent from their baseline cost estimates—including one project that increased by over 68 percent—and launch dates slipped by an average of almost 11 months. These averages were significantly higher when the five projects that just entered implementation are excluded. Specifically, there are 10 projects of analytical interest because (1) they are in the implementation phase and (2) their baselines are old enough to begin to track variances. Most of these 10 projects have experienced significant cost and/or schedule growth, often both. These projects had an average development cost growth of 18.7 percent—or almost $121.1 million—and schedule growth of over 15 months, and a total increase in development cost of over $1.2 billion. Over half of this total increase in development cost—or $706.6 million—occurred in the last year. This cost growth and these schedule delays have all occurred within the last 3 years, and a number of these projects had experienced considerable cost growth before baselines were established in response to the 2005 statutory reporting requirement. See table 1 below for the cost and schedule growth of the NASA projects in the implementation phase.

Despite having baselines established in fiscal year 2008, two projects have sought reauthorization from Congress because of development cost growth in excess of 30 percent. Congress reauthorized the Glory project in fiscal year 2009, and new cost and schedule baselines were established after the project experienced a 53 percent cost growth and 6-month launch delay from original baseline estimates. The Glory project has since breached its revised schedule baseline by 16 months and exceeded its development cost baseline by over 14 percent—for a total development cost growth of over 75 percent in just 2 years.
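Growth figures such as these are computed against the baselines established at project confirmation. A small sketch of the arithmetic follows, using index values consistent with the Glory history cited above; the numbers are illustrative, and the 30 percent threshold reflects the reauthorization experience described in this report rather than a rule we are asserting here.

```python
def growth_pct(baseline: float, current: float) -> float:
    """Percent growth of a current estimate over its baseline."""
    return (current - baseline) / baseline * 100.0

# Index values consistent with the Glory history described above: 53 percent
# growth over the original baseline, then over 14 percent more against the
# revised baseline. Because both reported figures are "over" amounts, the
# compounded growth exceeded 75 percent of the original baseline.
original_baseline = 100.0                    # indexed, not actual dollars
revised_baseline = original_baseline * 1.53  # rebaselined after reauthorization
current_estimate = revised_baseline * 1.14   # further growth since rebaselining

print(f"Growth vs. revised baseline:  {growth_pct(revised_baseline, current_estimate):.0f}%")
print(f"Growth vs. original baseline: {growth_pct(original_baseline, current_estimate):.0f}%")

# Per the report, development cost growth in excess of 30 percent has led
# projects to seek reauthorization from Congress.
if growth_pct(original_baseline, current_estimate) > 30.0:
    print("Exceeds the 30 percent level associated with reauthorization.")
```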
Glory project officials also indicated that recent technical problems could cause additional cost growth. Similarly, the Mars Science Laboratory project is currently seeking reauthorization from Congress after experiencing development cost growth in excess of 30 percent.

All six factors we assessed can lead to project cost and schedule growth: technology maturity, design stability, contractor performance, development partner performance, funding issues, and launch manifest issues. These factors—characterized as project challenges—were evident in the projects that had reached the implementation phase of the project life cycle, but many of them began in the formulation phase. We did not specifically correlate individual project challenges with specific cost and/or schedule changes in each project. The degree to which each specific challenge contributed to cost and schedule growth varied across the projects in this review, and we did not assign any specific challenge as a primary factor for cost and/or schedule growth. Table 2 depicts the extent to which each of the six challenges occurred for each of the 19 projects we reviewed. Technology maturity was by far the most prevalent challenge, affecting 15 of the 19 projects. When combined with design instability—another metric related to technical difficulty—17 projects were affected. A discussion of each challenge follows.

Our past work on systems acquisition has shown that beginning an acquisition program before requirements and available resources are matched can result in a product that fails to perform as expected, costs more, or takes longer to develop. We have found that these problems are largely rooted in the failure to match the customer's needs with the developer's resources—technical knowledge, timing, and funding—when starting product development. In other words, commitments were made to deliver capability without knowing whether the technologies needed could really work as intended. Time and costs were consistently underestimated, and problems that surfaced early cascaded throughout development and magnified the risks facing the program. Our best practices work has shown that a technology readiness level (TRL) of 6—demonstrating a technology as a fully integrated prototype in a relevant environment—is the level of maturity needed to minimize risks for space systems entering product development. NASA's acquisition policy states that a TRL of 6 is desirable prior to integrating a new technology. Technology maturity is a fundamental element of a sound business case, and its absence is a marker for subsequent problems, especially in design. Similarly, our work has shown that the use of heritage technology—proven components that are being modified to meet new requirements—can also cause problems when the items are not sufficiently matured to meet form, fit, and function standards by the preliminary design review (PDR). NASA states in its Systems Engineering Handbook that particular attention must be given to heritage systems because they are often used in architectures and environments different from those in which they were designed to operate. Although NASA distinguishes critical technologies from heritage technologies, our best practices work has found critical technologies to be those that are required for the project to successfully meet customer requirements, regardless of whether or not they are based on existing or heritage technology.
Therefore, whether technologies are labeled as "critical" or "heritage," if they are important to the development of the spacecraft or instrument—enabling it to move forward in the development process—they should be matured by PDR. Of the 14 projects for which we received data and that had entered the implementation phase, four entered this phase without first maturing all their critical technologies, and 10 encountered challenges in integrating or modifying heritage technologies. Additionally, two projects in formulation—Ares I and Orion—also encountered challenges with critical or heritage technologies. These projects did not build in the necessary resources for technology modification. For instance, the recent cost and schedule growth in the Mars Science Laboratory (MSL) highlights the problems that can be realized when a project proceeds past the formulation phase with immature technologies. MSL reported that seven critical technologies were not mature at the time of its preliminary design review, and over a year later two of these technologies were still immature at the critical design review; however, the project moved forward into the implementation phase with established cost and schedule baselines, and the lack of technology maturity contributed to an unstable design. In part as a result of immature technologies and an unstable design, MSL delayed its launch date by 25 months, and development costs have grown by more than $660 million. In November 2008, the GRAIL project also moved beyond its PDR with an immature heritage technology—the reaction wheel assembly. This technology has been flown on other NASA missions, but the project team must modify it for GRAIL by integrating electronics into the assembly. NASA acknowledges in its Systems Engineering Handbook that modification of heritage systems is a frequently overlooked area in technology development and that there is a tendency on the part of project management to overestimate the maturity and applicability of heritage technology. NASA recognizes that as a result of not placing enough emphasis on the development of heritage technologies, key steps in the development process are not given appropriate attention, and critical aspects of systems engineering are overlooked.

Establishing a stable design at a project's critical design review (CDR) is also critical. The CDR provides assurance that the design is mature and will meet performance requirements. An unstable design can result in costly re-engineering and re-work efforts, design changes, and schedule slippage. Quantitative measures employed at CDR, such as the percentage of engineering drawings released, can provide evidence that the design is stable and "freeze" it to minimize changes in the future. Our work has shown that release of at least 90 percent of engineering drawings at the CDR provides evidence that the design is stable. Though NASA's acquisition policy does not specify how a project should achieve design stability by CDR, NASA's Systems Engineering Handbook adheres to this metric of 90 percent of drawings released by the CDR. Eight projects in our assessment had already held their CDR and were able to provide us with the number of engineering drawings completed and released. None of these eight projects met the 90 percent standard for design stability at CDR; however, NASA believes that some of these projects had stable designs and pointed to other activities that occurred prior to CDR as evidence.
Nevertheless, the percentage of engineering drawings released at CDR by these eight projects averaged less than 40 percent, and more than three-fourths of these projects had significant cost and/or schedule growth from their established baselines after their CDR, when their design was supposed to be stable. Although all the cost and schedule growth for these projects cannot be directly attributed to a lack of design stability, we believe that this was a contributing factor. Discussions with project officials showed the metric was used inconsistently to gauge design stability. For example, Goddard Space Flight Center requires greater than 80 percent of drawings released at CDR, yet we were told by several project officials that the rule of thumb for NASA projects is between 70 and 90 percent of drawings released at CDR. However, there was no consensus among the officials. For example, one project manager from Goddard Space Flight Center told us the project is planning to have 70 percent of the drawings released at CDR, while a project manager from the Jet Propulsion Laboratory said he prefers to have 85 to 90 percent of the drawings released at CDR and otherwise does not consider the project design to be complete. Goddard's Chief Engineer said that, as a member of a design review board, he will generally question projects that have less than 95 percent of engineering drawings released, especially if the project is using heritage technologies. Officials added that at CDR it is more important to have drawings completed that relate to critical technologies than those related to integration activities.

In addition to released drawings, NASA often relies on subject matter experts in the design review process and other methods to assure that a project has a stable design. Some projects indicated that completing engineering models, which are preproduction prototypes, and holding subsystem-level CDRs for instruments and components helped to assess design stability, at least in part. Officials for these projects indicated that the use of engineering models helps decrease the risk of flight unit development; projects that did not use engineering models indicated they might have caught problems earlier had they used them. For example, at CDR the Mars Science Laboratory's engineering models were incomplete and could have been a cause for concern. Mars Science Laboratory project officials were aware that avionics were an issue at CDR but were unaware of future problems with other project components, such as the actuators. Project officials told us that if the engineering models for all subsystems had been completed at CDR, many of the later problems would have been caught and mitigated earlier in the process, thereby avoiding schedule delays. However, these project officials added that engineering models are expensive to employ and not all projects have the available funding required to utilize them.

NASA relies heavily on the work of its contractors. Officials at five of the projects we reviewed indicated that the contractors for their projects had trouble moving their work forward after experiencing technical and design problems with hardware that disrupted development progress. Since about 85 percent of NASA's annual budget is spent by its contractors, the performance of these contractors is instrumental to the success of the projects. Shifts in the industrial base and a lack of expertise at the contractors affected performance.
For example, project officials for the SOFIA project reported that the contractor for the aircraft modification was bought and sold several times during the development process. Project officials further reported that the contractor had limited experience with this type of work and did not fully understand the statement of work. Consequently, the contractor had difficulty completing this work, which led to significant cost overruns. While project officials told us that issues with that contractor have since been resolved, this year another SOFIA contractor, responsible for developing hardware and software, has performed poorly, which officials attribute to a recent buyout of the company. In addition, agency officials said that NASA is a low priority for the contractor, and the project is finding it difficult to exert pressure to ensure better performance. Project officials told us that they currently have three people at the contractor's site as a permanent presence. They added that if the contract were to be cancelled due to poor performance, this work would be brought in-house and would result in a one-year delay. In addition, the Glory project has struggled for several years to develop a key instrument. The Glory project manager cited management inefficiencies with the instrument's contractor, including senior leadership changes, a loss of core competencies because of a plant closure, and a lack of proper decision authority. The contractor agreed that the plant closure and the need to re-staff were major project challenges.

Six projects in our review encountered challenges with their development partners. In these cases, the development partners could not meet their commitments to the project within the planned schedules. For example, NASA collaborated with the European Space Agency (ESA) on the Herschel space observatory. NASA delivered its two instruments to ESA in a timely manner, but ESA encountered difficulties developing its instruments, and the result was a 14-month delay in Herschel's schedule. Because of this delay, NASA incurred an estimated $39 million in cost growth because of the need to fund component developers for a longer period of time than originally planned. We found that of the projects that are currently in implementation and have experienced cost and/or schedule growth, those with international or domestic partners experienced more than one-and-a-half times as much schedule growth on average as those with no partner. Table 3 below shows the average schedule growth for projects with partners as compared to those without partners.

During the course of our review, we identified six projects in the implementation phase, as well as three projects still in formulation, that had experienced issues related to the projects' funding, such as agency-directed funding cuts early in the project life cycle and budgets that did not match the work expected to be accomplished. For example, NASA management cut $35 million from the Kepler project's fiscal year 2005 budget—a cut amounting to one-half of the project's budget for the year. Contractor officials told us that this forced the shutdown of significant work, interrupted the overall flow and scheduling for staff and production, and required a renegotiation of contracts. This funding instability, according to a NASA project official, contributed to an overall 20-month delay in the project's schedule and about $169 million in cost growth.
The funding instability for Kepler affected more than that one project. The WISE project had to extend its formulation phase because funding was unavailable at the time of the confirmation review in November 2005. According to NASA and contractor officials, the WISE project experienced funding cuts when NASA took money from that project to offset increased costs for the Kepler project. As a result of the extended formulation phase, the WISE project manager told us, development costs increased and the launch readiness date slipped 11 months. This is an example of how, when problems arise, one project can become the bill-payer for another, making it difficult to manage the portfolio and make investment decisions. We also identified several projects where, according to NASA officials, the projected budget was inadequate to perform work in certain fiscal years. For example, the Constellation program's poorly phased funding plan has diminished both the Ares I and Orion projects' ability to deal with technical challenges. NASA initiated the Constellation program relying on the accumulation of a large rolling budget reserve in fiscal years 2006 and 2007 to fund program activities in fiscal years 2008 through 2010. Thereafter, NASA anticipated that the retirement of the space shuttle program in 2010 would free funding for the Constellation program. The program's risk management system identified this strategy as high risk, warning that shortfalls could occur in fiscal years 2009 through 2012. According to the Constellation program manager, the program's current funding shortfalls have reduced the flexibility to resolve technical challenges. In addition, the James Webb Space Telescope project had to delay its scheduled launch date by one year in part because of poor phasing of the project's funding plan.

We identified four projects in our assessment that are experiencing launch delays or other launch manifest-related challenges. By their nature, launch delays can contribute significantly to cost and schedule growth, as months of delay can translate into millions of dollars in cost increases. For example, the Solar Dynamics Observatory (SDO) project missed its scheduled launch date in August 2008 because of test scheduling and spacecraft parts problems. This delay resulted in the SDO project moving to the end of the manifest for the Atlas V launch vehicles on the East coast, causing an 18-month launch delay and a $50 million cost increase. While the primary reason for the cost growth is that the SDO project could not meet its original schedule for launch, the project is incurring additional costs to maintain project staff longer than originally planned as they await their turn in the launch queue. According to SDO officials, this has also affected staffing at Goddard Space Flight Center, since these personnel were scheduled to move to other projects. Furthermore, launch delays of one project can potentially impact the launch manifest for other projects. The 25-month delay of the Mars Science Laboratory project has the potential to cause disruptions for other projects on the launch manifest in late 2011, including those outside of NASA, since planetary missions—those missions that must launch in a certain window because of planetary alignments—receive launch priority to take advantage of optimal launch windows. Some NASA projects are also experiencing launch manifest-related challenges.
For example, the Gravity Recovery and Interior Laboratory project is monitoring the availability of trained launch personnel as that mission is the last to launch on the Delta II vehicle. United Launch Alliance officials told us that they are taking active steps, such as cross-utilizing the Delta II personnel with other launch vehicles, to ensure that trained launch personnel are available for all the remaining Delta II launches. In addition, the recent failure of the Taurus XL launch vehicle during the launch of the Orbiting Carbon Observatory has the potential to delay the Glory mission if the Taurus XL is not cleared for use before Glory has corrected its technical problems. The 2-page assessments of the projects we reviewed provide a profile of each project and describe the challenges we identified. On the first page, the project profile presents a general description of the mission objectives for each of the projects; a picture of the spacecraft or aircraft; a schedule timeline identifying key dates for the project; a table identifying programmatic and launch information; a table showing the baseline year cost and schedule estimates and the most current available cost and schedule data; a table showing the challenges relevant to the project; and a project status narrative. On the second page of the assessment, we provide an analysis of the project challenges and the extent to which each project faces cost, schedule, or performance risk because of these challenges. In addition, NASA project offices were provided an opportunity to review drafts of the assessments prior to their inclusion in the final product, and the projects provided both technical corrections and more general comments. We integrated the technical corrections as appropriate and characterized the general comments below the detailed project discussion. See figure 2 below for an illustration of the layout of each two-page assessment. We provided a draft of this report to NASA for review and comment. In its written response, NASA agrees with our findings and states that it will strive to address the challenges that lead to cost and schedule growth in its projects. NASA agrees that GAO's cost and schedule growth figures reflect what the agency has experienced since the baselines were established in response to the 2005 statutory reporting requirements. Importantly, NASA has begun to provide more data regarding cost growth prior to these baselines, and we look forward to working with NASA to increase transparency into cost and schedule information of large-scale projects even further in the future. NASA noted that its projects are high-risk and one-of-a-kind development efforts that do not lend themselves to all the practices of a "business case" approach that we outlined since essential attributes of NASA's project development differ from those of a commercial or production industry. We agree; however, NASA could still benefit from a more disciplined approach to its acquisitions whereby decisions are based upon high levels of knowledge. Currently, inherent risks are being exacerbated by projects moving forward with immature technologies and unstable designs and by difficulties working with contractors and international partners, leading to cost and schedule increases that make it hard for the agency to manage its portfolio and make informed investment decisions. NASA's comments are reprinted in appendix I.
NASA also provided technical comments, which we addressed throughout the report as appropriate and where sufficient evidence was provided to support significant changes. We will send copies of the report to NASA's Administrator and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to report on the status and challenges faced by NASA systems with life-cycle costs of $250 million or more and to discuss broader trends faced by the agency in its management of system acquisitions. In conducting our work, we evaluated performance and identified challenges for each of 19 major projects. We summarized our assessments of each individual project in two components—a project profile and a detailed discussion of project challenges. We did not validate the data provided by the National Aeronautics and Space Administration (NASA). However, we took appropriate steps to address data reliability. Specifically, we confirmed the accuracy of NASA-generated data with multiple sources within NASA and, in some cases, with external sources. Additionally, we corroborated data provided to us with published documentation. We determined that the data provided by NASA project offices were sufficiently reliable for our engagement purposes. We developed a standardized data collection instrument (DCI) that was completed by each project office. Through the DCI, we gathered basic information about projects as well as current and projected development activities for those projects. The cost and schedule data estimates that NASA provided were the most recent updates as of October 2009; performance data that NASA provided were the most recent updates as of November 2009. At the time we collected the data, 4 of the 19 projects were in the formulation phase and 15 were in the implementation phase. NASA provided cost and schedule data for only 14 projects in implementation; despite being in the implementation phase, NASA did not provide cost or schedule data for the Magnetospheric Multiscale (MMS) project. To further understand performance issues, we talked with officials from most project offices and NASA's Office of Program Analysis and Evaluation (PA&E). The results collected from each project office, Mission Directorate, and PA&E were summarized in a 2-page report format providing a project overview; key cost, contract, and schedule data; and a discussion of the challenges associated with the deviation of relevant indicators from best practice standards. The aggregate measures and averages calculated were analyzed for meaningful relationships, e.g., the relationship between cost growth and schedule slippage and the knowledge maturity attained both at critical milestones and through the various stages of the project life cycle. We identified cost and/or schedule growth as significant where, in either case, a project's cost and/or its schedule exceeded the baselines that trigger reporting to the Congress.
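To make this test concrete, the sketch below shows one way to compute growth against a project's baselines and flag it as significant. It is a minimal illustration, not our actual analysis tooling, and the 15 percent and 6 month threshold values are parameters chosen for illustration; the real trigger is the statutory baseline reporting requirement described above.

```python
# Illustrative sketch: flag a project whose growth against its baseline would
# be treated as significant. Threshold values are assumptions for this
# example, not a statement of the statute's exact terms.

def cost_growth_pct(baseline_cost: float, current_cost: float) -> float:
    """Percent growth of the current development cost over the baseline."""
    return (current_cost - baseline_cost) / baseline_cost * 100.0

def schedule_slip_months(baseline_launch_month: int, current_launch_month: int) -> int:
    """Slip between baseline and current launch dates, as month counts."""
    return current_launch_month - baseline_launch_month

def is_significant(baseline_cost, current_cost,
                   baseline_launch_month, current_launch_month,
                   cost_threshold_pct=15.0, slip_threshold_months=6):
    """True if either cost or schedule growth exceeds its reporting threshold."""
    return (cost_growth_pct(baseline_cost, current_cost) > cost_threshold_pct
            or schedule_slip_months(baseline_launch_month,
                                    current_launch_month) > slip_threshold_months)

# A hypothetical project baselined at $300 million, now estimated at
# $360 million and slipping 8 months, is flagged (20 percent growth, 8-month slip).
print(is_significant(300.0, 360.0, 0, 8))  # True
```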
To supplement our analysis, we relied on GAO's work over the past years examining acquisition issues across multiple agencies. These reports cover such issues as contracting, program management, acquisition policy, and cost estimating. GAO also has an extensive body of work related to challenges NASA has faced with specific system acquisitions, financial management, and cost estimating. This work provided the context and basis for large parts of the general observations we made about the projects we reviewed. Additionally, the discussions with the individual NASA projects helped us identify further challenges faced by the projects. Together, the past work and additional discussions contributed to our development of a short list of challenges discussed for each project. The challenges we identified and discussed do not represent an exhaustive or exclusive list. They are subject to change and evolution as GAO continues this annual assessment in future years. Our work was performed primarily at NASA headquarters in Washington, D.C. In addition, we visited NASA's Marshall Space Flight Center in Huntsville, Alabama; Dryden Flight Research Center at Edwards Air Force Base in California; and Goddard Space Flight Center in Greenbelt, Maryland, to discuss individual projects. We also met with representatives from NASA's Jet Propulsion Laboratory in Pasadena, California, and a provider of NASA launch services, the United Launch Alliance. NASA provided specific cost and schedule estimates for only 14 of the 19 projects in our review. For one project, the Magnetospheric Multiscale project, NASA will not formally release its baseline cost and schedule estimates until the fiscal year 2011 budget submission to Congress, and late in our review process agency officials notified us that they would not provide project estimates to GAO until that time. For three of the projects that had not yet entered implementation, NASA provided internal preliminary estimated total (life-cycle) cost ranges and associated schedules, from key decision point B (KDP-B), solely for informational purposes. NASA formally establishes cost and schedule baselines, committing itself to cost and schedule targets for a project with a specific and aligned set of planned mission objectives, at key decision point C (KDP-C), which follows a non-advocate review (NAR) and preliminary design review (PDR). KDP-C reflects the life-cycle point where NASA approves a project to leave the formulation phase and enter into the implementation phase. NASA explained that preliminary estimates are generated for internal planning and fiscal year budgeting purposes at KDP-B, which occurs mid-stream in the formulation phase, and hence are not considered a formal commitment by the agency on cost and schedule for the mission deliverables. NASA officials contend that because of changes that occur to a project's scope and technologies between KDP-B and KDP-C, estimates of project cost and schedule can change significantly heading toward KDP-C. Finally, NASA did not provide data for the Global Precipitation Measurement mission because NASA officials said the mission did not have a requirement for a KDP-B review, as it was authorized to enter formulation before the requirements of NPR 7120.5D were in place. This section of the 2-page assessment outlines the essentials of the project, its cost and schedule performance, and its status.
Project essentials reflect pertinent information about each project, including, where applicable, the major contractors and partners involved in the project. These organizations have primary responsibility over a major segment of the project or, in some cases, the entire project. Project performance is depicted according to cost and schedule changes in the various stages of the project life cycle. To assess the cost and schedule changes of each project, we obtained data directly from NASA PA&E and from NASA's Integrated Budget and Performance documents. For systems in implementation, we compared the latest available information with baseline cost and schedule estimates set for each project in the fiscal year 2007, 2008, or 2010 budget request. All cost information is presented in nominal "then year" dollars for consistency with budget data. Baseline costs are adjusted to reflect the cost accounting structure in NASA's fiscal year 2009 budget estimates. For the fiscal year 2009 budget request, NASA changed its accounting practices from full-cost accounting to reporting only direct costs at the project level. The schedule assessment is based on acquisition cycle time, which is defined as the number of months between the project start, or formulation start, and the projected or actual launch date. Formulation start generally refers to the initiation of a project; NASA refers to project start as key decision point A, or the beginning of the formulation phase. The preliminary design review typically occurs during the end of the formulation phase, followed by a confirmation review, referred to as key decision point C, which allows the project to move into the implementation phase. The critical design review is held during the final design period of implementation and demonstrates that the maturity of the design is appropriate to support proceeding with full scale fabrication, assembly, integration, and test. Launch readiness is determined through a launch readiness review that verifies that the launch system and spacecraft/payloads are ready for launch. The implementation phase includes the operations of the mission and concludes with project disposal. We assessed the extent to which NASA projects exceeded their cost and schedule baselines. To do this, we compared the project baseline cost and schedule estimates with the current cost and schedule data reported by the project office in October 2009. To assess the project challenges for each project, we submitted a data collection instrument to each project office. We also held interviews with representatives from most of the projects to discuss the information on the data collection instrument. These discussions led to identification of further challenges faced by NASA projects. These challenges were largely apparent in the projects that had entered the implementation phase. We then reviewed pertinent project documentation, such as the project plan, schedule, risk assessments, and major project reviews. To assess technology maturity, we asked project officials to assess the technology readiness levels (TRL) of each of the project's critical technologies at various stages of project development. Originally developed by NASA, TRLs are measured on a scale of one to nine, beginning with paper studies of a technology's feasibility and culminating with a technology fully integrated into a completed product. (See appendix IV for the definitions of technology readiness levels.)
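A minimal sketch of how such a TRL screen can be applied at the preliminary design review, anticipating the criterion discussed below (a technology is treated as fully mature at TRL 6). The project's critical technologies and their TRL values here are hypothetical.

```python
# Sketch of the TRL maturity screen: a project is flagged with a technology
# maturity challenge if any critical technology is below TRL 6 at its
# preliminary design review. Technology names and TRLs are hypothetical.

FULLY_MATURE_TRL = 6

critical_tech_trl_at_pdr = {
    "detector array": 6,
    "cryocooler": 5,
    "deployable sunshield": 4,
}

immature = {name: trl for name, trl in critical_tech_trl_at_pdr.items()
            if trl < FULLY_MATURE_TRL}

if immature:
    print("Technology maturity challenge; immature at PDR:", immature)
else:
    print("All critical technologies fully mature at PDR.")
```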
In most cases, we did not validate the project offices' selection of critical technologies or the determination of the demonstrated level of maturity. However, we sought to clarify the technology readiness levels in those cases where the information provided raised concerns, such as where a critical technology was reported as immature late in the project development cycle. Additionally, we asked project officials to explain the environments in which technologies were tested. Our best practices work has shown that a technology readiness level of 6—demonstrating a technology as a fully integrated prototype in a relevant environment—is the level of maturity needed to minimize risks for space systems entering product development. In our assessment, the technologies that have reached technology readiness level 6 are referred to as fully mature because of the difficulty of achieving technology readiness level 7, which is demonstrating maturity in an operational environment—space. Projects with critical technologies that did not achieve maturity by the preliminary design review were assessed as having a technology maturity project challenge. We did not assess technology maturity for those projects that had not yet reached the preliminary design review at the time of this assessment. To assess the complexity of heritage technology, we asked project officials to assess the TRL of each of the project's heritage technologies at various stages of project development. We also interviewed project officials about the use of heritage technologies in their projects. We asked them what heritage technologies were being used, what effort was needed to modify the form, fit, and function of the technology for use in the new system, whether the project encountered any problems in modifying the technology, and whether the project considered the heritage technology a risk to the project. Heritage technologies were not considered critical technologies by several of the projects we reviewed. Based on our interviews, review of data from the data collection instruments, and previous GAO work on space systems, we determined whether complexity of heritage technology was a challenge for a particular project. To assess design stability, we asked project officials to provide the percentage of engineering drawings completed or projected for completion by the preliminary and critical design reviews and as of our current assessment. In most cases, we did not verify or validate the percentage of engineering drawings provided by the project office. However, we collected the project offices' rationale for cases where it appeared that only a small number of drawings were completed by the time of the design reviews or where the project office reported significant growth in the number of drawings released after CDR. In accordance with GAO's best practices, projects were assessed as having achieved design stability if they had released at least 90 percent of projected drawings by the critical design review. Projects that had not met this metric were determined to have a design stability project challenge. Though some projects used other methods to assess design stability, such as computer and engineering models and analyses, we did not analyze the use of these other methods and therefore could not assess the design stability of those projects. We could not assess design stability for those projects that had not yet reached the critical design review at the time of this assessment.
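The drawing-release metric lends itself to an equally small check. A sketch, assuming hypothetical drawing counts; the 90 percent threshold is the GAO best-practice criterion stated above.

```python
# Sketch of the design stability metric: a design is considered stable if at
# least 90 percent of projected engineering drawings were released by the
# critical design review (CDR). Counts below are hypothetical.

def design_stable(drawings_released_by_cdr: int,
                  total_projected_drawings: int,
                  threshold: float = 0.90) -> bool:
    """True if the released-drawing share meets the best-practice threshold."""
    return drawings_released_by_cdr / total_projected_drawings >= threshold

print(design_stable(920, 1000))  # True: 92 percent released by CDR
print(design_stable(700, 1000))  # False: flagged as a design stability challenge
```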
To assess whether projects encountered challenges with contractor performance, we interviewed project officials about their interaction and experience with contractors. We also relied on interviews we held in 2008 with contractor representatives from Orbital Sciences Corporation, Ball Aerospace and Technologies Corporation, and Raytheon Space Systems about their experiences contracting with NASA. We were informed about contractor performance problems pertaining to their workforce, the supplier base, and technical and corporate experience. We also discussed the use of contract fees with NASA and contractor representatives. We assessed a project as having this challenge if these contractor performance problems—as confirmed by NASA and, where possible, the project contractor—caused the project to experience a cost overrun, schedule delay, or decrease in mission capability. For projects that did not have a major contractor, we considered this challenge inapplicable to the project. To assess whether projects encountered challenges with development partner performance, we interviewed NASA project officials about their interaction with international or domestic partners during project development. Development partner performance was considered a challenge for the project if project officials indicated that domestic or foreign partners were experiencing problems with project development that impacted the cost, schedule, or performance of the project for NASA. These challenges were specific to the partner organization or caused by a contractor to that partner organization. For projects that did not have an international or domestic development partner, we considered this challenge not applicable to the project. To assess whether projects encountered challenges with funding, we interviewed officials from NASA's Program Analysis and Evaluation Division, NASA project officials, and project contractors about the stability of funding throughout the project life cycle. Funding stability was considered a challenge if officials indicated that project funding had been interrupted or delayed, resulting in an impact to the cost, schedule, or performance of the project, or if project officials indicated that the project budgets do not have sufficient funding in certain years based on the work expected to be accomplished. We corroborated the funding changes and reasons with budget documents when available. To assess whether projects encountered challenges with their launch manifests, we interviewed NASA Launch Services officials and officials from one of NASA's contracted providers for launch services about project launch scheduling, launch windows, and projects that missed their opportunities. Launch manifest was considered a challenge if, after establishing a firm launch date, a project had difficulty rescheduling its launch date because it was not ready, if the project could be affected by another project slipping its launch, or if there were launch vehicle fleet issues. Projects that have not yet entered into the implementation phase have not yet set a firm launch date and were therefore not assessed. In addition, NASA received an appropriation from the American Recovery and Reinvestment Act of 2009 (ARRA). NASA provided a record of projects involved in our review that received ARRA funds. The individual project offices were given an opportunity to comment on and provide technical clarifications to the 2-page assessments prior to their inclusion in the final product.
We conducted this performance audit from April 2009 to February 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. GAO has previously conducted work on NASA's acquisition policy for spaceflight systems, and in particular, on its alignment with a knowledge-based approach to system acquisitions. The figure below depicts this alignment. As the figure shows, NASA's policy defines a project life cycle in two phases—the formulation and implementation phases, which are further divided into incremental pieces: Phase A through Phase F. Project formulation consists of Phases A and B, during which time the projects develop and define the project requirements and cost/schedule basis and design for implementation, including an acquisition strategy. During the end of the formulation phase, leading up to the preliminary design review (PDR) and non-advocate review (NAR), the project team completes its preliminary design and technology development. NASA Interim Directive NM 7120-81 for NASA Procedural Requirements 7120.5D, NASA Space Flight Program and Project Management Requirements, specifies that the project complete development of mission-critical or enabling technology, as needed, with demonstrated evidence of required technology qualification (i.e., component and/or breadboard validation in the relevant environment) documented in a technology readiness assessment report. The project must also develop, document, and maintain a project management baseline that includes the integrated master schedule and baseline life-cycle cost estimate. Implementing these requirements brings the project closer to ensuring that resources and needs match, but it is not fully consistent with knowledge point 1 of the knowledge-based acquisition life cycle. Our best practices show that demonstrating technology maturity at this point in the system life cycle should include a system or subsystem model or prototype demonstration in a relevant environment, not only component validation. As written, NASA's policy does not require full technology maturity before a project enters the implementation phase. After project confirmation, the project begins implementation, consisting of phases C, D, E, and F. During phases C and D, the project performs final design and fabrication as well as testing of components and system assembly, integration, test, and launch. Phases E and F consist of operations and sustainment and project closeout. A second design review, the critical design review (CDR), is held during the implementation phase toward the end of phase C. The purpose of the CDR is to demonstrate that the maturity of the design is appropriate to support proceeding with full scale fabrication, assembly, integration, and test. Though this review is not a formal decision review, its requirements for a mature design and ability to meet mission performance requirements within the identified cost and schedule constraints are similar to the knowledge expected at knowledge point 2 of the knowledge-based acquisition life cycle. Furthermore, after CDR, the project must be approved at KDP D before continuing into the next phase.
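For reference, the life cycle just described can be laid out as simple data. This is an illustrative simplification of NPR 7120.5D as summarized above, not an authoritative rendering of the policy.

```python
# Simplified sketch of NASA's two-phase project life cycle, mapping each
# phase to the activities and reviews described in the text.

LIFE_CYCLE = {
    "Formulation": {
        "Phase A": ["concept and technology development"],
        # KDP-C (confirmation) approves the move into implementation
        "Phase B": ["preliminary design and technology completion", "PDR", "NAR"],
    },
    "Implementation": {
        "Phase C": ["final design and fabrication", "CDR", "KDP-D approval"],
        "Phase D": ["assembly, integration, test, and launch"],
        "Phase E": ["operations and sustainment"],
        "Phase F": ["project closeout"],
    },
}

for phase, subphases in LIFE_CYCLE.items():
    for sub, events in subphases.items():
        print(f"{phase} / {sub}: {', '.join(events)}")
```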
The NASA acquisition life cycle lacks a major decision review at knowledge point 3 to demonstrate that production processes are mature. According to NASA officials, the agency rarely enters a formal production phase due to the small quantities of space systems that they build.

The technology readiness levels referenced in this report are defined as follows:

TRL 1 (basic principles observed and reported). Hardware: None (paper studies and analysis).

TRL 2 (technology concept and/or application formulated). Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies. Hardware: None (paper studies and analysis).

TRL 3 (analytical and experimental proof of concept). Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative. Hardware: Analytical studies and demonstration of nonscale individual components (pieces of subsystem).

TRL 4 (component and/or breadboard validation in a laboratory environment). Basic technological components are integrated to establish that the pieces will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in a laboratory. Hardware: Low fidelity breadboard; integration of nonscale components to show the pieces will work together; not fully functional or form or fit, but representative of a technically feasible approach suitable for flight articles.

TRL 5 (component and/or breadboard validation in a relevant environment). Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components. Hardware: High fidelity breadboard; functionally equivalent but not necessarily form and/or fit (size, weight, materials, etc.); should be approaching appropriate scale; may include integration of several components with reasonably realistic support elements/subsystems to demonstrate functionality; lab demonstration of functionality but not form and fit; may include flight demonstration of a breadboard in a surrogate aircraft. Technology ready for detailed design studies.

TRL 6 (system/subsystem model or prototype demonstration in a relevant environment). A representative model or prototype system, which is well beyond the breadboard tested for TRL 5, is tested in a relevant environment. Represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in a simulated realistic environment. Hardware: Prototype; should be very close to form, fit, and function; probably includes the integration of many new components and realistic supporting elements/subsystems if needed to demonstrate full functionality of the subsystem; high-fidelity lab demonstration or limited/restricted flight demonstration for a relevant environment; integration of the technology is well defined.

TRL 7 (system prototype demonstration in an operational environment). Prototype near or at the planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in a realistic environment, such as in an aircraft, vehicle, or space. Examples include testing the prototype in a test bed aircraft. Hardware: Prototype; should be form, fit, and function integrated with other key supporting elements/subsystems to demonstrate full functionality of the subsystem; flight demonstration in a representative realistic environment such as a flying test bed or demonstrator aircraft; technology is well substantiated with test data.

TRL 8 (actual system completed and qualified through test and demonstration). Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.

TRL 9 (actual system proven through successful mission operations). Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. In almost all cases, this is the end of the last "bug fixing" aspects of true system development. Examples include using the system under operational mission conditions.

There are 7 NASA projects in our review, including all of those in formulation, that are receiving money from the American Recovery and Reinvestment Act (ARRA) of 2009. See table 4 below for the NASA projects in our review receiving this funding and the intended use of those funds. In addition to the contact named above, Jim Morrison, Assistant Director; Jessica M. Berkholtz; Greg Campbell; Richard A. Cederholm; Kristine R. Hassinger; Jeff R. Jensen; Kenneth E. Patton; Brian A. Tittle; and Letisha T. Watson made key contributions to this report.
The National Aeronautics and Space Administration (NASA) plans to invest billions in the coming years in science and exploration space flight initiatives. The scientific and technical complexities inherent in NASA's mission create great challenges in managing its projects and controlling costs. In the past, NASA has had difficulty meeting cost, schedule, and performance objectives for many of its projects. The need to effectively manage projects will gain even more importance as NASA seeks to manage its wide-ranging portfolio in an increasingly constrained fiscal environment. This report provides an independent assessment of selected NASA projects. In conducting this work, GAO compared projects against best practice criteria for system development, including attainment of knowledge on technologies and design. GAO also identified other programmatic challenges that were contributing factors in the cost and schedule growth of the projects reviewed. The projects assessed are considered major acquisitions by NASA, each with a life-cycle cost of over $250 million. No recommendations are provided in this report; however, GAO has reported extensively and made recommendations on NASA acquisition management in the past. GAO has designated NASA's acquisition management as a high risk area since 1990. GAO assessed 19 NASA projects with a combined life-cycle cost of more than $66 billion. Of those 19 projects, 4 are still in the formulation phase, where cost and schedule baselines have yet to be established, and 5 just entered the implementation phase in fiscal year 2009 and therefore do not have any cost and schedule growth. However, 9 of the 10 projects that have been in the implementation phase for several years experienced cost growth ranging from 8 to 68 percent, and launch delays of 8 to 33 months, in the past 3 years. These 10 projects had average development cost growth of almost $121.1 million (18.7 percent) and schedule growth of 15 months, and a total increase in development cost of over $1.2 billion, with over half of this total ($706.6 million) occurring in the last year. In some cases, cost growth was higher than is reported because it occurred before project baselines were established in response to the statutory requirement in 2005 for NASA to report cost and schedule baselines for projects in implementation with an estimated life-cycle cost of more than $250 million. Additionally, NASA was recently appropriated over $1 billion through the American Recovery and Reinvestment Act of 2009. Many of the projects GAO reviewed experienced challenges in developing new or retrofitting older technologies, stabilizing engineering designs, and managing the performance of their contractors and development partners, as well as funding and launch planning issues. Reducing the kinds of problems this assessment identifies in acquisition programs hinges on developing a sound business case for a project. Based, in part, on GAO's previous recommendations, NASA has acted to adopt practices that would ensure programs proceed based on a sound business case and has undertaken initiatives aimed at improving program management, cost estimating, and contractor oversight. Continued attention to these efforts and effective, disciplined implementation should help maximize NASA's acquisition investments.
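As a quick consistency check on these summary figures, treating the reported per-project average as exact, the average and the total line up, and the percentage implies an approximate average baseline; the baseline inference is an assumption about how the 18.7 percent figure was computed.

```python
# Consistency check of the 10-project summary figures reported above.
avg_growth_millions = 121.1   # average development cost growth per project
projects = 10

# Average times project count should roughly reproduce the reported total
# increase of "over $1.2 billion."
print(avg_growth_millions * projects)        # 1211.0, about $1.21 billion

# If 18.7 percent is read as growth over an average baseline (an assumption),
# that baseline is roughly $121.1M / 0.187 per project.
print(round(avg_growth_millions / 0.187))    # ~648 ($ millions)
```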
A global technological upheaval, fueled by rapid advances in information processing, storage, switching, and transmission technologies, is beginning to blur the lines between computing, telephony, television, and publishing. This convergence is creating a new breed of information service industry and permitting the development of the much discussed National Information Infrastructure (NII), commonly known as the information superhighway. The administration envisions the superhighway as a seamless web of communications networks, computers, databases, and consumer electronics—built, owned, and operated principally by the private sector—that will put vast amounts of information at users' fingertips. It believes that the superhighway, if freed from the constraints imposed by rigid regulatory regimes, will fundamentally change the way we work, learn, shop, communicate, entertain ourselves, and get health care and public services. Despite the dramatic advances in technology and the changes sweeping the communications industry, the superhighway's development is expected to be slow and arduous. As such, its development should not be viewed as a cliff that is suddenly confronted, but rather as an increasingly steep slope that society has been climbing since the early communications networks were established. A national and global information infrastructure, which will serve as the foundation for the superhighway, already exists. Telephones, televisions, radios, computers, and fax machines—interconnected through a complex web of fiber optics, wires, cables, satellites, and other communications technologies—are used every day to receive, store, process, display, and transmit data, text, voice, sound, and images in homes and businesses throughout the world. However, the information superhighway is expected to offer much more than separate telephone, data, or video services; it is expected to integrate these services into an advanced high-speed, interactive, broadband, digital communications system. Some of the advanced capabilities and services envisioned for the superhighway are beginning to be provided—albeit at a relatively high cost and at low transmission speeds—by the existing information infrastructure. For example, the Internet—a global metanetwork, or "network of networks," linking over 59,000 networks, 2.2 million computer systems, and over 15 million users in 92 countries—provides many of the services envisioned for the information superhighway. Similarly, a growing number of on-line services, such as CompuServe, America Online, and Prodigy, provide their subscribers with a rich array of information services. Finally, hundreds of communities across America are served by electronic bulletin boards dispensing information to hundreds of thousands of users. The administration, believing that the technologies to create, manipulate, manage, and use information are of strategic importance to the United States, has formed a multiagency group—the Information Infrastructure Task Force (IITF)—to articulate a vision for the information superhighway and to guide its development. The task force, chaired by the Secretary of Commerce, is responsible for addressing a wide range of regulatory and technical issues related to the information superhighway and for the coordination of existing federal efforts in the communications area. The task force is examining, through its committees and working groups, a wide range of technical issues relevant to the development and growth of the information superhighway.
A more detailed description of the IITF structure and its activities is presented in appendix I. While industry is beginning to build the information superhighway, little is known about how the superhighway will be structured and what services it will provide. Nevertheless, a common vision of its capabilities is beginning to form among policymakers and public interest groups. First, there is an emerging agreement that the superhighway should be structured as a metanetwork that will seamlessly link thousands of broadband digital networks. Second, it should allow a two-way flow of information, with users being able to both receive and transmit large volumes of digital information. Third, it should be open, ensuring equal access for service and network providers. Finally, it should ensure the security and privacy of databases and users' communications, and provide a high degree of interoperability and reliability. Achieving the grand vision will depend largely on how successfully industry integrates advanced technologies and capabilities into the various layers of the information superhighway. To better understand the integration of advanced telecommunication technologies into the existing communication infrastructure, we developed a conceptual model of the information superhighway, as shown in figure 1.1. The model presents the following five critical layers—management, applications, information, networks, and transport—linked with pervasive security, interoperability, and reliability requirements: the transport layer consists of optical fibers, coaxial cable, copper wire, switches, routers, satellites, and transmitters; the networks layer consists of thousands of logical networks superimposed on the transport layer; the information layer includes databases and electronic libraries containing text, images, and video; the applications layer contains software and consumer electronics needed to access the superhighway's information and services; and the management layer consists of operations and administrative centers, emergency response teams, and security services. With a few exceptions, such as the recently proposed global satellite networks, most experts anticipate that the superhighway will be built on the foundation of the existing communications infrastructure. Over the years, this infrastructure has evolved into three separate, and frequently incompatible, communications networks. These are the wire-based voice and data telephone networks, the cable-based video networks, and the wireless voice, data, and video networks. The wire-based voice and data telephone networks are part of the global telephone network. The voice networks provide ubiquitous, highly interoperable, high-speed, and flexible telephone service to millions of users. The data networks provide high-speed digital data communications services. The cable-based video networks rely on various approaches to broadcast a one-way broadband video signal to individual subscribers. Finally, the wireless networks use a wide range of analog and digital radio technologies to deliver voice, data, and video services. The principal shortcoming of the existing communications infrastructure is its inability to provide integrated voice, data, and video services. Over the years, the voice and data networks have evolved separately, with voice networks relying on circuit switching and data networks largely on packet switching techniques.
Thus, a business user requiring voice, data, and videoconferencing services may have to use three separate networks—a voice network, a data network, and a videoconferencing network. The emergence of multimedia applications and the high bandwidth applications in health care, industry, education, and business are beginning to require a network infrastructure capable of supporting multiple types of information. The basic architecture of the three types of networks is shown in figure 1.2 (see appendix II for an overview of each of these networks). The communications industry is beginning to introduce several new and innovative technologies that could enable the superhighway’s developers to achieve the administration’s vision of the information superhighway. These technologies include narrowband Integrated Services Digital Network (ISDN), advanced signaling and intelligent networks, broadband ISDN (B-ISDN), personal communications networks, and broadband in the local loop. These technologies, described in more detail in appendix III, will help provide many of the advanced services and capabilities of the information superhighway. The development of the superhighway will also require the expenditure of tens of billions of dollars to build the local broadband “on-ramps” connecting residential, institutional, and business users with the evolving superhighway. Further, its users are expected to be offered viable services and information products beyond the much touted 500 channels of high-definition television. In light of the strategic importance of the information superhighway, we identified the socioeconomic, regulatory, and technical issues and challenges associated with the development of the information superhighway. Our previous report addressed all three areas. Our objective in this report is to address in more detail the key technical issues: security and privacy, interoperability, and network reliability. To accomplish our objective, we surveyed an extensive body of technical literature and industry journals, searched and reviewed related documents from Internet networks, and reviewed postings to various Internet news groups with interest in telecommunications and information security issues. To obtain the views of federal officials on the technical challenges related to the development of the information superhighway, we met with representatives from the Federal Communications Commission (FCC), the National Telecommunications and Information Administration, the Information Infrastructure Task Force (IITF), the National Institute of Standards and Technology (NIST), the National Science Foundation, the Department of Defense (DOD), the Advanced Research Projects Agency (ARPA), and the National Security Agency (NSA). We also met with representatives of the telephone, cable, and communication industry to obtain their views on technical issues related to the superhighway. We conducted our work in Washington, D.C., and vicinity between September 1993 and October 1994, in accordance with generally accepted government auditing standards. In addition, we discussed the contents of this report with representatives of the National Telecommunications and Information Administration, IITF, FCC, NIST, DOD, ARPA, and NSA, and have incorporated their comments where appropriate. 
Much of the information that will be on the superhighway, including health care records, business documents, engineering drawings, purchase orders, or credit card transactions, will be proprietary or privacy sensitive and must be protected. As it evolves, the superhighway will become an increasingly tempting target for intruders with the technical expertise and resources to cause great harm, including insiders, hackers, foreign governments conducting political and military intelligence operations, domestic and foreign enterprises engaged in industrial espionage, and terrorist groups seeking to disrupt our society or cripple our economy. Unauthorized disclosure, theft, modification, or malicious destruction of such information could bankrupt a business, interrupt vital public service, or destroy lives. Information security plays a key role in protecting computer systems, networks, and information—including voice, fax, and data communications—from harm, disclosure, or loss. Privacy depends heavily on security. In essence, there is little or no privacy protection afforded by poorly secured information systems and networks. While privacy-enhancing legislation, regulations, and management practices play an important role in reducing the threat to individual privacy, it is security technology that will provide many of the safeguards. Significant effort will be needed to define, develop, test, and implement measures to overcome the security challenge posed by the increasing complexity, interconnectivity, and the sheer size of the evolving superhighway. These measures include identifying the superhighway's security and privacy requirements and developing tools and techniques to satisfy the requirements. The federal government, because of its extensive experience and expertise in developing secure networks, is addressing selected aspects of security and privacy. However, critics of federal involvement argue that the current federal strategy represents a danger to civil liberties and that individuals should be free to choose the technical means for achieving information security. As a result, the challenge will be establishing a reasonable level of consensus among the major players—the government, the computer and communications industry, the business community, and civil liberty groups—on how to ensure information security and privacy on the information superhighway. The vulnerability of interconnected computer systems is periodically highlighted by attacks on the thousands of computer systems connected to the Internet. These attacks provide an important lesson. The Internet—the world's largest network of networks—has many of the same attributes that will eventually be found in the information superhighway. The information superhighway may not only share similar vulnerabilities, but it may face similar, albeit greatly magnified, threats. Two major security incidents affecting the Internet illustrate the risk to the evolving information infrastructure. On November 8, 1988, thousands of computers connected to the Internet were attacked by a worm. While the worm did not damage or compromise data, it did deny service to thousands of users working at the nation's major research centers. We found that a number of vulnerabilities facilitated this attack, including the lack of a central focal point to address Internet-wide security problems; security weaknesses at host computer sites; and problems in developing, distributing, and installing software patches to operating system software.
In response to this incident, the Advanced Research Projects Agency established a Computer Emergency Response Team to assist the Internet community in responding to attacks. Several federal agencies and private-sector organizations also established additional computer emergency response teams coordinated by NIST. Five years later, in January 1994, intruders again exploited similar weaknesses. This time, the attack was more serious. The intruders gained access to a number of hosts (computer systems) linked to the Internet. The intruders then installed software that captured user names, passwords, and hosts' addresses for Internet traffic terminating at, or passing through, the attacked sites. In addition, they installed two Trojan horse programs, one program to provide back-door access for the intruders to retrieve the captured passwords, and a second program to disguise the network monitoring process. With this information, the intruders could access 100,000 Internet accounts. The Department of Defense reported that the attacks compromised a major portion of the international commercial networks as well as major portions of the unclassified Defense information infrastructure. Defense functions affected by the attacks included ballistic weapons research, ocean surveillance, and the military health care systems. Reducing the frequency and damage of attacks against the national networks will require a significant effort to provide the tools and resources necessary for the development and deployment of infrastructure-wide security services. These services include identification and authentication—the ability to verify a user's identity and a message's authenticity; access control and authorization—the protection of information from unauthorized access or use; confidentiality—the protection of information from unauthorized disclosure; integrity—the protection of information from unauthorized modification or accidental loss; nonrepudiation—the ability to prevent senders from denying they have sent messages and receivers from denying they have received messages; and availability—the ability to prevent denial of service, that is, to ensure that service to authorized users is not disrupted. Cryptography will play a key role in the development of five of the six security services for the information superhighway. It helps, through password encryption, to improve identification and access control; it protects confidentiality and data integrity by encrypting the data; and finally, it improves, through encrypted electronic signature and related means, nonrepudiation services. Two basic types of cryptographic systems exist: secret key systems (also called symmetric systems) and public key systems (also called asymmetric systems). In secret key cryptography, two or more parties use the same key to encrypt and decrypt data. As the name implies, secret key cryptography relies on keeping the key secret. If this key is compromised, the security offered by cryptography is eliminated. The best known secret key algorithm is the Data Encryption Standard. It is currently the most widely accepted, publicly available symmetric cryptographic algorithm. Secret key systems also require that a secure communications channel be established for the delivery of the secret key from the sender to receiver. Such a secure, nonelectronic communications channel for the distribution of secret keys is costly to establish and maintain.
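The shared-key property, and the key distribution burden it creates, is easy to see in code. Below is a minimal sketch using the third-party Python cryptography package; its AES-based Fernet recipe stands in for the Data Encryption Standard, which is long obsolete, so the algorithm choice here is an assumption for illustration only.

```python
# Symmetric (secret key) encryption: one shared key both encrypts and
# decrypts, so the same key must somehow be delivered to the receiver.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared secret; in a secret key
                                   # system this must travel to the receiver
                                   # over a separate secure channel
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"proprietary business document")

# The receiver needs an identical copy of the key to recover the plaintext.
plaintext = Fernet(key).decrypt(ciphertext)
assert plaintext == b"proprietary business document"
```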
Unlike secret key cryptography, which employs a single key shared by two or more parties, public key cryptography uses a pair of matched keys for each party. One of these keys is public and the other private. The public key is made known to other parties—mainly through electronic directories—while the private key must be kept confidential. Thus, under the public key system, there is no need to establish a secure channel to distribute keys. The sender encrypts the message with the recipient's freely disclosed, unique public key. The recipient, in turn, uses her unique private key to decrypt the message. Public key cryptography also enables the user to produce an electronic signature. The user encrypts the signature using the private key, which, when decrypted with the public key, provides verification that the message originated from that user. The best known public key algorithm is the Rivest-Shamir-Adleman algorithm. The Pretty Good Privacy software, which implements the Rivest-Shamir-Adleman algorithm, is probably one of the best known public key cryptographic systems. Figure 2.1 highlights the principal features of the secret and public key cryptographic systems. A host of related security technologies, including computer memory cards, will also play an important role in securing the information superhighway. Computer memory technology uses a credit-card-size electronic module to store digital information that can be recognized by a network or a host system. Figure 2.2 shows a computer memory card—the Tessera Crypto Card—developed by the National Security Agency. The Tessera Crypto Card is a small, portable cryptographic module that provides high-speed authentication and encryption services. Federal involvement in communication security is fueling a debate over the federal role in regulating the development and use of encryption and communications technologies. Critics of federal involvement, such as the Electronic Frontier Foundation—a public interest organization focused on protecting civil liberties in digital environments—believe that government control of encryption technologies and their implementation represents a danger to civil liberties, and that individuals should be free to choose the technical means for meeting their security requirements. Others, including NIST and Defense officials, maintain that the federal government's participation and guidance in securing the information superhighway may be needed for several reasons. First, the government is a major consumer of telecommunications services and has unique national security and law enforcement needs that must be addressed. Second, the government, and particularly the Department of Defense, has considerable experience in the areas of computer and communications security. Defense, the developer and operator of the world's largest secure communications network, could provide expertise needed to help develop the superhighway's security architecture. The need for such an architecture was underscored by a recent study which noted that it is "imperative to develop at the outset a security architecture that will lay the foundation for protections of privacy, security, and intellectual property rights—safeguards that cannot be supplied as effectively on an add-on basis." Since the invention of the telegraph and telephone, intelligence and law enforcement agencies have conducted legal intercepts of communications both here and abroad.
In general, these agencies used technically simple intercepts that targeted unprotected communications. However, the emergence of digital technologies and the increased availability of sophisticated encryption tools has dramatically eroded the government's electronic intelligence and analysis capabilities. The proliferation of digital communications is making wiretapping increasingly difficult, while robust encryption prevents third parties, including law enforcement and intelligence agencies, from deciphering and understanding intercepted messages. The administration, after coordination with the Congress, industry, and public advocacy groups, has developed a strategy designed to preserve the government's ability to conduct electronic surveillance, wiretapping, and analysis of voice and data communications between criminals, terrorists, drug dealers, and foreign agents. This strategy includes a major new federal cryptography initiative known as the Key Escrow Standard (popularly known as the "Clipper chip" program), the Communications Assistance for Law Enforcement Act requiring the information industry to provide "built-in" wiretapping support in its digital communications systems, and restrictions on the export of encryption technology. The Key Escrow initiative is a voluntary program to improve the security and privacy of telephone communications in the private sector while meeting the legitimate needs of law enforcement. In essence, the initiative is the government's attempt to preempt the threat posed by sophisticated encryption capabilities by offering the industry a relatively inexpensive, albeit government-controlled, hardware-based encryption system capable of providing secure voice, fax, and data services. To ensure that law enforcement agencies are able to understand Clipper-encrypted voice communications, the private encryption keys assigned to each individual Clipper chip are to be escrowed with the government. These keys will be made available to law enforcement agencies for court-ordered wiretaps. The Clipper chip, developed by NSA, is a microcircuit incorporating a classified encryption algorithm known as Skipjack. The chip and its close relative, the Capstone chip, contain a unique key, programmed by the escrow agents, that is used to encrypt and decrypt messages. This unique key is then split into two components and delivered to two federal agencies—or escrow agents—for safekeeping. When federal authorities encounter Clipper chip encrypted voice or Capstone chip encrypted data communications during the course of court-authorized wiretapping, they may obtain the unique key necessary for the decryption of the wiretapped communications from the escrow agents. Figure 2.3 shows a Capstone chip and three prototypes of a Clipper chip. In April 1993, the President directed the Attorney General to (1) request manufacturers of communications hardware that incorporates encryption to install the Clipper chip in their products, and (2) designate two government organizations as "key escrow" holders. The President also directed the Secretary of Commerce to initiate, through NIST, a process to develop federal key escrow encryption standards. Despite strong industry opposition, the administration reaffirmed its 1993 directive and instructed the Secretary of Commerce to approve the Clipper chip as a voluntary national standard for encrypted telephone communications. In February 1994, NIST formally approved the new standard.
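The escrow mechanism itself is described here only at a high level, but the core idea of splitting a key into two components, neither of which reveals the key on its own, can be illustrated with a generic exclusive-or split. This is a sketch of the concept, assuming a simple XOR scheme; the actual escrow programming used for Clipper chips differs in its details.

```python
# Generic two-way key splitting for escrow: a random pad and the key XORed
# with that pad are held by separate escrow agents. Either component alone
# is statistically random; XORing the two recovers the key.
import secrets

def split_key(key):
    share1 = secrets.token_bytes(len(key))               # random pad
    share2 = bytes(a ^ b for a, b in zip(key, share1))   # key XOR pad
    return share1, share2

def recombine(share1, share2):
    return bytes(a ^ b for a, b in zip(share1, share2))

unique_key = secrets.token_bytes(10)   # a chip's unique key (80 bits, the
                                       # Skipjack key length)
escrow_agent_1, escrow_agent_2 = split_key(unique_key)
assert recombine(escrow_agent_1, escrow_agent_2) == unique_key
```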
When NIST approved the standard, the Attorney General designated NIST and the Automated Systems Design Division of the Department of the Treasury as the key escrow agents. Critics of the Key Escrow initiative argue that NSA's refusal to declassify and publish the Skipjack encryption algorithm raises the possibility that the algorithm may have a built-in "trap door." Such a trap door would allow intelligence agencies to decrypt Clipper and Capstone encrypted communications at will, without obtaining the private keys from the escrow agents. The critics also note that since robust encryption technology is available both in the U.S. and abroad, there is no incentive for domestic and international industry or private citizens to adopt the Clipper/Capstone technology. The misgivings about the Key Escrow initiative were also shared by the Computer System Security and Privacy Advisory Board. In its June 4, 1993, resolution, the Board stated that the administration has not (1) provided a convincing statement of the problem that Clipper attempts to solve, (2) considered other escrow alternatives including the designation of a third, non-government escrow agent, and (3) fully examined the legal and economic implications of the Clipper chip initiative. The Board recommended that the Key Escrow encryption technology not be deployed beyond current implementations planned within the Executive Branch until the significant public policy and technical issues inherent with this encryption technique are fully understood. The Congress asked the National Research Council to conduct a comprehensive study of national cryptography policy and submit, within 2 years, a report to the Secretary of Defense. In December 1993, the Board endorsed the proposal, noting that the study should be conducted as quickly as possible. In July 1994, the administration reaffirmed its commitment to the Key Escrow scheme in general, and to the use of the Clipper chip for telephone communications in particular. It also offered a compromise on the development of the Capstone chip for computer and video networks. Specifically, the administration said that it understood the concerns that industry has regarding the Capstone chip and welcomed the opportunity to work with industry to design a more versatile, less expensive system. NIST and the information security industry have now initiated a joint effort to explore alternative approaches. Such alternative key escrow schemes would be implemented in software, firmware, or hardware, or a combination thereof; would not rely on a classified algorithm; would be voluntary; and would be exportable. To address concerns about the potential loss of wiretapping capability due to the rapid deployment of digital communications, in October 1994 the Congress enacted the Communications Assistance for Law Enforcement Act. The act requires common carriers to ensure that they possess sufficient capability and capacity to accommodate law enforcement's wiretapping needs. Specifically, the act requires that telecommunications carriers develop the capability to expeditiously isolate the content and call-identifying information of a targeted communication and enable the government to access targeted communication at a point away from the carrier's premises. The act requires the government to reimburse carriers for all reasonable costs associated with complying with the act's requirements.
Critics of the act—including the Electronic Frontier Foundation—argue that it further erodes communication privacy, and that the Federal Bureau of Investigation has not adequately documented its need for sophisticated digital wiretap capability. Many U.S. encryption technologies, whether developed commercially or by the government, are subject to export controls. The Departments of State and Commerce share responsibility for controlling the exports of these technologies. However, computer industry representatives view the encryption export controls as counterproductive and economically damaging. For example, the representatives noted that because robust, sophisticated encryption technologies, including technologies on the U.S. Munitions List, are widely available in foreign markets, the export controls are reducing their international sales. Our brief search of foreign Internet sites confirms industry’s assertion that sophisticated encryption software is widely available to foreign users. For example, we found that a number of European Internet sites are offering U.S.-made encryption software. In less than two hours, we identified several European sites offering the Pretty Good Privacy software, obtained it from an Internet site in Great Britain, installed the software on our computer, and encrypted a message (shown in figure 2.4). Interoperability—the ability of two or more components of a system or network to interact with each other in a meaningful way—is a key goal of the information superhighway. However, full interoperability among the thousands of networks, communications devices, and services that will make up the information superhighway will be difficult to achieve. To do so, governments, industry, and standards-setting organizations must agree on well-defined international standards for rapidly advancing communications technologies, while manufacturers and service providers need to offer products and services conforming to these standards. However, the telecommunications industry is already deploying, or plans to deploy, a host of technologies and services that are based on ill-defined, anticipatory, or competing standards. To address this dilemma, the federal and private sectors have initiated interoperability efforts, including the assessment of various “open network” architectures. Interoperability will define the information superhighway. Without interoperability, the information superhighway will be fragmented into thousands of poorly integrated communications networks providing a bewildering choice of incompatible services. While policymakers, public interest groups, and industry agree that interoperability is a key requirement, they also agree that it will be difficult to achieve among the thousands of communications networks, computers, databases, and consumer electronics devices that will make up the information superhighway. As discussed in chapter 1, the existing infrastructure suffers from significant interoperability problems. Because of competitive pressures, the desire to provide new capabilities, and a belief that the traditional standards-setting process is unable to keep up with the fast pace of technological change, industry is deploying, or is planning to deploy, a host of new technologies and services. However, many of these technologies and services are based on ill-defined, anticipatory, or competing standards, thereby further complicating efforts to achieve interoperability.
The effects of deploying new technology based on ill-defined standards are illustrated by the implementation of ISDN. ISDN is an end-to-end digital network evolving from the existing telephone network. It is viewed as the first step in the conversion to a fully digital network. However, the initial deployment of ISDN resulted in the proliferation of “island” ISDN services that could not interoperate because the ISDN standards provided only a broad outline and lacked enough detail to ensure that all implementations would be identical. For example, ISDN users in New York and New England are unable to communicate data with ISDN users in the mid-Atlantic states. To alleviate the ISDN interoperability problems, the industry announced a plan to establish a consistent interface that would provide interoperability among local telephone companies, long distance telephone companies, and equipment manufacturers. The deployment of Asynchronous Transfer Mode (ATM) services provides an example of a technology deployed on the basis of anticipatory standards. The broadband ISDN (B-ISDN) technology, which is expected to lay the foundation for the superhighway’s interactive, high-speed digital communications infrastructure, will rely on ATM/SONET optical fiber networks. However, critical ATM standards, including global routing and addressing, resource management, multicast, and network management, remain undefined. The industry is also developing products and services in the absence of less visible, but equally important, standards for data display and exchange, accounting and billing, network addressing and naming, and telephone number portability (see appendix IV). The introduction of competing technologies is highlighted by the deployment of digital cellular systems. Digital cellular systems are viewed as a key component of the evolving personal communications networks. While digital systems will offer dramatically better performance than their analog counterparts, their near-term value in serving as a key link in the emerging B-ISDN network is reduced by compatibility problems. There are three principal digital cellular standards—the U.S. standard, known as the North American Dual-Mode Cellular System; the European standard, known as the Global System for Mobile Communications; and the Japanese Digital Cellular standard. Although all three standards are based on the time division multiple access (TDMA) technique, they are not interoperable. While the key players—the federal government, the computer and communications industries, and various user groups—appear to agree on the need for a fully interoperable information superhighway, there is no agreement yet on how it should be achieved. The principal federal organizations focused on superhighway interoperability include NIST and the National Research Council’s Computer Science and Telecommunications Board. The overall coordination of federal interoperability efforts is being examined by IITF’s Technology Policy Working Group. The FCC, for its part, is working with industry to ensure the interoperability of selected technologies deployed in public networks. Industry has also established a consortium for the development and testing of superhighway applications. One promising approach to planning for interoperability is to develop a high-level architecture—or framework—of the superhighway. This approach was advocated by a recent National Research Council report that presented a vision of the superhighway based on an open data network concept.
Under this concept, the superhighway must be:
open to users: it does not force users into closed groups or deny access to any sector of society, but permits universal connectivity, as does the telephone system;
open to service providers: it provides an open and accessible environment for competing commercial or intellectual interests, including information providers;
open to network providers: it makes it possible for any network provider to meet the necessary requirements to attach and become a part of the aggregate of interconnected networks; and
open to change: it permits the introduction of new applications and services over time, as well as the introduction of new transmission, switching, and control technologies as they become available.
This concept, expressed as a high-level network architecture, could provide a set of specifications to guide the detailed design of the information superhighway. Without such a framework, the pieces of the emerging superhighway may not fit together. The IITF’s Technology Policy Working Group is planning to examine the open data network concept and its applicability to various industries, including cable television, broadcasting, communications, and computing. In an attempt to improve interoperability, the Network Operations Forum of the Alliance for Telecommunications Industry Solutions established the Internetwork Interoperability Test Plan Ad Hoc Committee. However, the committee’s effort was limited to solving problems with Signaling System 7 (SS7) systems. The requirements for intranetwork, product-to-product, and stand-alone equipment modeling and testing were considered to be outside of the committee’s charter. Other aspects of existing networks, such as interoperability testing requirements for newer technologies, were also not addressed. So far, the committee has developed scenarios designed to test the interoperability of SS7 systems. Ensuring the reliability of the information superhighway will be essential. The public and private sectors are increasingly dependent on the existing telecommunications networks, which will be the foundation of the information superhighway, to meet their business needs. Yet recent outages on these networks have raised concerns and caused economic losses. Moreover, new technologies and industry trends will likely increase network vulnerability, making reliability of the superhighway a key challenge. The government and industry have recently taken several steps to address reliability, including the formation of the Network Reliability Council and the Alliance for Telecommunications Industry Solutions. In providing critical commercial and personal services, the superhighway will require a highly reliable network. The nation is already dependent on the existing networks, which will provide the underpinning for the superhighway. For example, in addition to conventional telephone services, computers are networked together, facsimile machines provide almost instant access to images and documents, and teleconferencing and videoconferencing have emerged as substitutes for travel. The number of electronic transactions conducted over these networks is enormous. For example, the value of the telephone transactions that take place daily on Wall Street exceeds one trillion dollars. Similarly, the Federal Aviation Administration relies on the public network to transmit air traffic control information between individual airports.
Public telephone networks are also being increasingly relied upon for emergency services. For example, the telephone has replaced fire alarm boxes as the primary method for reporting fires. Emergency 911 service can be obtained from personal or public pay phones. Telephones are also used to report medical emergencies requiring emergency medical technicians, and burglaries and domestic problems requiring responses from the police. Enhanced 911 service, available in many locations, is even capable of automatically routing the emergency call to a public service answering point, the facility in charge of answering calls and dispatching appropriate services in the caller’s area. The system also searches phone company databases to determine and report the caller’s location and telephone number to the dispatcher. While the public and private sectors are becoming more dependent on networks, a growing number of major outages have raised concerns, triggered losses of service, potentially risked lives, and affected the economy. Several of these outages are highlighted below.
May 8, 1988: More than 500,000 business and residential customers lost telephone service due to a fire at the Hinsdale, Illinois, central office. During the following two weeks, approximately 3.5 million calls were disrupted. Hospitals with Centrex service in the affected area could not make calls from one floor to another. Twenty percent of the departing flights from O’Hare International Airport were canceled, and flights from other airports around the country had to be rescheduled. In a study of the Hinsdale outage, the University of Minnesota concluded that the cost of network failures to airlines could be between $2 and $3 million per hour and that investment bankers could lose up to $5 million per hour.
Jan. 4, 1991: Maintenance workers in a cable vault in New Jersey accidentally cut an optical fiber transmission line that provided service to lower Manhattan. Sixty percent of the calls into and out of the city were disrupted for eight hours. The New York Mercantile Exchange and the Commodity Exchange had to shut down operations. Voice and radar systems that are used to control air traffic from facilities in New York, Washington, and Boston were disabled for five hours.
Sept. 17, 1991: Through a power sharing arrangement with New York’s Consolidated Edison, AT&T agreed to use its own power when Consolidated Edison’s facilities were heavily loaded. On this particularly warm day in September, AT&T switched to its own power. Batteries designed to meet the initial instantaneous power demand performed as intended. However, alarms that were intended to inform technicians to start the facility’s diesel generator had been manually disabled. When the batteries discharged, all telephone transmission systems in the facility shut down, and voice and data communications controlled by the facility failed. Voice and data communications between the New York, Boston, and Washington Air Route Traffic Control Centers stopped. Three New York area airports closed for several hours. Flights destined for New York were either delayed or canceled. Air traffic at Boston was severely disrupted, and delays occurred nationwide. More than 1,174 flights were canceled or delayed, and approximately 85,000 passengers were affected. The day after the phone outage, flight schedules were still disrupted because aircraft were not at the right airports for the scheduled morning flights.
Sept. 10, 1993: A road crew boring holes for highway road signs in Ohio cut a high-capacity fiber-optic cable belonging to MCI. The cable, which carries most of the company’s east-to-west traffic, was repaired in about seven hours. However, millions of residential and business customers were unable to make coast-to-coast calls during that period.
March 15, 1994: During the early morning hours, a fire broke out in Pacific Bell’s Los Angeles central office, known as the Madison Complex. Before complete service was restored, almost 17 hours later, approximately 395,000 customers may have been affected and over 5 million calls were blocked.
Cable cuts, a source of major outages, occurred 160 times during the period between March 1, 1992, and February 4, 1993, with 93 (58 percent) of them caused by “dig-up” incidents, such as the one illustrated in figure 4.1. The average time needed to restore service after a cable cut was 5.2 hours, with a maximum of 21.4 hours. The average time required to repair a fiber cable cut was 14.2 hours, with a maximum of 97.5 hours. On February 13, 1992, the FCC instituted mandatory reporting requirements for outages that affect more than 30,000 customers for durations of 30 minutes or longer. As of June 1994, more than 314 outages had been reported. Calculating the cost of an outage is difficult because of the variety of users that could be affected. The deployment of advanced technologies, such as intelligent network architectures, common channel signaling, integrated services digital network, broadband transport facilities, customer control, and user programmability, is increasing network complexity and vulnerability. The new technologies, described in appendix III, are also allowing network designers to concentrate more traffic into larger and fewer switches, and to rely on fewer, higher capacity fiber optic cables to transmit hundreds of thousands of telephone calls. Failure of any of these high-capacity elements could be potentially devastating. As the information superhighway grows, the number of networks and service providers is also expected to grow. Telecommunications consumers will increasingly acquire services from combinations of suppliers’ products, service providers, and network providers. Increasing network complexity will make it more difficult to isolate and correct problems. In 1991, the FCC, concerned about the spate of telephone network outages that affected large numbers of subscribers on both the east and west coasts, established the Network Reliability Council. The council’s goal was to bring together leaders of the telecommunications industry, telecommunications experts from academia, and consumer organizations to explore and recommend measures that would enhance network reliability. Members include the executive officers of most of the major U.S. telephone companies, principal equipment suppliers, long-distance companies, consumer organizations, corporate and federal user representatives, and state regulatory agencies. The council established a steering committee and seven focus groups to deal with the key problem areas—signaling network systems, digital cross-connect systems, fiber cable cuts, fire prevention, enhanced 911 service, power systems, and switching systems (with a focus on software). The groups formulated recommendations for developing and implementing countermeasures to reduce the number of outages; monitoring the results; and modifying, as necessary, the countermeasures.
The commission is now reviewing these recommendations and considering regulations that would require the carriers and equipment suppliers to implement them. In 1994, the Network Reliability Council restructured and created four focus groups. The first group will concentrate on network reliability; the second will examine reliability issues arising from expanded interconnection of networks; the third will study network technology and examine reliability concerns related to providing telephone service through cable, satellites, and wireless systems; and the fourth group will study the reliability of critical services, including 911, Federal Aviation Administration, military, and other government services. The Alliance for Telecommunications Industry Solutions—a private sector organization—was formed to promote the timely establishment of telecommunications standards and operational guidelines. Its members include representatives of local exchange carriers, interexchange carriers, enhanced service providers, manufacturers, vendors, and end users who participate in a number of sponsored committees. The alliance also sponsors the Network Operations Forum, a group of telecommunications industry access providers and customers who meet periodically to identify national operations issues involving the installation, testing, and maintenance of access services. In July 1991, the alliance began focusing on the area of network reliability. One of the forum’s subcommittees has developed traffic management guidelines that provide network management personnel with alternatives when emergencies occur. The forum also maintains contact directories for use in emergency situations. While the information superhighway’s development is expected to be arduous, a grand vision of its capabilities is beginning to emerge among policymakers, industry leaders, and public interest groups. The superhighway is envisioned as a global metanetwork that will seamlessly and reliably link millions of users through broadband terrestrial and satellite digital networks; it is hoped that it will allow users to routinely receive and transmit large volumes of digital information and will ensure equal access for service and network providers. Achieving the grand vision will depend largely on how successfully industry and government meet the key technical challenges of security and privacy, interoperability, and reliability. The security and privacy of databases and users’ communications are critical issues. The superhighway will become an increasingly enticing target for intruders with the technical expertise and resources to cause damage. Given the complexity, size, and importance of the evolving superhighway, significant effort will be needed to define, develop, test, and implement security measures. Interoperability among the thousands of networks, communications devices, and services that will make up the superhighway is also essential, but will be difficult to achieve. The telecommunications industry is deploying, or plans to deploy, a host of technologies and services that are based on ill-defined, anticipatory, or competing standards. A coordinated approach will help reduce the risk of the superhighway being fragmented into thousands of poorly integrated networks providing a bewildering choice of incompatible services. Because the proposed superhighway is intended to provide critical commercial and personal services, its end-to-end reliability requirements will be very high.
The public and private sectors are already highly dependent on the existing telecommunications infrastructure and networks that will be the foundation of the superhighway. Outages on these networks have raised concerns about achieving reliability. Government and industry are beginning to recognize these challenges. The administration’s Information Infrastructure Task Force, working together with the private sector, has formed committees and working groups charged with addressing security and privacy, interoperability, and reliability issues. The challenge remains for the major public and private players to work together to resolve these issues. With effective cooperation, the promise of the information superhighway can be attained.
GAO reviewed the technical issues associated with protecting the information superhighway from unauthorized access. GAO found that: (1) the information superhighway poses technical challenges concerning the security, privacy, and reliability of personal and proprietary information; (2) a large proportion of the information that will traverse the superhighway will be sensitive and a tempting target for hackers, foreign governments conducting political and military intelligence operations, domestic and foreign enterprises engaged in industrial espionage, or terrorist groups seeking to disrupt society or the economy; (3) significant effort will be needed to define, develop, test, and implement measures to prevent unauthorized access to the superhighway; (4) although the federal government could play a leading role in ensuring the superhighway's security, critics argue that individuals should be free to choose the technical means for meeting their security requirements; (5) a major challenge facing the development of the information superhighway will be creating a consensus among the federal government, the computer and communications industry, the business community, and civil liberties groups on how to ensure information security and privacy; (6) the federal and private sectors have begun establishing uniform standards to ensure the superhighway's interoperability; and (7) questions remain about how to protect the superhighway from large network failures and encourage the telecommunications industry to develop a secure and reliable infrastructure.
In May 2006, the department announced its National Strategy to Reduce Congestion on America’s Transportation Network (the “congestion initiative”), a comprehensive national initiative to reduce congestion on the nation’s roads, rails, runways, and waterways. A major component of the congestion initiative is the UPA initiative, under which the department gives selected cities special consideration for funding from existing programs. To qualify for selection, a city had to develop and be ready to quickly implement a comprehensive, integrated, and innovative approach to reducing congestion through the use of the 4Ts (tolling, transit, telecommuting, and technology). On December 8, 2006, the department issued a Federal Register notice soliciting proposals from cities to enter into UPAs with the department. According to the notice, the department planned to fund the agreements through several existing grant programs and lending and credit support programs. This cross-cutting approach was designed to enable the department to fund the greatest improvements in mobility in a coordinated manner across its modal operating administrations. The notice further indicated that the department would support its urban partners with regulatory flexibilities and dedicated expertise and personnel. Applicants wishing to become urban partners had to submit their applications by April 30, 2007. The notice stated that the department would consider applications filed after this date to the extent practicable. In addition, applicants had to apply to any underlying grant program from which they sought funding. They could do so by submitting a single application that covered each of the grant programs as long as the application was responsive to the requirements of each program. To achieve this, the department published several Federal Register notices between December 2006 and March 2007, requesting that UPA applicants apply to the underlying programs. The Federal Register notice also set forth requirements for the UPA application. Under the UPA initiative, the department and the urban partner would agree to pursue the 4Ts to reduce traffic congestion. The department sought projects with congestion pricing to help shift some rush-hour traffic to off-peak times, coupled with new or expanded transit services. In this way, buses could move more freely through previously congested roadways and could provide more reliable service. To further reduce congestion, the urban partner could use cutting-edge technologies—such as providing travelers with real-time transportation information—to improve transportation performance and secure agreements from area employers to expand telecommuting programs. (See fig. 1.) Finally, the department’s solicitation stated that neither the procedures nor the criteria identified in the notice would be binding on the department. Aside from publishing the UPA initiative notice in the Federal Register, the department took a number of steps intended to (1) generate interest in the initiative, (2) encourage cities to develop fresh ideas, and (3) provide information to potential applicants. First, before publishing the December 2006 Federal Register notice, department officials met separately with officials from three urban areas—Seattle; northern Virginia; and Portland, Oregon—and presented information on congestion pricing and UPAs. Then, after publishing the notice, department officials met with the New York state legislature at the request of New York City; conducted national workshops in Atlanta, Georgia; Denver, Colorado; and Washington, D.C.
that were open to any interested UPA applicant; made presentations at transportation conferences; and held a Webinar for city officials who could not attend the national workshops. In the workshops and the Webinar, department officials discussed congestion pricing and supporting strategies, political and public outreach techniques that might be used to gain support for congestion-pricing initiatives, and potential funding opportunities under UPA. According to the department, it conducted a number of other outreach activities including (1) establishing a Web page with information for such applicants, (2) giving speeches on the UPA initiative, and (3) sending information on congestion pricing to e-mail listservs. In its Federal Register notice, the department stated that it reserved the right to solicit, and was actively soliciting, by means other than the notice certain cities the department had determined to be candidates for UPA award consideration. The department received 26 UPA applications and created a multistep review process to select PUPs (announced in June 2007) and then urban partners (announced in August 2007). First, a review team, composed of staff from several modal administrations, used several technical criteria—such as innovation, the comprehensiveness of the 4Ts, cost-effective use of federal dollars, and the feasibility and likelihood of implementation—to perform a technical review of the UPA applications and rank them for the department’s senior leaders. Second, the senior leaders reviewed the review team’s rankings in light of broader department goals and recommended nine PUPs to the Secretary, which, according to department officials, the Secretary approved. The PUPs then each presented their proposals, first to the Secretary and Deputy Secretary and afterward to officials from the modal administrations involved in the UPA initiative. Following these presentations, the department, in some cases, asked for additional detail or clarification from the applicant. Third, using this information, the review team created funding scenarios—for example, one scenario provided funding for New York City and four other urban partners, while another scenario did not provide funding for New York City but did for seven other urban partners. Finally, using UPA applications and funding scenarios, the Secretary selected five urban partners—Miami, Minneapolis, New York City, San Francisco, and Seattle. The Federal Register notice stated the department would select up to 10 urban partners. The urban partner designation did not include funds for the urban partners, but did give them special consideration in obtaining department resources such as funding from grant programs and administrative flexibilities, including streamlined environmental reviews. Between December 2006 and April 2007, the department issued several Federal Register notices describing the amount of funding available to urban partners. Initially, the Federal Register notice stated that up to $100 million, over 3 years, was available from 1 grant program, but this amount was expanded to about $1 billion from 13 grant programs when the President signed the 2007 Revised Continuing Appropriations Resolution. Unlike previous years, the department’s appropriation was not subject to congressional directives that funds be dedicated for particular transportation projects. As a result, urban partners applied for and were eventually awarded funds from 10 grant programs for 94 projects totaling $848.1 million. (See table 1.)
This report focuses on the 5 urban partners, 94 projects, and $848.1 million in awards announced in August 2007. As the department negotiated with the prospective urban partners, the numbers of urban partners, projects, and awards were reduced somewhat. The department awarded New York City $354.5 million (42 percent) of the $848.1 million. San Francisco was awarded $158.7 million (19 percent), Minneapolis $133.3 million (16 percent), Seattle $138.7 million (16 percent), and Miami $62.9 million (7 percent). In addition, although San Diego was not selected as an urban partner, the department awarded it $15 million under the Bus and Bus-Related Facilities Capital Investment Grants (Bus and Bus Facilities) program and $3 million under the Intelligent Transportation Systems-Operational Testing to Mitigate Congestion (ITS-OTMC) program for a project element of its UPA application. Departmental senior leaders and officials believed that San Diego’s project to demonstrate the safety and efficiency of cutting-edge separation and braking technologies on narrow lanes was meritorious. Each urban partner proposed to implement a tolling project with congestion pricing that would be supported by projects primarily from at least two of the other three Ts—transit and technology. (See table 2.) According to department officials, telecommuting projects received less emphasis from the urban partners and the department because cities generally could not influence many employers and because few applications included telecommuting proposals. Each UPA was subject to several terms and conditions. One significant condition was that no urban partner could expend federal funds until it had obtained the legal authority necessary to implement congestion pricing for the applicable highway or parking area. To date, three urban partners have the necessary authority for congestion pricing—Miami, Minneapolis, and San Francisco—and Seattle needs to obtain it by September 2009. However, New York City failed to meet its April 2008 deadline for obtaining congestion pricing authority from the state legislature. As a result, in accordance with the terms of the department’s UPA with New York City, the department canceled New York City’s agreement and awarded about $364 million to Chicago and Los Angeles through the department’s Congestion Reduction Demonstration Program. As its name indicates, the Congestion Reduction Demonstration Program, established in 2007, is a successor to the UPA initiative. This program’s goals and selection criteria are similar to the UPA initiative’s. For example, the Federal Register notice solicited applications that would support congestion-pricing, transit, and technology strategies to reduce congestion. Like the UPA initiative, the Congestion Reduction Demonstration Program allows the department to partner with applicants to support congestion reduction using the department’s discretionary funds. In addition, in July 2008, the Secretary announced a new administration plan to create a more sustainable way to pay for and build roads and transit systems. This plan includes a proposal for creating a Metropolitan Mobility Program. Among other things, this proposal envisions financial support for innovative approaches to reduce traffic congestion.
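As an arithmetic cross-check on the award shares reported above, a minimal sketch (award figures taken from this report; the code itself is illustrative only) recomputes each urban partner’s percentage of the $848.1 million total:

    # August 2007 UPA awards, in millions of dollars (figures from this report)
    awards = {
        "New York City": 354.5,
        "San Francisco": 158.7,
        "Seattle": 138.7,
        "Minneapolis": 133.3,
        "Miami": 62.9,
    }

    total = sum(awards.values())  # 848.1
    for city, amount in awards.items():
        print(f"{city}: ${amount:.1f} million ({amount / total:.0%})")

Rounded to whole percentages, the computed shares match those cited in the text (42, 19, 16, 16, and 7 percent).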
The department clearly communicated 10 of the 11 criteria it used to select urban partners, but it could have better communicated to applicants the relative weights it would assign to the selection criteria and the amount of funding available under the UPA initiative. We also found that the department provided additional attention to two applicants after they were selected as PUPs, but in the absence of government-wide guidance, it is unclear how to assess the appropriateness of this attention. According to grants policies and guidance, funding announcements that clearly state the criteria that will be used to evaluate applications promote competition and fairness in the selection of grantees. For example, an Office of Management and Budget policy directive states that an agency’s funding announcements must clearly describe all criteria and, if they vary in importance, the relative priority, or weights, assigned to the criteria. Similarly, the Federal Highway Administration’s (FHWA) assistance agreement procedures manual states that evaluation criteria must be prepared before a funding announcement is published and should be available to both applicants and reviewers. These procedures allow applicants to make informed decisions when preparing their applications and help ensure the selection process is as fair and equitable as possible. Like the Office of Management and Budget’s guidance, the FHWA manual identifies good grant-making practice, but it does not strictly apply to the UPA initiative, since four of the grant programs under the initiative were not FHWA programs. The department does not have agency-wide guidance that mirrors the Office of Management and Budget guidance. The UPA initiative was not funded like a traditional grant program, since there was no UPA-specific funding. Instead, the department gave urban partners special consideration when allocating funds from as many as 13 individual grant programs. However, we believe grant-making practices such as those prescribed in the Office of Management and Budget’s guidance are relevant to the initiative because they can increase the likelihood that the department will receive applications that best further agency goals. The department used 11 criteria to select urban partners. (See fig. 2.) We found that the department clearly communicated 10 of these to potential applicants in the December 2006 Federal Register notice. For example, this notice made it clear to applicants that the department wanted applicants to incorporate the 4Ts and explained how the different parts of the 4Ts strategy would interact to reduce congestion (synergy of 4Ts). This notice also stated the department sought proposals that would affect the most surface transportation travelers, include congestion pricing, be cost effective, and demonstrate innovative technology applications. The notice also made it clear the department sought proposals that were likely to be implemented, and requested that applicants submit information on political support for their proposal. The remaining criterion—political boldness—however, was not communicated at all to applicants. As a result, the applicants we spoke with had varying levels of awareness and understanding of this criterion. Senior department officials considered the political boldness of a city’s projects when selecting urban partners.
The department defined this criterion as the level of boldness of the congestion-pricing component relative to the level of political acceptance of congestion pricing in the applicant’s particular region. Senior department officials told us they viewed projects that were politically bold positively, and took into consideration that although a particular congestion-reduction strategy might not be new nationally, it could be politically bold in the applicant’s region. This criterion was not stated in the Federal Register, on the department’s Web site, or in presentations. The department subsequently told us that political boldness was the same as political and technical feasibility. However, in our view, proposals can be politically feasible without being bold, and the department was looking for bold proposals. Nine of the 14 applicants we spoke with said they were not aware of this selection criterion, while another 5 applicants told us they thought the department implied it was seeking politically bold proposals. The department’s March 2007 Federal Register notice indicated the criteria could change. It stated that neither the procedures nor the criteria set out in the notice would be binding on the department. Our search of Federal Register notices found no other instances of language of this sort. The department indicated it might have needed to change both the criteria and the procedures to gain more participation in the initiative. The department’s Federal Register notice announcing the selection criteria for the UPA initiative also fell short of the Office of Management and Budget’s policy guidance in that it did not communicate the relative weights of the selection criteria, as the guidance directs. Department officials and six of eight reviewers told us they viewed congestion pricing measures as the most important criterion in selecting urban partners. The department identified congestion pricing as the most important of the 4Ts in a document posted on its Web site 4 weeks before applications were due—and about 16 weeks after the original Federal Register notice appeared. Although some applicants we spoke with were aware of the department’s emphasis on congestion pricing in general, none knew the relative weights of the selection criteria. Nine of the 14 applicants we spoke with about the impact of the weighting information on their applications said that having weighting information would have changed their applications, while the remaining 5 said it would not. For example, an official from San Diego noted that knowing the weights of the selection criteria would have helped him to decide which projects to include and exclude, and would have resulted in a more focused application. The department’s incomplete communication to applicants of UPA selection criteria and weights may have had little, if any, effect on the final selection, since four of the five urban partners were rated high for technical merit and the fifth one, while rated lower, was seen as innovative because it would be the first in the country to convert high-occupancy vehicle (HOV) lanes to high-occupancy toll (HOT) lanes while simultaneously increasing HOV eligibility from two to three occupants. Grant announcements should fully describe the funding opportunity to give applicants a sense of the scope of the funding and to assist them in prioritizing and developing their proposed projects.
To this end, an Office of Management and Budget policy directive requires an agency to publish the full programmatic description of the funding opportunity, to communicate to applicants the areas in which funding may be provided, and to describe the agency’s funding priorities. FHWA’s procedures manual reflects the Office of Management and Budget’s directive to include a description of the funding opportunity in the grant announcement. Communicating the funding opportunity was important for the UPA initiative, since the funding motivated cities to undertake the comprehensive planning and serious consideration of congestion pricing that the department wanted reflected in their applications. As stated previously, an awardee’s selection as an urban partner meant the department would give the awardee priority consideration when allocating funding from as many as 13 individual grant programs. The original December 2006 UPA initiative announcement indicated that up to $100 million was available to urban partners through the ITS-OTMC program. (See fig. 3.) After more funds became available for the department’s discretionary use, the department decided sometime between February 15, 2007, and April 2007 to dedicate about $1 billion to the UPA initiative. As a result, the department solicited applications for the 13 discretionary programs through Federal Register notices over a period of 4 months (December 2006 through March 2007). Although funding for UPA was available under 13 grant programs, the department published the amount of funding available in the Federal Register for 5 of these programs and distributed funding information for 12 at conferences and through its Web site. Between February 15, 2007, and April 30, 2007 (the date when UPA applications were due), the department communicated the amount of funding available for UPAs to applicants in two ways: First, the department solicited applications for the 13 grant programs through Federal Register notices. (See table 3.) For 5 of these grant programs, the department published the amount of funding available in the notices. The amount available for UPAs through these 5 grant programs totaled $852 million and ultimately accounted for 77 percent of the funds awarded to urban partners. However, of this $852 million amount, $716 million (about 65 percent of the approximate $1.1 billion that was ultimately made available) was announced less than 6 weeks before the UPA application deadline. By the time this additional funding was announced, applicants may have already substantially completed their applications and, in some cases, obtained the approval of their stakeholders. According to a department official, it took the department several weeks to issue a Federal Register notice. For the remaining 8 grant programs, the Federal Register notices contained no information on the amount of funding available. A department official told us he does not know why these Federal Register notices did not include funding information. The department disagreed with our assessment that the applicants could have benefited if funding information had been communicated more than 6 weeks before applications were due. The department indicated that it typically provides 2 months for the submission of applications and cited several grant programs in which this was the case. For example, the department said 2 months was adequate time to file Bus and Bus Facilities program applications.
However, because of its complexity, the UPA application could be expected to take longer to complete than applications for more traditional grant programs, such as the Bus and Bus Facilities program. Second, the department developed a funding handout that listed the amounts available for UPAs through 12 of the 13 grant programs. These amounts totaled almost $1.1 billion. The department updated this document several times with new funding information and posted the document on the department’s Web site sometime between February 15, 2007, and April 2007. In addition, department officials handed out this document at conferences. Applicants’ understanding of the funding available under the UPA initiative varied, and several applicants told us that if they had had more complete funding information, they would have changed their applications. Of the 14 applicants we spoke with about funding, 6 told us they had the funding handout, which included information on the availability of funding by program (see table 3), while the other 8 said they did not have this handout when they were developing their applications. Of the applicants that did not have complete funding information, 6 told us they were aware of the total funding available but not the amounts available for each program, while 2 told us that they did not know the amount of funding available under the initiative when they were applying to the program. Of the 2 UPA applicants that said they did not know the amount of funding available to the initiative, 1 applicant said it thought the total amount of funding available was the $100 million initially identified under the ITS-OTMC program. Half of the 8 applicants we spoke with that did not have the funding handout told us if they had had better information on the funding available under the UPA initiative, they would have changed their applications and been able to scope their projects better. While communicating the complete amount would have been desirable as a means of eliciting applications that were optimally responsive to the initiative’s goal of congestion reduction, the $852 million amount, in our opinion, provided potential applicants with a rough understanding of the program’s size. After the department selected nine cities as PUPs, it used various methods to select and determine funding for the final 5 urban partners. The department invited officials from the nine PUP cities to Washington, D.C., to present their applications and asked them to provide additional information, when needed, about their congestion-reduction initiative. Department officials then explored ways to fund UPAs by identifying the PUPs’ core projects that were most in the spirit of the initiative. Next, department staff developed funding scenarios, including options with and without New York City, reflecting the department’s concerns that the New York state legislature might not pass the congestion-pricing legislation necessary to allow the New York City projects to move forward. At this time, department officials told us that, in some cases, they were also contacting PUPs to have them submit new grant applications because PUPs had submitted applications for more funding than was available under some grant programs and because PUPs could submit applications for the same uses under other, undersubscribed programs. However, in two cases, the department went beyond asking UPA applicants to submit applications for the same projects under other funding programs.
In these instances, the department contacted the UPA applicants to request substantive changes to the applications. Miami indicated in its application that it would run bus rapid transit, but did not provide details on the project or state its cost. Miami officials told us they did not intend to purchase buses or improve bus facilities through the UPA initiative. However, after Miami’s selection as a PUP, the department encouraged the city to apply for funding from the Bus and Bus Facilities program and suggested specific measures (such as bus branding, hybrid buses, bus facility improvements, and transit signal priority technology) for city officials to include in the proposal. As a result, Miami submitted a Bus and Bus Facilities application 4 weeks after the application deadline and was awarded $19.5 million in these funds. According to department officials, this was reasonable, since Miami’s original UPA application had contained bus elements, even though Miami had not originally requested funds to purchase buses or improve bus facilities. According to officials from the Minnesota Department of Transportation, after Minneapolis was selected as a PUP, the department asked it to include fewer projects in its UPA application to make the application more competitive for limited funding. According to Minneapolis officials, this allowed Minneapolis to better describe some projects and create more accurate cost estimates. The officials from the remaining seven PUPs told us the department did not contact them to suggest specific ways of strengthening their applications. Department officials told us that, after the nine PUPs presented their proposals, all of the PUPs’ congestion-reduction plans were meritorious and all nine PUPs were worthy of being designated as urban partners. Because time was of the essence in reaching a decision and announcing the urban partner designations, department officials told us that they provided special attention to the PUPs they felt were likely to be selected as urban partners—rather than spending time with PUPs that were unlikely to be selected—in order to craft the strongest congestion-reduction proposals possible. They said that working with only certain PUPs to develop stronger congestion-reduction efforts was appropriate because prior interactions with other PUPs indicated that those PUPs were less likely to make meaningful changes to their proposals. Finally, they said that if at any time negotiations failed with designated urban partners, the department would be able to offer urban partner designation to another PUP. There is little, if any, government-wide or Department of Transportation guidance that would shed light on whether and when it is appropriate to provide proactive assistance to grant applicants to help them create stronger applications. Thus, while the assistance could cause concerns, it is unclear how to assess the appropriateness of the department’s actions toward Miami and Minneapolis in a situation where the grant-making agency views applicants as such strong candidates that they are likely to be selected. The department had the legal authority to allocate appropriated funds to the UPA initiative as long as the funds were spent for the purposes authorized in the appropriations legislation, and the department complied with the restrictions and requirements of the underlying grant statutes. The department also had authority to consider congestion pricing as a priority factor in making grant selections.
Each of the nine grant statutes either explicitly permitted the consideration of congestion pricing or afforded the department discretion to consider congestion pricing because it is rationally related to statutory objectives. Because of an error in the department’s technical evaluation for the Ferry Boat program, the department’s initial documentation suggested that the department had improperly favored congestion pricing over statutory priorities. The Secretary did not rely on this documentation in awarding Ferry Boat grants, however, and the corrected information confirmed that the urban partners in fact met statutory priorities, and that the Secretary was within her discretion to apply congestion pricing as a discriminating factor. Finally, in one instance, involving the Transportation, Community, and System Preservation (TCSP) program, the department likely did not comply with all of the statutory requirements in evaluating the grant applications. Based on available information, it is not clear that this failure affected the ultimate grant award decisions. The statute required that “priority consideration” be given to applicants meeting five specified factors, and the department instead gave such consideration to applicants (including urban partners) that met just one such factor. Because “priority consideration” does not entitle an applicant to selection as a grantee, only to a bona fide and careful review, and because the department terminated its evaluation after confirming that applicants met only one factor, it is not possible to determine whether any applicant met all five factors and thus deserved the required bona fide “hard look.” Agencies generally have considerable discretion in choosing how to allocate lump-sum appropriations—appropriations that are available to cover a number of programs, projects, or items—to specific programs and activities. In the past, the department’s discretion had been circumscribed by congressional directives that earmarked most of its appropriations for particular projects, but this changed for fiscal year 2007. The department’s fiscal year 2007 appropriation, enacted in the 2007 continuing resolution, funded the department based on 2006 levels and authorities but removed the earmarks contained in the 2006 committee reports, stating that such earmarks shall have no legal effect. In addition, the continuing resolution did not include congressional directives that funds be dedicated or earmarked for particular projects. The department interpreted this language as permitting it to allocate its appropriations to various grant programs in order to fund its UPA initiative. Specifically, the department drew from three lump-sum appropriations for (1) payment of obligations incurred in carrying out the provisions of bus-related statutes, (2) federal-aid highways and highway safety construction programs, and (3) necessary expenses of the Research and Innovative Technology Administration. We concluded that the department’s appropriations were available to carry out the discretionary grant programs identified in each of these lump-sum appropriations. The department had broad discretion in choosing how to allocate funds among those programs.
For example, the “Federal-aid highways and highway safety construction programs” lump-sum appropriation was available to fund several grant programs, and absent any other statutory restriction, the department could choose how much, if any, of that appropriation to allocate to each of the grant programs. In carrying out each individual grant program, the department, of course, was required to comply with the restrictions and requirements of the underlying grant statutes and to award funding to grantees in accordance with the statutory provisions. In the discussion that follows, we address whether the department complied with these underlying provisions. As discussed earlier, in determining which cities would receive the urban partner designation, the department gave special consideration to those that had, or had committed to obtaining, authority to use congestion pricing. The department then awarded grant funds under 10 of its grant programs to the five urban partners. The impact of this sequential process was to give congestion pricing priority as a selection factor not only for the UPA initiative but for the individual grant programs as well. The specific terms of the authorizing statute for each grant determine whether the department has the authority to give priority to congestion pricing as a selection factor in making grant decisions. According to department officials, they considered congestion pricing as a priority or priority consideration selection factor for only 9 of the 10 programs (excluding the Alternative Analysis program). As a result, our analysis focused on the 9 remaining programs. We concluded that the authorizing statutes for the 9 grant programs used to fund urban partners either explicitly permit the consideration of congestion pricing or afford the department discretion to consider congestion pricing as a factor because it was rationally related to program objectives. (See table 4.) In particular, 2 of the 9 grant-authorizing statutes permit the Secretary to consider congestion pricing as a selection factor by express mention: the ITS-OTMC and Value Pricing Pilot programs. The remaining 7 grant-authorizing statutes (Bus and Bus Facilities; Ferry Boat Discretionary; Innovative Bridge; Interstate Maintenance Discretionary; New Fixed Guideway Facilities (specifically Very Small Starts); Public Lands Highway Discretionary; and TCSP programs) grant the department discretion to use congestion pricing as a priority consideration selection factor because tolling has a rational connection to statutory objectives, such as mobility and reduced congestion. Although we believe the department had authority to consider congestion pricing as a selection factor, with respect to the Ferry Boat program, the department’s initial technical evaluation documentation suggested (albeit incorrectly) that the department had exceeded its authority. This technical evaluation documentation made it appear that the department had overridden the statute by rejecting nonurban partners that lacked congestion pricing but met one or more statutory priorities in favor of urban partners that had congestion pricing but met no statutory priorities. In making the final award decision, however, the Secretary relied on correct information showing the urban partners in fact met statutory priorities. The Secretary therefore was within her discretion to apply congestion pricing as a selection factor.
Specifically: Using congestion pricing as a priority selection factor, the department awarded grants totaling $40.2 million under the Ferry Boat program to New York City, San Francisco, and Seattle as urban partners. Under this program, the department is authorized to award grants for the construction of ferry boats and ferry terminal facilities in accordance with statutory eligibility criteria. The Ferry Boat grant statute lists three “priority” selection factors that the department must apply. Priority is required for ferry systems that will (1) provide critical access to areas not well served by other modes of surface transportation, (2) carry the greatest number of passengers and vehicles, or (3) carry the greatest number of passengers in passenger-only service. Although the Ferry Boat statute does not explicitly identify congestion pricing as a priority selection factor, the department believes it had discretion to use congestion pricing to discriminate between grant applicants. The department makes a connection between congestion pricing and the second Ferry Boat statutory priority (carrying large numbers of passengers and vehicles), which it believes reflects congressional support for activities that increase mobility and reduce congestion. We agree that the department had discretion to use congestion pricing as a discriminating factor under the Ferry Boat statute, because there is a rational connection between congestion pricing, mobility, and congestion. The department could not, however, apply congestion pricing in a way that would fail to comply with the statutory priority factors—that is, it could not reject nonurban partners that met statutory priorities simply because they lacked congestion pricing in favor of urban partners that had congestion pricing but did not meet statutory priorities. The department’s technical evaluation documentation incorrectly suggested that such a situation occurred in the case of one grant, Seattle’s High-Speed, Ultra-Low Wake Passenger-Only Ferry project. Although the department’s technical review team evaluator appeared to have determined that this Seattle project failed to meet two statutory priority selection criteria—carrying large numbers of passengers and vehicles, and carrying a large number of passengers in passenger-only service—the department nevertheless awarded the project $2 million based on Seattle’s urban partner designation, passing over nonurban partners the technical review team had determined to have met one or more of the statutory priorities. The technical review team evaluator apparently reached this conclusion by incorrectly relying on the total number of passengers carried by individual projects, not the total number of passengers carried by the ferry system as a whole as required by statute. In fact, Seattle’s ferry system carries the greatest number of passengers of all ferry systems in the country and therefore was entitled to “priority.” In addition, in one instance—the TCSP program, under which the department awarded $50.4 million in grants to urban partners—we concluded that although the department had discretion to use congestion pricing as a discriminating priority factor, it likely did not apply the statutory “priority consideration” factors in the way the statute requires. However, “priority consideration” entitles an applicant only to a bona fide and careful review, not to guaranteed selection.
Furthermore, based on available information, it is not clear that the department’s incorrect evaluation approach affected the ultimate outcome of its selections. As a result, we are not recommending that the department re-evaluate the more than 500 grant applications it received for fiscal year 2007 for this program. Specifically: Using congestion pricing as a priority selection factor, the department awarded three grants totaling $50.4 million under the TCSP program to urban partner applicants—Minneapolis, San Francisco, and Seattle. The statute requires that the department give “priority consideration” only to applicants that meet five separate factors, none of which explicitly relates to congestion pricing and all five of which must be satisfied. The statute provides that priority consideration shall be given to applicants that (1) have instituted preservation or development plans and programs; (2) have instituted other policies to integrate TCSP practices; (3) have preservation or development policies that include a mechanism for reducing potential impacts of transportation activities on the environment; (4) demonstrate a commitment to public and private involvement, including the involvement of nontraditional partners in the project team; and (5) examine ways to encourage private-sector investments. The department believes that the term “priority consideration” does not require the department to award grants to applicants that meet the above criteria; instead, the department believes priority consideration entitles an applicant only to precedence in the order of review, that is, to deliberation or thought before competing alternatives are considered. We agree with the department that, unlike “priority,” “priority consideration” does not guarantee an applicant selection. However, the department’s reading, as we understand it, is too narrow. We believe “priority consideration” entitles an applicant to special attention and a careful and bona fide review, not just consideration earlier in the evaluation process. Nonetheless, the department ultimately had discretion to award grants to applicants that did not meet priority consideration criteria, based on other factors, such as congestion pricing, found to be rationally connected to statutory objectives. The remaining issue is whether, before the department applied congestion pricing as a selection factor, it followed the statute and gave any applicants qualifying for “priority consideration” the bona fide review Congress required. The answer is not clear. The statute lists five factors, with the last two joined by the conjunctive “and,” indicating all five factors must be met in order for an applicant to receive priority consideration. Department review team officials told us they rated grant applicants as meeting statutory priority consideration criteria so long as just one factor was met (essentially, the reviewers treated the “and” in the statute as an “or”). We disagree with the department’s interpretation and believe the “and” is used in its ordinary sense, requiring applicants to meet all five factors. Once the department found that an applicant met one factor, it terminated its evaluation. The effect of this error is unknown since, from the current record, it is not possible to determine whether any applicant met all five factors. Appendix V contains a complete analysis of these legal issues and our conclusions about the department’s compliance.
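The practical difference between the two readings turns on simple boolean logic. The following minimal Python sketch is illustrative only: the factor labels are paraphrased from the statute, and the applicant data are hypothetical, not drawn from the department’s records. It shows how an applicant meeting a single factor qualifies under the department’s one-factor (“or”) approach but not under the all-five (“and”) reading we believe the statute requires:

    # Illustrative sketch only; hypothetical data, not the department's review tool.
    TCSP_FACTORS = [
        "preservation or development plans and programs",
        "other policies to integrate TCSP practices",
        "mechanism for reducing environmental impacts",
        "commitment to public and private involvement",
        "ways to encourage private-sector investment",
    ]

    def qualifies_under_statute(application):
        """Our reading: the statutory 'and' requires all five factors."""
        return all(application.get(factor, False) for factor in TCSP_FACTORS)

    def qualifies_under_department_approach(application):
        """The department's approach: one factor sufficed ('and' read as 'or')."""
        return any(application.get(factor, False) for factor in TCSP_FACTORS)

    # A hypothetical applicant meeting only the first factor receives priority
    # consideration under the department's approach but not under the statute.
    applicant = {TCSP_FACTORS[0]: True}
    assert qualifies_under_department_approach(applicant)
    assert not qualifies_under_statute(applicant)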
The department is tracking urban partners’ progress in completing the conditions of their awards, such as obtaining their authority to use congestion pricing where needed. In addition, the department has contracted with Battelle Memorial Institute to evaluate the outcomes of UPA projects, such as the extent to which congestion is mitigated. While progress is being made in these two areas, we did not attempt to assess the overall reasonableness of these efforts because they are in early stages. The department’s UPA initiative management team monitors urban partners’ completion of award conditions by tracking each urban partner’s progress in implementing congestion-reduction projects. According to a department official, the UPA initiative management team meets weekly in an effort to obtain and track real-time data from urban partner sites and address issues as they occur. Typical meetings consist of reports from department officials on the status of each urban partner’s planning efforts, federal fund obligations, and environmental reviews. The UPA initiative management team identifies and monitors the award conditions of UPAs in the following ways:

Term sheet. Each urban partner entered into a term sheet, or memorandum of understanding, with the department that includes the urban partner’s award conditions. Each term sheet describes the congestion-reduction projects funded by the department, the amount of funding to be obligated, and the responsibilities of the parties. A department official told us that although not legally binding, the term sheets formalize both the department’s and the urban partners’ understanding of project requirements and deadlines, and provide the department with a mechanism to track urban partners’ progress in meeting the award conditions.

Funding agreements. Each urban partner has proposed to implement several projects to reduce congestion. These projects are tied to funding agreements that establish the amounts of funding to be provided by the department, the grant programs that serve as the sources of funding, and the conditions that must be met to ensure obligation of federal funds. The content of these funding agreements varies and is dependent on statutory and contractual requirements associated with each funding source.

Implementation matrix. The UPA initiative management team has created an implementation matrix spreadsheet to track and update progress in meeting requirements from the UPA term sheets and funding agreements. For example, each urban partner’s implementation matrix spreadsheet tracks the following conditions contained in urban partner funding agreements: (1) the completion of preconditions for obligating federal funds, (2) UPA project funding sources, and (3) the dates federal funds were (or were expected to be) obligated. The implementation matrix also tracks award conditions contained in each UPA term sheet, including initiative-driven conditions, such as legislative authority for congestion pricing, and project-related award conditions, such as environmental approval, planning, design, development, evaluation requirements, and completion dates.

Project management documentation. The UPA initiative management team requires that each urban partner adhere to project management processes and protocols.
The department has requested that each urban partner provide standard project management documentation that follows project management standards, including project management plans, project charters, baseline schedules and budgets, and progress reports. These items will be tracked by the UPA initiative management team. According to department officials, only Miami’s UPA projects have progressed far enough to require a significant amount of tracking, and Miami officials have begun to provide project management documentation. Seattle also has provided a draft project management plan in support of its UPA. However, in anticipation of the other urban partners entering the project implementation phase, the department is exploring the use of software applications that fulfill project management standards and can be used to track the urban partners’ adherence to project management documentation requirements. The department has already taken steps to ensure that urban partners complete their award conditions. For example, in April 2008, New York City was unable to obtain the legal authority to implement congestion pricing, which was a selection condition of its UPA. As a result, New York City lost its designation as an urban partner and the funding for its congestion-reduction projects. In addition, in May 2008, the department determined that the congestion-pricing project identified in San Francisco’s term sheet might not achieve the department’s congestion-reduction goals. As a result, the department decided not to release about $100 million of San Francisco’s UPA funding from several grant programs until the city adopted a congestion-pricing project that was acceptable to the department. (San Francisco did so in October 2008.) A department official has indicated that the UPA initiative management team will continue to monitor urban partners’ completion of award conditions throughout the implementation of UPA initiative congestion-reduction projects. We did not evaluate whether the implementation tracking was reasonable or whether the award conditions were fulfilled, because projects have not progressed far enough to make this determination. To oversee the development and implementation of the UPA evaluation, the department created an evaluation subteam within the UPA initiative management team. In April 2008, the department hired Battelle to evaluate three urban partners: Minneapolis, San Francisco, and Seattle. Battelle also was hired to provide technical assistance to New York City and Miami, which both agreed to contract for and fund their own evaluations. From these individual urban partnership evaluations, including Miami’s, Battelle will develop a national evaluation of the UPA initiative to generate conclusions about the effectiveness of various types of congestion-reduction strategies. The evaluation subteam manages Battelle’s development and implementation of the UPA evaluation process. For example, the UPA evaluation subteam will approve central parts of Battelle’s evaluation framework, such as the site test plans that will detail data collection and analysis activities for each urban partner site. In addition, a department official has told us that for each urban partner site, the evaluation subteam will coordinate with site officials and Battelle to ensure the evaluation effort receives adequate support and is appropriate for each site’s projects. The urban partnership evaluation will be completed in four phases.
For each phase, Battelle will produce a product that the UPA evaluation subteam must approve. (See table 5.) According to Battelle, phase one— the initial evaluation strategy formulation—is complete, and phase two is underway. The department identified four questions to be used in the urban partnership evaluation. (See table 6.) As part of phase one, Battelle then developed a number of evaluation analyses from these questions that it presented to the department in an initial strategy briefing. Battelle rated the evaluation potential of each urban partner using these analyses, based on the analyses’ applicability and feasibility. Battelle defined applicability as the likelihood that each site will be able to provide significant answers to the four evaluation questions and feasibility as the likelihood that Battelle will be able to measure the impact of project strategies to reduce congestion and determine that those strategies are the cause of any improvement found. Since Battelle will be relying on data collected by each urban partner site to perform its evaluation, the department is working with urban partners to ensure they will devote sufficient resources to data collection. Battelle has delivered a draft national evaluation framework as part of phase two of the evaluation process. The national evaluation framework will act as a guide for site-specific evaluations and defines the entire evaluation process. The department is reviewing the draft framework. The national evaluation framework will be followed by site-specific evaluation plans that provide a high-level view of data collection, analyses to be performed, roles and responsibilities of stakeholders, and schedules for urban partner sites. Minneapolis and Seattle are the first sites scheduled, and the remaining sites will follow. While Battelle is still working on finalizing future deliverables, phase three will include the collection of pre- and postdeployment data, and phase four will conclude the evaluation with Battelle’s report of findings. As of December 2008, the department had not decided whether to release the reports as they are completed or in a consolidated format at the end of the evaluation. Miami proposed to fund and perform its urban partnership evaluation. According to Florida Department of Transportation officials, Miami did this to make its UPA application more competitive and because at the time, Miami did not know that the department would provide funding for this activity. Miami’s UPA evaluation will also answer the four evaluation questions. In September 2008, Miami provided the department with a master transit evaluation matrix, which Miami officials have described as a crosswalk between the variables Miami will measure and the department’s evaluation questions. In addition, to date, Miami has hired a contractor to perform transit surveys and create lessons-learned reports for its transit projects under the UPA initiative. Miami will receive technical assistance from the department and from Battelle, and will work with the University of South Florida’s Center for Urban Transportation Research to complete its evaluation. Battelle and the department have noted that the urban partner evaluations will differ somewhat, since all urban partners have different congestion-reduction plans. We did not determine whether the evaluation methodologies proposed by Battelle or Miami were reasonable, because these methodologies have not been fully developed. 
We support performance-based integrated approaches—such as the one the department employed for the UPA initiative—because of the potential for a greater impact than can be achieved by operating the component programs in a stand-alone mode. Moreover, the initiative was a highly complex activity undertaken relatively quickly to take advantage of flexibilities allowed under the 2007 Revised Continuing Appropriations Resolution and to produce results in a relatively short period of time. With minor exceptions, the department did a good job of letting applicants know which criteria it would use in selecting urban partners and how much funding was available for the initiative. However, the department could have done a better job of letting applicants know which of the dozen selection criteria it considered most important so that applicants could tailor their applications accordingly. The department acted within its legal authority in funding individual grant programs to support the UPA initiative and in using congestion pricing as a priority or priority consideration selection factor in making award decisions under the individual grant statutes. In one instance—the TCSP program—the statute required that “priority consideration” be given to applicants meeting five specified factors, and the department instead gave such consideration to applicants (including urban partners) that met just one such factor. Because of the department’s approach, it is not possible to determine from the documentation we reviewed whether any of the applicants in fact qualified for priority consideration. However, because “priority consideration,” unlike “priority,” entitles an applicant only to a bona fide and careful review, not to guaranteed selection, and because the department ultimately had discretion to use congestion pricing as a discriminating priority factor, we are not recommending that the department re-evaluate the more than 500 grant applications it received for fiscal year 2007 for this program. Rather, in the future, the department should evaluate TCSP grant applications in accordance with the statute, by giving priority consideration only to applicants that meet all five factors. The department has promoted UPA goals and concepts in its proposed successors to the UPA initiative—the Congestion Reduction Demonstration and the Metropolitan Mobility programs. To the extent that the department moves forward to select communities to receive funds for these proposed initiatives and to allocate funds to them, it must draw on the lessons learned from the UPA initiative to ensure that the missteps identified in this report are not repeated. This is especially important when the department employs a relatively novel framework as an umbrella to integrate the underlying programs that may fund these initiatives. We are making two recommendations. First, to better ensure that potential applicants for future congestion relief initiatives are aware of the criteria for assessing the applications, we recommend that the Secretary of Transportation communicate all selection criteria—and the relative weight to be given to the criteria—to potential applicants. Second, for the Transportation, Community, and System Preservation program, we recommend that the Secretary direct the Administrator, FHWA, to give priority consideration only to applicants that meet all five statutory factors, as required by the grant statute. The department reviewed a draft of our report and generally agreed with most of its findings.
The department indicated that it was considering the recommendations; however, it noted that the recommendation concerning the Transportation, Community, and System Preservation program will require careful legal analysis by the agency. Overall, the department told us that it views performance-based initiatives, such as the UPA initiative, as critical tools for applying its limited discretionary funding to achieve the greatest possible congestion reduction. The department said that the UPA initiative also made use of other best-practice approaches. For example, the department incorporated an intermodal perspective for assessing program applicants based on established and publicized criteria. Intermodal teams assessed the merits and viability of proposals under the leadership of the Office of the Secretary to ensure that funding was awarded, in a manner consistent with statute and regulation, to those projects that offered the most significant congestion relief benefits. The department also emphasized that it used extensive outreach to potential participants, because the program’s dynamic environment made it particularly important to ensure clear, consistent, and effective communication. The department indicated that it made its expertise available to all potential applicants on an ongoing basis from the outset of the program. Finally, the department stated that the UPA initiative incorporates elements for assessing results, so that information can be obtained for consideration in future efforts of this type. Our draft report stated that the department appeared to give Minneapolis proactive assistance in crafting a stronger application before the department selected the city as a preliminary urban partner. We concluded that this action was inappropriate and tendered a draft recommendation on this issue. The department took exception to our discussion that it provided Minneapolis assistance at this point of the evaluation process. The department maintained that it did everything possible to ensure these interactions were consistent and fair to all applicants, and did not agree that its discussions with Minneapolis or any potential applicants were either unfair or inappropriate. As a result, we had additional discussions with Minneapolis officials and reviewed documentary evidence showing that the department provided assistance to Minneapolis after it was selected as a preliminary urban partner, as it had for Miami. Accordingly, we revised our draft report and removed the draft recommendation. The department offered several technical comments, which we have incorporated where appropriate. We are sending copies of this report to other congressional committees and subcommittees with responsibility for highway mobility issues; the Secretary of Transportation; the Administrator, Federal Highway Administration; the Administrator, Federal Transit Administration; the Administrator, Research and Innovative Technology Administration; and the Director, Office of Management and Budget. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact either Susan Sawtelle, Managing Associate General Counsel, at (202) 512-6417 or [email protected]; or me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix VI.
In May 2006, the Department of Transportation (the department) established Corridors of the Future (Corridors) as a demonstration program to accelerate the development of multistate transportation corridors to reduce congestion. In designating such corridors, the department followed criteria that it communicated to potential applicants. In addition, the department has established a framework to ensure that states will work together and that Corridor projects will come to fruition and produce transportation benefits. In September 2006, the department issued a Federal Register notice soliciting applications for Corridors, which is a major component of its congestion-reduction initiative. This program is to accelerate the development of multistate transportation corridors, for one or more transportation modes, in need of investment to reduce congestion. The department also encouraged participation by sending an e-mail to transportation groups and by citing the program’s benefits in speeches. According to department officials, another purpose of the demonstration program is to encourage states to work together, rather than acting separately, to reduce congestion along major transportation corridors. The department solicited applications for Corridors in two phases. For phase one, the department asked for proposals containing general information about the proposed corridor, such as its location, purpose, preliminary design features, and estimated capital costs. The department received 38 proposals and established a review team comprising representatives from the department’s surface transportation administrations, with expertise in finance, environment and planning, and infrastructure. In accordance with the review team’s recommendations, the department announced it had selected eight potential corridors to submit applications for phase two. The phase two applications were more detailed than the phase one proposals and supplied information on the corridor’s physical description, congestion-reduction goals, mobility improvements, economic benefits and support of commerce, value to users, innovations in project delivery and finance, environmental stewardship, finance plan, and proposed project timeline. Several state officials told us that the process of completing proposals or applications for Corridors fostered cooperation between states and began discussions about multistate efforts to reduce congestion that would not have happened otherwise. On September 14, 2007, relying on the recommendations of the review team, the department announced its selection of six corridors: Interstate 95, Interstate 15, Interstate 5, Interstate 10, Interstate 70, and Interstate 69. (See fig. 4.) Designation as a corridor did not include an award of funds, but it did give individual corridors priority access to department resources, such as funding from grant programs, and administrative flexibilities, such as environmental streamlining. Each corridor proposed a series of improvement projects that collectively totaled about $106 billion and individually ranged in cost from about $8.5 million to $63 billion. However, because of funding limitations, the department chose strategically where its funds would be best used and selected only one or two projects per corridor. For example, Interstate 10 proposed about $6.7 billion in high-priority projects in several areas: security, incident management, traveler information systems, traffic management, multiagency coordination, and capital projects.
Following Interstate 10’s designation as a corridor, the department provided $8.6 million in funding for two projects within this corridor. (See table 7.) In all, the department provided $66.2 million in funding from five grant programs for 10 projects in the six corridors. The department is developing agreements for how the states along each corridor will work together to develop their corridor. As of February 2009, the department had finalized agreements for three corridors. According to a department official, the department is working to finalize the remaining three agreements by the end of December 2009. Beyond the general information requested for phase one proposals, the September 2006 Federal Register notice provided nine criteria for reviewing phase two applications. Specifically, the proposals and applications were to include a description of the corridor, proposed strategies for reducing congestion, expected mobility improvements, expected economic benefits and support of commerce, estimated value to users of the corridor, innovative project delivery and financing features, evidence of exceptional environmental stewardship, a finance plan and opportunities for private-sector participation, and a proposed project timeline. In reviewing both the phase one proposals and phase two applications, the department applied the criteria stated in the Federal Register and, according to the department, gave equal weight to all the criteria. According to the Office of Management and Budget policy directive described earlier in this report, since the selection criteria for Corridors did not vary in importance, it was not necessary for the department to describe the weights of the criteria in its funding announcements. As discussed earlier in this report, grant announcements should provide a complete description of the funding opportunity to give applicants a sense of the scope of the funding and to assist them in prioritizing and developing proposals for projects. The September 2006 Federal Register notice did not state a specific amount of funding available to corridors through grant programs. However, the Federal Register notice did state that (1) if a corridor was selected for participation in the Corridors program, the department would work with the corridor to identify possible funding sources and (2) the department would select up to five corridors (although six were ultimately selected). The Federal Register notices soliciting Corridors proposals and applications were issued before February 15, 2007, when the President signed the department’s fiscal year 2007 appropriation without any congressional directives that funds be dedicated for particular projects. Therefore, department officials told us that, at the time Corridors proposals and applications were solicited, the department did not know to what extent funds would be available for allocation to Corridors projects. Department officials told us that in April 2007, the department sent out an e-mail to phase two applicants stating that $329 million, from eight Federal Highway Administration (FHWA) grant programs, was available to applicants that met the grant programs’ respective statutory criteria and emphasized the proposed projects’ highway safety and congestion-reduction benefits. However, it was not clear in this document what portion of the $329 million would be dedicated to Corridors. According to the department, the level of funding that would be allocated to Corridors was unknown at this time.
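As a rough illustration of what the equal-weight review described above implies, an application’s overall rating reduces to an unweighted average of its criterion scores, with no criterion counting more than any other. The sketch below is hypothetical; the criterion names and numeric scores are invented for illustration and do not reflect the department’s actual scoring instrument:

    # Hypothetical equal-weight review: the overall score is the unweighted
    # mean of the criterion scores, so no single criterion dominates.
    from statistics import mean

    criterion_scores = {
        "corridor description": 4,
        "congestion-reduction strategies": 3,
        "economic benefits and support of commerce": 5,
        # ...the remaining criteria would be scored the same way
    }

    overall = mean(criterion_scores.values())
    print(f"Overall score: {overall:.2f}")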
In July 2007, the department set aside $66.2 million for Corridor designees and funded 10 projects. This sum represented the amount remaining after funds were set aside for congressionally directed activities, urban partnership agreements, and other grant programs. Department officials told us that after identifying the six corridors, they called each corridor to solicit projects that could be funded. For 9 of the 10 funded Corridors projects, we found corresponding projects listed in the respective Corridors applications. However, the Corridors application for Interstate 10 did not include an Arizona project to widen a section of this highway. Officials from the Florida and Texas Departments of Transportation who served as the points of contact for the Interstate 10 Corridors project told us they did not receive telephone calls from the department soliciting projects for funding in the Interstate 10 corridor. Rather, these officials told us that Arizona received Corridors funding for a project that was not listed in the Interstate 10 Corridors application and therefore was not a priority for the corridor. According to the department, it was not necessary for a project to be listed in a Corridors application for the project to be funded. Instead, the department said, it gave priority consideration in its funding decisions to parties designated as corridors. In our opinion, this approach appears to act at cross-purposes with the department’s goal of encouraging multistate collaboration to address pressing congestion along corridors. The department is developing an agreement for each corridor on how the states will work together to plan, develop, finance, construct, and maintain the corridor. As of February 2009, the department had finalized agreements for three corridors. According to a department official, the department is working to finalize the remaining three agreements by the end of December 2009. Each agreement will establish the objectives for developing the respective corridor. The department is asking the signatory states to use the following objectives as the guiding principles for development:

Promote innovative national and regional approaches to congestion mitigation.

Address major transportation investment needs.

Illustrate the benefits of alternative financial models that involve private-sector capital.

Promote a more efficient environmental review and project development process.

Develop corridors that will increase freight system reliability and enhance the quality of life for citizens.

Demonstrate the viability of a transportation investment model based on sound economics and market principles.

Each agreement will require that the signatory states develop a multistate approach to developing and managing the corridor. The department is asking that the states execute a memorandum of understanding among themselves, with the department’s concurrence, that sets forth how the states will collaborate to support each other in corridor activities. To ensure that the signatory states are speaking with one voice, the department is asking each corridor to establish a committee that can represent the states and negotiate on behalf of the corridor with the department. Each agreement also will include specific requirements for developing and operating the corridor.
The department is asking the signatory states to develop a process under which each project will be subject, as applicable, to specific development goals to ensure coordinated planning, financing, construction, operation, maintenance, and performance of the corridor. The department is also encouraging the signatory states to cooperatively develop a method to select projects and establish a schedule for project delivery. The department would like the signatory states to create and maintain a schedule that will establish priorities for undertaking projects and obtaining funding from different sources. Lastly, each agreement will address the development of a performance plan for the corridor, including operations and management performance goals and expectations, and methods to measure travel time and reliability. Beginning 1 year after the effective date of the agreement and regularly thereafter, the department will ask the signatory states to report to the department on the corridor’s performance and progress. In February 2008, we reported that the department encourages and promotes the use of highway public-private partnerships through policy and practice, including the development of experimental programs that waive certain federal regulations and encourage private investment. The department believes that public-private partnerships have the potential to reduce highway congestion, among other things. Since our report, the department has taken additional steps to promote highway public-private partnerships through programs and practice. This appendix updates our prior report on the activities the department has used to promote public-private partnerships. We did not assess these new efforts. Since our February 2008 report, the department has extended credit and credit support under the Transportation Infrastructure Finance and Innovation Act program to two public-private partnerships. This act authorizes the department to provide secured (direct) loans, lines of credit, and loan guarantees to public and private sponsors of eligible surface transportation projects. For example, in December 2007, under the Transportation Infrastructure Finance and Innovation Act, the department allocated $589 million for Virginia’s Capital Beltway high-occupancy toll lanes project, which will use congestion pricing to ensure reliable traffic flow on one of the nation’s most congested highways. In addition, in March 2008, the department allocated $430 million for segments of Texas state highway 130, which will form part of a new 91-mile tollway intended to relieve congestion on Interstate 35. Both transactions involved a partnership between private borrowers and a state. In addition, as of December 2008, the department had allocated about $9.2 billion in private activity bonds to eight public-private partnerships. The Safe, Accountable, Flexible, Efficient Transportation Equity Act—A Legacy for Users (SAFETEA-LU) amended the Internal Revenue Code to add qualified highway or surface freight transfer facilities to the types of privately developed and operated projects for which tax-exempt facility bonds (a form of private activity bonds) may be issued. For example, the department allocated $980 million for private activity bonds to a group of private companies that are planning to build a tunnel connecting the Port of Miami on Dodge Island with Watson Island and Interstate 95 on the Florida mainland. 
However, according to a department official, not all of the $9.2 billion allocated in private activity bonds has been issued. In addition, according to this official, the department is currently reviewing applications for additional private activity bond allocations to other public-private partnerships. Finally, the department plans to study projects that use methods of procurement that integrate risk and streamline project development. SAFETEA-LU established the Public-Private Partnership Pilot Program, known as Penta-P, to evaluate the benefits of forming public-private partnerships for new construction projects. In 2007, the department executed agreements for three pilot projects: the first is a single contract for the construction of two light rail lines in Houston, Texas, that will ultimately serve the city’s two main airports; the second is a contract in Denver, Colorado, for two rail projects that will serve the Denver airport and northwest Denver; and the third is a contract in Oakland, California, for a transit system that will connect the Oakland International Airport with the San Francisco Bay Area Rapid Transit District’s Coliseum Station. According to a department official, construction on the projects has not begun. Denver is contemplating an innovative contract that increases risk sharing between the private partner and the local, state, and federal governments. In this agreement, the Denver Regional Transportation District will ask its private partner to assume a degree of risk by contributing equity capital to the project. This capital will be at risk until the project is operating and collecting revenue. In addition, the Metropolitan Transit Authority of Harris County, Texas, is contemplating an innovative contract under which a “facility provider” will share risk with the vehicle provider, construction firm, and operator. Also, the project’s development will be streamlined, since the private partner will coordinate all work with the contractor, vehicle provider, and operator. As these projects proceed, the department will study how public-private partnerships affect completion times, projections of project costs and benefits, and project performance. Lastly, the San Francisco Bay Area Rapid Transit District plans to use an innovative contract under which a consortium of private firms will assume the risk to design, build, operate, maintain, and finance the project. The department has promoted public-private partnerships in the following ways:

Developing publications. In July 2008, the department published a report that describes the use of public-private partnerships by transportation authorities and updates the department’s 2004 report to Congress on public-private partnerships.

Drafting model legislation for states to consider highway public-private partnerships within their jurisdiction. The model legislation addresses such subjects as bidding, agreement structures, reversions of the facilities to the state, remedies, bonds, federal funding, and property tax exemptions, among other things.

In July 2008, the department published a framework for overhauling the way transportation decisions and investments are made. Specifically, the framework recommends the use of public-private partnerships, expansion of Transportation Infrastructure Finance and Innovation Act program capacity, and removal of the $15 billion cap on private activity bonds administered by the department.
In addition, the department is currently developing guidance on the use of public-private partnerships by procurement agencies. This guidance will describe how federal, state, and local officials have structured public-private partnerships.

Maintaining a public-private partnership Internet Web site. This Web site serves as a clearinghouse for information to states and other transportation professionals about public-private partnerships, pertinent federal regulations, and financing options.

Making public presentations. Department officials have made public speeches and written at least one letter to a state in support of highway public-private partnerships. Officials of the department also have testified before Congress in support of highway public-private partnerships. Since February 2008, the department has conducted workshops on the structure and rationale for public-private partnerships. For example, in October 2008, the department gave a presentation at a transit conference on how public-private partnerships can be used to address funding shortages in highway infrastructure projects.

Making public-private partnerships a key component of congestion mitigation. Two major parts of the department’s May 2006 national strategy to reduce congestion are the UPA initiative and Corridors. In August 2007, the department awarded funds to five urban partners that would make congestion pricing a key component of congestion mitigation. Such a strategy could act to promote highway public-private partnerships, since tolls provide a long-term revenue stream, key to attracting investors. In September 2007, the department awarded funds to six interstate routes for use in developing multistate corridors to help reduce congestion. These six interstates were selected for their potential to use private resources to reduce traffic congestion within the corridors and across the country.

Encouraging public-private partnerships in its reform proposal to Congress. In July 2008, the Secretary announced the administration’s new plan to create a more sustainable way to pay for and build roads and transit systems. This plan includes a proposal for leveraging federal resources. Among other things, this proposal encourages states and metropolitan areas to explore innovative transportation financing mechanisms by expanding the use of public-private partnerships. For example, the administration’s plan proposes that all federal aid projects with a total cost of over $250 million would not receive federal assistance unless the project sponsor first compared the project’s lifecycle costs under the most cost-effective form of conventional public procurement with the project’s lifecycle costs if procured using a public-private partnership (assuming state law allows for public-private partnership procurement).

This appendix provides information on the extent to which previous recipients of grants provided through 11 programs received funds under the UPA initiative and Corridors. To develop this information, we compared the amounts of funds states were allocated during fiscal years 2004 through 2006 with the amounts awarded for the UPA initiative and Corridors in fiscal year 2007. Urban partners were awarded about 26 percent of the funding provided through the 11 grant programs in fiscal year 2007, while Corridors states were awarded 2 percent of the funding provided through those grant programs. (See fig. 5.)
For fiscal years 2004 through 2006, about $6.9 billion was allocated through the 11 grant programs to 50 states and the District of Columbia, in amounts ranging from about $319 million on average per year (California) to $2.4 million on average per year (Wyoming). The top 10 states, in descending order of grant size, were California, New York, Illinois, New Jersey, Washington, Pennsylvania, Colorado, Maryland, Arizona, and North Carolina. Three of the top 10 states had urban partners (California, New York, and Washington), and two of these states were involved in a Corridors project that received funding (California and Washington) in fiscal year 2007. In fiscal year 2007, about $2.8 billion was awarded through the 11 grant programs. Of this amount, about $715 million was awarded to urban partners and about $66 million was awarded to Corridors states through grants ranging from about $328 million (New York) to $1.1 million (Arkansas). Urban partners were awarded about 22 percent of the funding provided through the two largest grant programs, Bus and Bus-Related Facilities Capital Investment Grants (Bus and Bus Facilities) and New Starts/Small Starts, in fiscal year 2007. (See fig. 6.) For fiscal years 2004 through 2006, about $6.4 billion was allocated through Bus and Bus Facilities and New Starts/Small Starts to 50 states and the District of Columbia, in amounts ranging from about $307 million on average per year (California) to $900,000 on average per year (Wyoming). The top 10 states, in descending order of grant size, were California, New York, Illinois, New Jersey, Pennsylvania, Washington, Colorado, Maryland, Arizona, and North Carolina. Three of the top 10 states had cities designated as urban partners (California, New York, and Washington) and 3 of the top 10 states were part of a Corridors project that received funding (Arizona, California, and Washington) in fiscal year 2007. In fiscal year 2007, about $2.4 billion was awarded through the Bus and Bus Facilities and New Starts/Small Starts programs. Of this amount, $530 million was awarded to urban partners in grants ranging from about $326 million (New York) to $19.5 million (Florida). In fiscal year 2007, no funding was awarded for Corridors projects from Bus and Bus Facilities and New Starts/Small Starts. Urban partners were awarded about 54 percent of the funding provided through the remaining 9 grant programs in fiscal year 2007, while Corridors states were awarded about 21 percent of the funding provided through those programs. (See fig. 7.)
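The percentage shares cited in this appendix follow directly from the rounded dollar amounts reported above. A minimal sketch of the arithmetic, using only those rounded figures (so the results are approximate):

    # Back-of-the-envelope check of the fiscal year 2007 funding shares,
    # in millions of dollars, as reported in this appendix.
    total_fy2007 = 2800.0     # about $2.8 billion awarded through the 11 programs
    urban_partners = 715.0    # about $715 million awarded to urban partners
    corridors = 66.0          # about $66 million awarded to Corridors states

    print(f"Urban partner share: {urban_partners / total_fy2007:.0%}")  # about 26 percent
    print(f"Corridors share: {corridors / total_fy2007:.0%}")           # about 2 percent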
In fiscal years 2004 through 2006, about $547 million was awarded through the remaining 9 grant programs to 50 states and the District of Columbia, in amounts ranging from about $17 million on average per year (Washington) to about $500,000 on average per year (New Hampshire). The top 10 states, in descending order of grant size, were Washington, Alaska, California, Utah, Kentucky, Nevada, Colorado, Mississippi, Texas, and Alabama. Two of the top 10 states had cities designated as urban partners (California and Washington) and 3 of the top 10 states were part of a Corridors project that received funding (California, Nevada, and Washington) in fiscal year 2007. In fiscal year 2007, about $315 million was awarded through the 9 grant programs. Of this amount, about $169 million was awarded to urban partners and about $66 million was awarded to Corridors states through grants ranging from about $61 million (California) to $1.1 million (Arkansas). To determine the extent to which the department communicated information about the selection criteria and funding for the UPA initiative, applied the criteria, and selected applicants for grant awards, we analyzed department publications, such as Federal Register notices on the UPA initiative and its underlying grant programs, and UPA initiative outreach materials, such as presentation slides and handouts. To understand how UPA applicants understood this information about the selection criteria and funding, we interviewed representatives of 14 of the 26 UPA applicants, including all 9 preliminary urban partners and 5 of the 17 unsuccessful UPA applicants, which we selected at random. Because the department did not track which applicants received particular outreach materials, we had to rely on these interviews with applicants to analyze the extent to which the department communicated information about the selection criteria and funding. Additionally, we spoke with representatives of five randomly selected cities that did not apply to the UPA initiative but had been identified by the department’s Volpe National Transportation Systems Center as having extreme or high levels of congestion. We also spoke with officials of three national professional transportation groups about their role, if any, in communicating information on the UPA initiative to potential applicants and their understanding of UPA selection criteria and funding. In determining good grants practices, we reviewed grants policies from the department and its agencies—particularly FHWA—as well as from other government agencies, such as the Departments of Energy, Commerce, and Labor. Furthermore, we reviewed several of our reports on competitive discretionary grants and grants guidance from the Office of Management and Budget and the Guide to Opportunities for Improving Grant Accountability developed by the Domestic Working Group Grant Accountability Project. In examining how the department applied UPA selection criteria, we reviewed and analyzed department documents such as the 26 UPA applications, the department’s instructions for UPA application reviewers, the results of the reviewers’ assessments of applications, and documentation of the reviews conducted by senior department officials. We compared the department’s review of UPA applications, particularly the criteria used, with the criteria listed in Federal Register notices and reviewed other materials made available to applicants, such as a UPA frequently asked questions document.
We also spoke with senior department officials about their application of the UPA selection criteria and their UPA funding decisions. Additionally, we spoke with 8 of the 11 department officials who served on the UPA application review team and with senior department officials about their reviews of UPA applications. To assess whether the department had authority to allocate grant funds to support the UPA initiative and give priority consideration in allocating individual grants to support projects that involve congestion pricing, we analyzed the department’s fiscal year 2006 appropriation, the fiscal year 2007 revised continuing resolution, the applicable authorizing legislation, and relevant case law and other legal authorities. We obtained the department’s legal position regarding its authority in these areas through formal and informal correspondence and through discussions with the department’s General Counsel and other senior department officials. We reviewed the department’s documentation of its technical evaluation team application review for several grant programs: the Intelligent Transportation Systems-Operational Testing to Mitigate Congestion; Interstate Maintenance Discretionary; Ferry Boat Discretionary; Public Lands Highway Discretionary; and Transportation, Community, and System Preservation programs. We also spoke with the department staff members who manage these grant programs to determine how they reviewed and ranked applications. We selected these grant programs because their statutes authorize the department to give priority or priority consideration to certain categories of applicants. To describe the steps the department is taking to ensure that award conditions are met and that results will be evaluated, we reviewed documents on the department’s actions to monitor UPA award conditions and plans to evaluate each urban partner’s projects to reduce congestion. Specifically, we reviewed urban partners’ term sheets (or memorandums of understanding) with the department and grant and cooperative agreements that list the conditions to receive federal funds. We also reviewed documents from Battelle Memorial Institute’s plans to evaluate UPAs. In addition, we interviewed officials from the department, Battelle Memorial Institute, and Miami about their plans for implementing and evaluating projects. To determine how the department applied the criteria and selected applicants for grant awards for Corridors, we reviewed all phase one and phase two applications, the September 2006 Federal Register notice, and the guidance given to review team members. In addition, we spoke with 10 of the 38 Corridors applicants. Of these, 5 applied to phase one and were not invited to apply to phase two, 1 applied to phase two but was not selected, and 4 were designated as corridors. To understand the department’s review of Corridors applications, we spoke with six of eight review team members. In addition, to describe the steps the department is taking to ensure that selection conditions are met and results are assessed, we reviewed Corridors development agreements, which state performance objectives and the conditions for receiving federal funds. We also spoke with the department officials responsible for managing five grant programs to understand how the program managers will monitor and evaluate Corridors projects.
To determine what actions the department has taken to support public- private partnerships to reduce highway congestion, we reviewed several documents, such as the department’s 2006 National Strategy to Reduce Congestion on America’s Transportation Network, documents on the department’s public-private partnership Web site, and our reports on public-private partnerships. We also interviewed department officials on actions the department has taken. To identify the previous recipients of funding from the 13 discretionary grant programs used to fund the UPA initiative and Corridors, we collected funding information for fiscal years 2004 through 2007 from FHWA and the Federal Transit Administration and compared the recipients of those funds and the amounts they received for fiscal years 2004 through 2006 with UPA and Corridors recipients and the amounts they received in 2007. We assessed the reliability of the data by interviewing knowledgeable department officials about data collection methods, particularly those pertaining to funds allocated to states for fiscal years 2004 through 2007 from the 13 grant programs. We determined that the data were sufficiently reliable for the purposes of this report. As part of our review of the department’s National Strategy to Reduce Congestion on America’s Transportation Network, we examined whether, for fiscal year 2007, the department had legal authority to allocate its lump-sum appropriations to 10 existing discretionary grant programs in order to “fund” the UPA initiative, and if so, whether the department could use tolling (specifically, congestion pricing) as a priority or priority consideration factor in deciding which applicants would be awarded grants under those programs. We conclude that because there were no statutory designations of funding for specific projects or programs in fiscal year 2007—no legally binding “earmarks” or other directives—the department had authority to allocate its lump-sum appropriations to its existing discretionary grant programs. The department’s appropriations were available to carry out the programs identified in each of the lump-sum appropriations, and the department had discretion in choosing how to allocate funds among those programs. We conclude further that, for nine of the 10 grant programs that were used to fund UPA projects, the department had authority to use congestion pricing as a discriminating priority or priority consideration factor to select among otherwise equally qualified grant applicants. Each of the grant statutes underlying these 9 programs either explicitly permitted consideration of tolling or afforded the department discretion to consider tolling because it was rationally related to program objectives. For 8 of these 9 grant programs, it is clear that the department then applied congestion pricing in this way, although in the Ferry Boat program, the department’s technical evaluation documentation initially suggested the department had improperly supplanted statutory priorities with tolling by allegedly awarding a $2 million grant to an urban partner that did not meet any statutory priority criteria (but had congestion pricing) while passing over a number of nonurban partners that met at least one priority criterion (but lacked congestion pricing). The technical evaluation was incorrect, however, and was not relied on by the Secretary in making the final grant decision. The Secretary relied on corrected information showing the urban partners in fact met statutory priorities. 
The Secretary therefore was within her discretion to apply congestion pricing as a discriminating factor. With respect to the remaining grant program—the Transportation, Community, and System Preservation (TCSP) program—we conclude that the department likely did not apply the statutory "priority consideration" factors in a manner consistent with the requirements of the statute. Because priority consideration does not entitle an applicant to grant selection, only to a bona fide and careful review, and because the department had discretion to use congestion pricing as a rational discriminating priority factor, the department's action may not have affected the outcome of its grant awards. Although the question is not free from doubt, we believe the statute allows the department to give priority consideration only to applicants that meet all five statutory criteria; the department, by contrast, believes an applicant need meet only one factor, and it rated applicants accordingly. Because the department used a one-factor rating approach, it is not possible to determine from the current record whether any of the applicants satisfied all five criteria and thus deserved a bona fide "hard look." Given that the department had ultimate discretion to select applicants that were not entitled to priority consideration, we do not recommend reevaluating the more than 500 project applications and possibly reawarding the fiscal year 2007 TCSP grants. We note also that all TCSP grant funding has been obligated. Instead, the department should ensure that all future TCSP program discretionary grant awards are carried out in accordance with the statute, that is, by giving priority consideration only to applicants that meet all five of the factors. The department received a number of lump-sum appropriations for fiscal year 2006. These included approximately $36 billion "for Federal-aid highways and highway safety construction programs" administered by FHWA, see Pub. L. No. 109-115, 119 Stat. 2396, 2402 (2005); $1.5 billion "[f]or payment of obligations incurred in carrying out the provisions of 49 U.S.C. 5305, 5307, 5308, 5309, 5310, 5311, 5317, 5320, 5335, 5339, and 5340 . . ." for bus and transit-related programs administered by the Federal Transit Administration, id., 119 Stat. at 2417; and approximately $5.8 million "[f]or necessary expenses of the Research and Innovative Technology Administration . . .," id., 119 Stat. at 2423. Although the fiscal year 2006 appropriations act itself made these sums available for a number of programs, the accompanying conference report contained designations—so-called "earmarks"—directing how substantial amounts of these appropriations should be spent. For fiscal year 2007, however, Congress enacted a $463 billion continuing resolution, giving federal agencies budget authority at the same levels as fiscal year 2006 but removing the nonstatutory earmarks: "[A]ny language specifying an earmark in a committee report or statement of managers accompanying an appropriations Act for fiscal year 2006 shall have no legal effect with respect to funds appropriated by this division." Revised Continuing Appropriations Resolution, 2007, Pub. L. No. 110-5, sec. 112 (Feb. 15, 2007). The department interpreted this language as permitting it to allocate the above lump sums to discretionary grant programs administered by FHWA, the Federal Transit Administration, and the Research and Innovative Technology Administration, in order to "fund" policy initiatives such as the UPA initiative. April 2008 DOT Letter at 2.
Accordingly, following passage of the fiscal year 2007 continuing resolution, the department announced in the Federal Register that it was soliciting applications by metropolitan areas to enter into UPAs with the department. 71 Fed. Reg. 71231 (Dec. 8, 2006). Under the UPA initiative, cities would agree to demonstrate innovative strategies that would reduce traffic congestion. In order to be designated as an urban partner, applicants had to demonstrate their ability to implement the 4Ts: tolling/congestion pricing, transit, technology (use of cutting-edge approaches to improve system performance), and telecommuting (expansion of telecommuting and flexible work schedules). The urban partner designation itself would not entitle a city to any grant funding; urban partners (as well as other cities) had to apply and qualify for grants under the department's existing discretionary grant programs. Designation as an urban partner, however, would entitle a grant applicant to "preferential treatment" as the department made its individual grant decisions. 71 Fed. Reg. at 71233-34. The department received 26 applications for the UPA initiative and well over 1,300 project applications, from urban partner applicants and others, for grants under various discretionary programs. After narrowing the 26 applicants to nine potential urban partners, the department selected Miami, Minneapolis, New York City, San Francisco, and Seattle as urban partners in August 2007. In the meantime, the department solicited applications under 13 grant programs and, in almost all instances, explained that it would give "priority consideration" to cities selected as urban partners in deciding which cities would be awarded such grants. As announced, the department then gave priority consideration to these five urban partners in awarding them approximately $848 million in grants for 94 different projects under 10 discretionary grant programs administered by FHWA, the Federal Transit Administration, and the Research and Innovative Technology Administration. The department also awarded $18 million in grants to preliminary urban partner San Diego under two of these programs. All told, the department awarded approximately 98 percent of the total $866 million in grant funding under these 10 programs to urban partners. A lump-sum appropriation is one that Congress intends to cover a number of programs, projects, or items. By contrast, a line-item or an earmarked appropriation refers to funds that Congress has designated for specific and particular purposes. See GAO, Principles of Federal Appropriations Law, Vol. II, 3d ed., GAO-06-382SP (Washington, D.C.: Feb. 2006), at 6-5. Agencies have considerable discretion in choosing how to allocate lump-sum appropriations to specific programs and activities. As the Supreme Court recognized in Lincoln v. Vigil, 508 U.S. 182 (1993), "as long as the agency allocates funds from a lump-sum appropriation to meet permissible statutory objectives, [§ 701(a)(2)] gives the courts no leave to intrude." Id. at 193. The Supreme Court in Lincoln found that the allocation of funds from a lump-sum appropriation is an example of an administrative decision generally committed to agency discretion, noting that "the very point of a lump-sum appropriation is to give an agency the capacity to adapt to changing circumstances and meet its statutory responsibilities" "in what [it] sees as the most effective or desirable way." 508 U.S. at 192 (citing, among other authorities, GAO, Principles of Federal Appropriations Law).
After the fiscal year 2007 continuing resolution removed the fiscal year 2006 report earmarks (and even before), the department's appropriations were available to carry out the discretionary grant programs identified in each of the lump-sum appropriations. The department had broad discretion in choosing how to allocate funds among those programs; for example, because the $1.5 billion lump-sum appropriation was available to fund various bus and other transit-related programs under a dozen different statutes, absent any other statutory restriction, the department could choose how much of that appropriation to allocate to each of the dozen programs. See Illinois Environmental Protection Agency v. United States Environmental Protection Agency, 947 F.2d 283, 291-92 (7th Cir. 1991) (EPA could set aside a portion of discretionary air pollution grant funds for its own air pollution control activities; EPA appropriation for "Abatement, Control and Compliance" was available for these activities). In carrying out each individual grant program, the department, of course, was required to comply with the restrictions and requirements of the underlying grant statutes and to award funding to grantees in accordance with the statutory provisions. We address below whether the department complied with these underlying provisions. As detailed in this report, the single most important factor in determining which cities would be designated as urban partners was whether a city had, or had committed to obtaining, authority to implement tolling (congestion pricing). Except for nonurban partner San Diego, the department then awarded all of the funding under the 10 programs to the five urban partners. Because congestion pricing was the most important factor in selecting the urban partners, and because urban partners were placed "at the head of the line" in receiving grant awards, concerns have been raised about whether congestion pricing was an inappropriate "superpriority" factor in making grant selections. Whether the department had authority to use congestion pricing in this manner depends on the specific terms of each grant statute. Duncan v. Walker, 533 U.S. 167, 172 (2001) ("Our task is to construe what Congress has enacted. We begin, as always, with the language of the statute."). But before we evaluate how the department applied the terms of each statute, it will be helpful to address the general scope of agency discretion in making grant awards. The scope of an administrative agency's authority to award federal assistance funding depends on the specific terms of the authorizing statutes. "'When Congress passes an Act empowering administrative agencies to carry on governmental activities, the power of those agencies is circumscribed by the authority granted.'" State Highway Comm'n of Missouri v. Volpe, 479 F.2d 1099, 1107 (8th Cir. 1973), quoting Stark v. Wickard, 321 U.S. 288, 309-10 (1944). In the case of statutory earmarks and formula grants, for example, agencies have little or no discretion in making awards. By contrast, agencies are given considerable flexibility in making so-called discretionary grants. See generally GAO, Principles of Federal Appropriations Law, Vol. I, 3d ed., GAO-04-261SP (Washington, D.C.: Jan. 2004), at 3-40 to 3-52. Some discretionary grant statutes require agencies to make awards on a competitive basis, while others do not. Agencies have greater (but not unlimited) flexibility in making noncompetitive grants.
In such instances, it is well settled that where an agency does not have sufficient appropriations to fund all applicants for a program and the legislation does not establish priorities or guidance, the agency may, within its discretion, establish selection priorities, classifications, and/or eligibility requirements, so long as it does so on a rational and consistent basis. Id. at 3-49 to -52. However, "in such a case the agency must, at a minimum, let the standard be generally known so as to assure that it is being applied consistently and so as to avoid both the reality and appearance of arbitrary denial of benefits to potential beneficiaries." Morton v. Ruiz, 415 U.S. 199, 231 (1974). In cases where the authorizing statute provides discretion, agencies are deemed to act within their authority as long as there is a rational basis for their decisions and their acts are not "arbitrary and capricious, an abuse of discretion, or otherwise not in accordance with law." 5 U.S.C. § 706(2)(A); see Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U.S. 402 (1971). Under this standard, courts look to whether the agency's decision was "based on a consideration of the relevant factors and whether there has been a clear error in judgment." Overton Park, 401 U.S. at 416. Although the factual inquiry is to be "searching and careful" and there must be a "thorough, probing, in-depth review," the ultimate standard of review "is a narrow one." Id. at 415-16. See, e.g., City of Grand Rapids v. Richardson, 429 F. Supp. 1087, 1094-95 (W.D. Mich. 1977) (denying request to halt grant awards based on allegations of vague and unpublicized eligibility criteria; while court was "tremendously sympathetic" to the losing applicant, "the Court cannot say that the agency committed error, abused its discretion or acted arbitrarily or capriciously within the meaning of . . . Overton Park," nor did the agency violate "elementary fairness"). One other introductory point is relevant before reviewing the department's use of congestion pricing in awarding grants under the nine programs. As a general rule, section 301 of Title 23 of the U.S. Code prohibits the imposition of tolls on all roads, highways, bridges, tunnels, and other transportation facilities constructed with federal funds. Thus, in our view, the department could not use any form of tolling, such as congestion pricing, as a selection factor for a grant—whether made under Title 23 (FHWA and Research and Innovative Technology Administration grants) or Title 49 (Federal Transit Administration grants)—if the tolling that was the basis of the grant's "priority" was prohibited by section 301. Congress has enacted a number of exceptions to the tolling ban, however, and the department has confirmed that all of the tolling (specifically, congestion pricing) activity supporting the grants at issue qualified under one or more of these exceptions. October 2008 DOT Letter at 15-16. Using congestion pricing as a priority selection factor, the department awarded grants under the Bus and Bus-Related Facilities Capital Investment Grants (Bus and Bus Facilities) program to all five urban partners—New York City, San Francisco, Minneapolis, Seattle, and Miami—and to San Diego, which was not designated as an urban partner. The total funding was $433 million, the single largest amount allocated to the 10 programs at issue and fully half of the $866 million awarded under all of these programs. Under the Bus and Bus Facilities program, the department "may make grants . . .
to assist State and local governmental authorities in financing capital projects to replace, rehabilitate and purchase buses and related equipment and to construct bus-related facilities . . .." 49 U.S.C. § 5309(b)(3). The only explicit direction in the statute concerning the selection of grantees is that the department "shall consider the age and condition of buses, bus fleets, related equipment, and bus-related facilities." 49 U.S.C. § 5309(m)(8) (emphasis added). The department is required only to "consider" this factor, however, not to give it priority. The statute's statement of findings and purposes, moreover, provides that the "[w]elfare and vitality of urban areas, the satisfactory movement of people and goods within those areas, and the effectiveness of programs aided by the United States Government are jeopardized by deteriorating or inadequate urban transportation service and facilities, the intensification of traffic congestion, and the lack of coordinated, comprehensive, and continuing development planning . . ..," and that it is "[i]n the interest of the United States . . . to foster the development and revitalization of public transportation systems that—(1) maximize the . . . efficient mobility of individuals; (2) minimize environmental impacts; and (3) minimize transportation-related fuel consumption . . .." Id. § 5301(a) (emphasis added); see October 2008 DOT Letter at 7. The department's judgment is that congestion pricing supports all three of these goals, and that one of these, mobility, will enhance the effectiveness of the bus transportation system. In the Pullman case, the court observed that the urban mass transportation statute at issue there left the department to "determine the procedure to be applied and the grants to be made. Within those limitations the statute is permissive, provides only the broadest conceptual guidelines for action, and requires highly developed expertise in the determination of the conditions under which the grant of assistance will fulfill the broad congressional purposes." Id. at 438. Pullman therefore upheld as within the department's broad discretion a grant condition requiring the local applicant to use competitive bidding for any subcontracts and to determine bid responsiveness. The court found this requirement "consistent with the statute's encouragement of local responsibility in urban mass transportation" and thus an appropriate exercise of the department's discretion. Id. Like the department's grant condition in Pullman, the department's congestion pricing grant condition here is consistent with the overarching objectives of the Bus statute—to provide a public transportation system that maximizes mobility and minimizes environmental impacts and transportation-related fuel consumption. While the Pullman statute arguably afforded even greater discretion (it allowed the department to impose "such terms and conditions as [it] may prescribe," while the current statute, 49 U.S.C. § 5309(c)(3), authorizes the department to set "terms [and] conditions . . . that the Secretary determines to be necessary or appropriate for the purposes of this section"), and while congestion pricing may not be strictly "necessary" to achieve the objectives of the bus statute—a well-functioning bus system does not "need" congestion pricing; congestion pricing "needs" a well-functioning bus system—we agree with the department that congestion pricing is at least "appropriate" to achieve the larger statutory purposes. See Town of Secaucus v. DOT, 889 F. Supp. 779, 789 (D.N.J.
1995) (upholding grant award for construction of strengthened foundation underlying mixed public transportation hub/commercial development because project would "enhance the effectiveness of a mass transportation project"); B-160204, Dec. 7, 1966 (GAO approval of grants to purchase city buses used occasionally for charter service in off-peak hours, because buses were needed for "an efficient and coordinated mass transportation system"); see generally State Highway Comm'n of Missouri v. Volpe, above, 479 F.2d at 1112 (a statute "should be construed according to its subject matter and the purpose for which it was enacted."). Other grant statutes with similarly broad language have been found to provide broad agency discretion. See, e.g., Illinois EPA v. USEPA, above, 947 F.2d at 291 (authority to make grants "upon such terms and conditions . . . necessary to carry out the purpose" of Clean Air Act provision should be read broadly; grant statute's purpose was to implement air quality standards within states); Mass. Dep't of Correction v. LEAA, 605 F.2d 21, 22, 27 (1st Cir. 1979) (LEAA) (discussed below) (authority to make grants "according to the criteria and on the terms and conditions the Administration determines consistent with this chapter" provided "large discretion"). Turning to the Ferry Boat Discretionary program, the statute directs the department to give priority "to those ferry systems, and public entities responsible for developing ferries, that—(1) provide critical access to areas not well-served by other modes of surface transportation; (2) carry the greatest number of passengers and vehicles; or (3) carry the greatest number of passengers in passenger-only service." 23 U.S.C. § 147(c). Although congestion pricing is not explicitly identified as a priority selection factor, the department believes it had discretion to apply congestion pricing as a discriminating or "tie-breaking" factor to select among otherwise equally qualified applicants, because congestion pricing is rationally related to the purposes of the Ferry Boat program. April 2008 DOT Letter at 6, 9. The department reasons that Congress's decision to give priority to ferry systems that carry the greatest number of passengers and vehicles reflects congressional support for increasing mobility and reducing congestion. Id. at 6, note 1; see also id. at 8-9 (tolling and congestion pricing are consistent with the department's core mission focusing on "mobility, safety, efficiency, convenience, and economic growth."); October 2008 DOT Letter at 9 (congestion pricing will shift passengers from cars to ferries, maximizing "social yield" on the federal government's investment). The department therefore used congestion pricing to select among applicants qualifying under one or more of the statutory priorities. We agree that the department had discretion to use congestion pricing as a discriminating factor among equally qualified applicants based on the rational connection between congestion pricing, mobility, and congestion. However, the department did not have authority to override the statute by, for example, rejecting applicants that lacked congestion pricing but met one or more statutory priorities in favor of urban partners that had congestion pricing but met no statutory priorities. The department's initial technical evaluation documentation for the fiscal year 2007 Ferry Boat grant awards suggested (incorrectly) that this occurred for one grant, Seattle's grant for the High-Speed, Ultra-Low Wake Passenger-Only Ferry project.
According to the evaluation documentation, that project application "[d]oes not meet statutory priority selection criteria under 23 U.S.C. 147(c) of serving large number of passengers and vehicles, or large number of passengers in passenger-only service." (Emphasis added.) The department's technical evaluator rated the application as "qualified" rather than "highly qualified," in part because it did not "meet statutory preference criteria." The department nevertheless awarded a $2 million grant for this project based on Seattle's urban partner status, passing over applications for at least 23 other projects from other jurisdictions that the department's technical evaluator determined met one or more of the statutory priorities. The initial technical evaluation was incorrect, however. As noted, the statute requires priority for "ferry systems, and public entities responsible for developing ferries" (emphasis added)—not for individual projects—that carry the greatest number of passengers and vehicles, and the ferry system in Seattle carries the greatest number of passengers of all ferry systems in the country. Thus all of the projects proposed by Seattle should have been rated as meeting at least one of the statutory priority criteria. October 2008 DOT Letter at 9-10. Yet the department evaluator apparently focused (incorrectly) on whether an individual project would carry the greatest number of passengers. October 2008 DOT Letter at 9. Equally important, the department told us that the Secretary did not rely on (or see) the technical evaluation forms for the Seattle High-Speed project (or any other project funded under the UPA initiative), but instead relied on other department officials' determinations that the recommended grants "comply or would comply with the statutory requirements of the FBD [Ferry Boat Discretionary] program," October 2008 DOT Letter at 10, which would have included the statutory priorities. Accordingly, despite the technical reviewer's error, the Secretary was within her discretion to apply congestion pricing as a discriminating factor and to select Seattle's High-Speed project for funding. See LEAA, above, 605 F.2d at 24-25 (upholding grant decision despite errors in technical evaluation process; "[w]hatever was the case at the panel review level, LEAA's final decision did not rely on the discredited factors. It relied exclusively on . . . . [W]e do not believe these beginning errors sufficiently infected the entire process . . . to warrant setting aside a decision entrusted to LEAA's discretion."). Using congestion pricing as a priority selection factor, the department awarded grants under the TCSP program totaling $50.4 million for projects by urban partners Minneapolis, San Francisco, and Seattle. Under the TCSP statute, enacted in 1998 as a "smart growth" initiative, the department may award grants and other assistance to support local strategies that integrate transportation projects with community and system preservation "livability" plans and practices. The statute specifies broad eligibility criteria: grants may be awarded for any project under the federal-aid highway, bus, or transit-related programs or for "any other activity relating to transportation, community, and system preservation that the Secretary determines to be appropriate." Pub. L. No. 109-59, sec. 1117(d), 23 U.S.C. § 101 note (Aug. 10, 2005).
The statute also requires the department to give “priority consideration” to applicants that meet specified criteria. See id. sec. 1117(e) (emphasis added). “Priority consideration” is not a defined term; Congress added it in 2005 when it converted TCSP from a pilot program to a permanent program. As originally enacted in 1998, the statute required the department to give selection “priority” to applicants meeting specified criteria. Thus, as under the Ferry Boat statute discussed above, the department did not have discretion under the TCSP pilot program to pass over applicants that met the “priority” criteria in favor of those that did not. In 2005, however, Congress added the word “consideration” after “priority.” The legislative history is silent on the reason for this change, but Congress must be presumed to have intended a meaning different from “priority,” as well as from mere “consider,” which it used in the Bus statute as discussed above. Reiter v. Sonotone Corp., 442 U.S. 330, 339 (1979) (in construing a statute, courts must give effect, if possible, to every word Congress used). Thus at least in the context of grant selection criteria, it is possible Congress intended a sort of hierarchy—consideration, priority consideration, and priority—with priority requiring the greatest adherence to named criteria. Relying on dictionary definitions of “priority” and “consideration,” since the terms are not defined in the statute, the department believes that “priority consideration” does not require the department to award funds to applicants that meet the criteria but “means only that the Secretary shall give the applicants that meet one or more of the criteria in section 1117(e) precedential or careful deliberation or thought before competing alternatives . . ..” October 2008 DOT Letter at 13 (emphasis added). We agree that the department likely is not required to select “priority consideration” candidates, because this was the meaning of “priority” before the statute was amended. But the department’s reading, as we understand it, is too narrow. The department appears to argue that it must simply give “careful deliberation” first to applicants that meet the criteria, then to “competing alternatives” applicants that do not meet the criteria, without giving substantive weight to the criteria themselves in selecting grant recipients. This process-oriented interpretation does not account for the fact that “priority” and “priority consideration” both appear in selection “criteria” provisions (see notes 16-17 above), not in selection process provisions. While a process-oriented interpretation has been recognized in a number of court decisions, it is used there as a specialized term—“a term of art in the jargon of federal employment law.” Pope v. FCC, 311 F.3d 1379, 1381 (Fed. Cir. 2002). Moreover, when the department analyzed the 1998 and 2005 statutes more contemporaneously, it read both “priority” and “priority consideration” as pertaining to selection criteria, not selection sequence, and indeed, according to the department, this was how the fiscal year 2007 evaluations for TCSP grants were performed. The department reviewers considered whether applicants met priority consideration and other factors. Final selections were made applying congestion pricing as a discriminating factor. In our view, the fact that Congress changed “priority” to “priority consideration” means the department was not bound to select applicants that met the “priority consideration” factors. 
Because Congress retained the phrase in the selection “criteria” provision, we also believe it relates to more than simply the timing of an applicant’s consideration. Congress singled out a class of candidates and mandated that the department give them special attention and a careful and bona fide review. Ultimately, however, the department had discretion to select applicants that were not in the “priority consideration” class, or to select among multiple applicants that were all in the class, based on other factors rationally connected to the objectives of the statute. Congestion pricing was a factor rationally related to the TCSP statute, in the department’s judgment, because the stated purpose of the statute is to support development and implementation of strategies to integrate transportation and community plans for addressing, among other things, improving the efficiency of the nation’s transportation system—which congestion pricing would help to achieve. October 2008 DOT Letter at 10-11; see Pub. L. No. 109-59, § 1117(b)(1). We agree that the department could use congestion pricing as a discriminating factor in selecting among otherwise qualified applicants. The remaining issue is whether, before the department applied congestion pricing as a discriminating factor, it followed the statute and gave applicants that qualified for priority consideration (if any) the bona fide “hard look” that Congress required. The answer to this is not straightforward. Department officials told us they treated applications as qualifying for priority consideration if a candidate met just one of the five statutory criteria, and looked no further. As discussed below, we believe the better view is that the statute requires all five to be met. Because of the department’s one-factor approach, however, it is not possible to determine from the current record whether any of the applicants met all five factors. Even if no urban partner had satisfied all five criteria but some other applicants had done so, the outcome might not have changed. Because the department had discretion, once it gave bona fide consideration to priority consideration applicants, to make selections based on congestion pricing, its announced key factor, it could well be that the same urban partners would have been selected. It is also possible that having taken a hard look at “true” priority consideration candidates, the department would have selected applicants that were not urban partners instead. Given that the department had ultimate discretion to select nonpriority-consideration applicants, and that all TCSP grant funding has been obligated, we do not recommend re-evaluating the more than 500 project applications and possibly reawarding the fiscal year 2007 TCSP grants. Instead, the department should ensure that all future TCSP discretionary grant awards are carried out in accordance with the statute, that is, by giving priority consideration only to applicants that meet all five of the factors, for the reasons we now address. Literally read, the statute requires that an applicant satisfy all five factors in order to qualify for priority consideration. The statute lists five factors, with the last two joined by the word “and.” The usual meaning of the word “and” is conjunctive—”and” means “and”—unless the context dictates otherwise. The presumption is that “and” is used in its ordinary sense. See, e.g., Reese Brothers v. United States, 447 F.3d 229 (3d Cir. 2006); Zorich v. 
Long Beach Fire Dep't and Ambulance Serv., 118 F.3d 682, 684 (9th Cir. 1997). Overall, we believe the TCSP program's context does not dictate otherwise. Several of the five factors have subparts, providing different ways in which an applicant can satisfy that factor, and these are separated by the word "or" rather than "and." This shows that when Congress intended to provide alternatives, it did so. On the other hand, the five factors appear to overlap to some extent, for example in referring to environmental protection, arguably indicating that just one factor must be met. At least initially, however, environmental protection was one of the key aims of the TCSP program, and thus requiring that it be addressed in more than one area may be warranted. In addition, the second factor refers to applicants that "[h]ave other policies to integrate transportation, community, and system preservation practices," arguably indicating this was intended as an alternative. In context, however, we read this as simply a way to describe one in a list of five factors requiring an applicant to have different types of plans, policies, and programs, to be expected in a grant program focusing on planning rather than on construction. The department states that because the five criteria are "stated in a wide-sweeping manner" and because of the "broad context of the entire TCSP statute" "with its inclusive purposes, wide eligibility requirements, and extensive criteria for priority consideration . . . it was logical to conclude that the applicants did not have to meet all five of the criteria . . .." October 2008 DOT Letter at 12-13. We believe these factors show the opposite. Given the extraordinary breadth of the eligibility requirements, it is logical that Congress would provide criteria for narrowing the pool, by specifying which applicants deserve special consideration. Only having to meet one of the criteria would undercut the very concept of "priority," because virtually all applicants could satisfy one of these broad requirements. The provision's legislative history also supports the interpretation that all factors must be met. As noted, when Congress made the TCSP program permanent in 2005, it amended what were then two "priority" criteria provisions (one for planning grants, another for implementation grants). It combined them into a single provision and changed the requirement from the department having to give "priority" to having to give only "priority consideration." At the same time, Congress amended the "purposes" provision from a list of five purposes joined by the word "and" to a list of roughly the same five purposes introduced by the term "one or more of the following." See Pub. L. No. 105-178, sec. 1221(c)(2) (1998 statute); Pub. L. No. 109-59, sec. 1117(b) (2005 statute). Despite all of these changes, Congress retained the "and" in the list of priority consideration factors. This suggests that "and" was intentional, not a drafting oversight. The fact that Congress was amending the statute to require only priority consideration rather than priority also supports this reading—once the department had this additional selection discretion, the provision would be virtually meaningless if only one of the five factors had to be met. The overall purpose of the program and the department's historic descriptions of it also support the reading that all five factors must be met.
The TCSP program was established as a counterpoint to traditional transportation grant programs that focus on new construction as a way to improve mobility, without necessarily considering the effect on surrounding communities and the environment. The TCSP program was intended to encourage communities to think more strategically and to integrate their transportation planning with community and regional economic planning. As the department has noted on many occasions, the TCSP program is intended to address the relationships among transportation, community, and system preservation plans and practices—the so-called land use/transportation link—and to encourage the "use [of] transportation to build livable communities." Giving priority to applicants that meet all five factors supports this purpose by rewarding those that integrate the greatest number of activities. The department itself has recognized this focus on integration in numerous descriptions of the TCSP program, literally underlining the "and" between the final two priority factors and emphasizing that it will give priority to applicants that meet all five program purposes (which roughly mirrored the priority factors). Finally, the conjunctive "and" should be interpreted as a disjunctive "or" only to avoid an incoherent reading of the statute or a reading that leads to an irrational result. Sosa v. Chase Manhattan Mortgage Corp., 348 F.3d 979, 983 (11th Cir. 2003); OfficeMax v. United States, 428 F.3d 583, 589 (6th Cir. 2005). Reading these factors as conjunctive would not lead to such a result, however, because an applicant can meet all five factors. The department demonstrated this in recent discussions by outlining how it believes the TCSP projects funded for Seattle, Minneapolis, and San Francisco could have met all of the five factors. Using congestion pricing as a priority selection factor, the department awarded grants totaling $20 million under the Value Pricing Pilot program to urban partners Minneapolis, New York, San Francisco, and Seattle. Under this program, the department is authorized to fund cooperative agreements with up to 15 state and local governments to "establish, maintain and monitor value pricing programs," Pub. L. No. 109-59, sec. 1604, 23 U.S.C. § 149 note, and Value Pricing Pilot projects may include tolling and other forms of congestion pricing on federally funded highways. We conclude that the department had authority to use congestion pricing as a priority selection factor. Because the very purpose of the program is to fund congestion pricing and tolling pilot projects, congestion pricing clearly was a permissible selection factor. Similarly, under the Intelligent Transportation Systems-Operational Testing to Mitigate Congestion program, the statute directs the department to give higher priority to projects that "(1) enhance mobility and productivity through improved traffic management . . . [and] toll collection . . .; (2) utilize interdisciplinary approaches to develop traffic management strategies and tools to address multiple impacts of congestion concurrently; . . . (3) address traffic management . . . toll collection [and] traveler information with goals of . . . reducing metropolitan congestion by not less than 5 percent by 2010 . . .." Pub. L. No. 109-59, sec. 5306(b) (emphasis added). Because the statute specifically requires the department to give higher priority to projects that enhance mobility through toll collection, and focuses on reducing congestion—a goal that, in the department's expert judgment, is facilitated and enhanced by congestion pricing—the department was clearly authorized to use congestion pricing as a priority selection factor in awarding these grants.
Using congestion pricing as a priority selection factor, the department awarded a total of $112.7 million in grants to urban partner New York City under the New Fixed Guideway Facilities (New Starts) program. The funding consisted of a series of individual grants, each for less than $25 million, thus qualifying as Very Small Starts grants. October 2008 DOT Letter at 4-5. Under the general New Starts program, the department may award grants "only if the Secretary, based on evaluations and considerations set forth in paragraph (3), determines that the project is . . . justified based on a comprehensive review of its mobility improvements, environmental benefits, cost effectiveness," see 49 U.S.C. § 5309(d)(2)(B). In making this determination, the department must evaluate, among other things, "(i) congestion relief; (ii) improved mobility; (iii) air pollution; (iv) noise pollution; (v) energy consumption," see 49 U.S.C. § 5309(d)(3)(D), as well as "other factors that the Secretary determines to be appropriate to carry out this subsection," 49 U.S.C. § 5309(d)(3)(K). The department has identified these "other factors" as including congestion management/pricing strategies. See 72 Fed. Reg. 17981, 17982 (Apr. 10, 2007); 72 Fed. Reg. 30907, 30913 (June 4, 2007). By contrast, grants made under the Very Small Starts program—a subset of the New Starts program—are not currently subject to these or any other specific selection criteria. The department therefore had discretion to use selection criteria rationally connected to achieving the purposes of the statute. The department states that it looked to the above New Starts selection criteria for guidance in exercising this discretion, October 2008 DOT Letter at 5, and in the department's judgment, congestion-pricing measures meet several of the New Starts selection factors. Congestion pricing reduces congestion by creating a price incentive for motorists to keep off the roads in the most congested times of day and to use public transit alternatives. Less congestion, in turn, improves mobility, reduces environmental pollution, and reduces fuel consumption. October 2008 DOT Letter at 5; see also Department of Transportation, Fight Gridlock Now, available at www.etc.dot.gov (accessed Nov. 14, 2008). We conclude that the department had authority to use congestion pricing as a priority factor in making these Very Small Starts grants. Based on the department's technical expertise in traffic management, we give deference to the department's position that congestion pricing supports and enhances the achievement of several of the selection factors for this program—reduced congestion, increased mobility, and reduced pollution. Using congestion pricing as a priority selection factor, the department awarded grants totaling $5.1 million under the Innovative Bridge Research and Deployment program to urban partner Seattle. Under this program, the department is authorized to award grants to "promote, demonstrate, evaluate, and document the application of innovative designs, materials, and construction methods in the construction, repair, and rehabilitation of bridges and other highway structures." 23 U.S.C. § 503(b)(1). The department is required to "select and approve" grants for this program "based on whether the project . . . meets the goals of the program described in paragraph (2)." Id. § 503(b)(3)(B). Paragraph (2) provides a nonexclusive list of the program's goals; it states that "[t]he goals of the program shall include" eight different objectives, id.
§ 503(b)(2), none of which is to reduce congestion or increase the use of congestion pricing. Although congestion pricing is not explicitly identified as a selection factor, the statute affords the department discretion to use congestion pricing as a selection factor provided congestion pricing is rationally related to the program's objectives. The statute's use of the term "include" indicates that the list of goals was nonexclusive. Puerto Rico Maritime Shipping Authority v. I.C.C., 645 F.2d 1102, 1112 (D.C. Cir. 1981); Adams v. Dole, 927 F.2d 771, 776 (4th Cir. 1991). The department suggests that the statutory objective in 23 U.S.C. § 503(b)(2)(B) to reduce "traffic congestion" supports the use of congestion pricing, see April 2008 Letter at 6, October 2008 Letter at 14, although this objective pertains only to congestion during bridge construction. In our view, an even stronger nexus is that congestion pricing would help achieve some of the policies of the national transportation system reflected in the federal aid highway program: "(i) national and interregional personal mobility (including personal mobility in rural and urban areas) and reduced congestion; (ii) flow of interstate . . . commerce and freight transportation . . .." Id. § 101(b)(3)(C). Using congestion pricing as a priority selection factor, the department awarded grants totaling $50 million under the Interstate Maintenance Discretionary grant program to urban partners Miami and Minneapolis. Under this program, federal set-aside funds are available "for resurfacing, restoring, rehabilitating, and reconstructing any route or portion thereof on the Interstate System . . . and any toll road on the Interstate System" not subject to certain agreements. 23 U.S.C. § 118(c)(1). The statute requires "priority consideration" of applicants proposing maintenance on high-cost (above $10 million), high-volume (urban high-volume, or rural high-truck-volume) routes. 23 U.S.C. § 118(c)(3). Although congestion pricing is not an explicit priority selection factor, we conclude that the department had discretion to use it to discriminate among otherwise equally qualified applicants. Miami qualified for statutory priority consideration because it proposed a high-cost project on a high-volume route. Although Minneapolis may not have qualified for priority consideration (its grant was only for $6.6 million), the department, consistent with the "priority consideration" analysis discussed above under the TCSP program, nonetheless could select Minneapolis based on congestion pricing if there was a rational nexus between congestion pricing and the Interstate Maintenance Discretionary program's objectives or the general federal aid highway objectives. As noted in discussing the Bridge program, this nexus exists with the federal aid highway goals of mobility and reduced congestion; the Interstate Maintenance program's statutory priority for high-volume routes also reflects a mobility focus that would be enhanced by congestion pricing. See April 2008 DOT Letter at 6. We therefore conclude the department had discretion to use congestion pricing as a factor. Using congestion pricing as a priority selection factor, the department awarded a $47.3 million grant under the Public Lands Highway Discretionary program to urban partner San Francisco.
Under this program, which improves access to and within public lands, the department must allocate a portion of annual authorized funding on the basis of state need “as determined by the Secretary,” and in making this allocation, the Secretary is required to “give preference” to projects that are “significantly impacted by Federal land” and “resource management activities that are proposed by a State that contains at least 3 percent of the total public land in the United States.” 23 U.S.C. § 202(b)(1)(A), (B) (emphasis added). San Francisco met these preference criteria because California has at least 3 percent of U.S. public lands and San Francisco’s proposed grant project (either on the Golden Gate Bridge or Doyle Drive) was deemed “significantly impacted” based on location, traffic volumes, and access to the public land. According to a department official, the department then applied congestion pricing to select San Francisco from among the “preferred” applicants. There is no explicit basis in the Public Lands grant statute to use congestion pricing as a discriminating factor, and the department acknowledges that there is not a specific congestion-reduction or mobility component “associated with” the Public Lands program. April 2008 DOT Letter at 6. Nonetheless, there was a rational nexus between congestion pricing and program objectives. As the department notes, congestion pricing will provide California with additional funds, “thereby leveraging the federal investment,” and congestion pricing “is reasonably expected to reduce emissions.” April 2008 DOT Letter at 6; October 2008 DOT Letter at 3-4. Furthermore, as with the other federal-aid highway program grants discussed above, congestion pricing supports the general program goals of mobility and reduced congestion. Thus we believe the department had discretion to use congestion pricing as a factor in awarding this grant. In addition to the contacts named above, Richard Burkard, David Hooper, Emily Larson, James Ratzenberger, Aron Szapiro, Donald Watson, Crystal Wesco, Carrie Wilks, and Courtney Williams made key contributions to this report.
As part of a broad congestion relief initiative, the Department of Transportation awarded about $848 million from 10 grant programs to five cities (Miami, Minneapolis, New York, San Francisco, and Seattle) in 2007 under the Urban Partnership Agreements (UPA) initiative. The UPA initiative is intended to demonstrate the feasibility and benefits of comprehensive, integrated, and innovative approaches to relieving congestion, including the use of tolling (congestion pricing), transit, technology, and telecommuting (4Ts). Congestion pricing involves charging drivers a fee that varies with the density of traffic. This report addresses congressional interest in (1) how well the department communicated UPA selection criteria, (2) whether it had discretion to allocate grant funds to UPA recipients and consider congestion pricing as a priority selection factor, and (3) how it is ensuring that UPA award conditions are met and results are assessed. GAO reviewed departmental documents, statutes, and case law and interviewed department officials and UPA applicants. Although GAO did not assess the merits of the UPA initiative's design, it has previously reported its support for integrated approaches to help reduce congestion. With minor exceptions, the department did a good job communicating the criteria it would use to select urban partners and how much funding was available, but it did not clearly communicate the relative priority of the criteria or extend the same outreach to all applicants. The department clearly communicated 10 of the 11 selection criteria--such as the political and technical feasibility of projects--that it used to decide which cities to select as urban partners, but it did not publicize which criteria, other than the 4Ts, were most important. In addition, over time, the department provided information indicating that about $852 million was available for these projects--a figure short of the actual $1.02 billion but sufficient to give applicants a rough idea of the program's size. Clearly communicating selection criteria, their relative priority, and the available funding allows applicants to make informed decisions when preparing their applications. GAO also found that the department told two semifinalists for urban partner designation how to revise their applications to make them more competitive, but did not do so for the other semifinalists. Both of these semifinalists were ultimately selected as urban partners. However, in the absence of government-wide or departmental guidance, it is unclear how to assess the appropriateness of this assistance. The department acted within its authority to allocate about $848 million of its fiscal year 2007 appropriation under 10 grant programs to five UPA cities. Typically these funds have been awarded through congressional direction (earmarks) to thousands of jurisdictions, but the department's 2007 funds were not subject to such directives. In addition, the department had authority to consider congestion pricing as a priority selection factor when awarding funds because the underlying statutes either explicitly permit it or afford the department discretion to do so. However, GAO found that the department likely did not comply with statutory requirements of the Transportation, Community, and System Preservation program when it failed to require applicants to meet all five statutory factors in order to receive "priority consideration," although this may not have affected the selection outcome.
The department has developed a framework to ensure that UPA award conditions are met and that the initiative's results will be evaluated. The department is monitoring urban partners' completion of award conditions, such as obtaining congestion-pricing authority, and has already acted when conditions have not been met, such as by taking away New York City's funding when the city could not obtain congestion-pricing authority from the state. In addition, the department plans to evaluate urban partners' strategies for, and results in, reducing congestion. The evaluation, to be conducted by Battelle Memorial Institute, is in its early stages.
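To make the congestion-pricing mechanism described above concrete, the following is a minimal sketch in Python of a toll schedule that rises with traffic density. All thresholds and dollar amounts are hypothetical illustrations, not rates drawn from any urban partner's actual pricing plan.

    # Illustrative congestion-pricing schedule: the toll rises with measured
    # traffic density. All thresholds and rates are hypothetical.
    def congestion_toll(vehicles_per_lane_mile: float) -> float:
        """Return a per-trip toll, in dollars, that increases with density."""
        if vehicles_per_lane_mile < 20:    # free-flow traffic
            return 0.50
        elif vehicles_per_lane_mile < 40:  # moderate congestion
            return 2.00
        else:                              # heavy congestion: peak rate
            return 4.50

    # A driver entering at a density of 35 vehicles per lane-mile pays $2.00.
    print(congestion_toll(35.0))

Because the fee falls when traffic is light, drivers with flexible schedules have a price incentive to shift trips away from peak periods, which is the congestion-reduction effect the UPA initiative is designed to demonstrate.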
About 15,000 SNFs provide care for patients who are temporarily or permanently unable to care for themselves, but who do not require the level of care furnished in an acute care hospital. SNFs provide a variety of services to patients, including nursing care; physical, occupational, respiratory, and speech therapy; and medical social services. Medicare covers these SNF services for Medicare beneficiaries who have recently been discharged from a stay in an acute care hospital lasting at least 3 days and who need daily skilled care. In addition, many of these facilities provide long-term care, mostly to Medicaid or private-paying patients. (Over 2,200 nursing homes are not SNFs and treat Medicaid but not Medicare patients.) A SNF must meet federal standards to participate in the Medicare or Medicaid program. About 85 percent of SNFs, or roughly 13,000, are freestanding, and three-quarters of these are for-profit entities. Nearly half of freestanding SNFs are owned by for-profit chains—corporations operating multiple facilities. Hospital-based SNFs, which number about 1,900, are usually part of not-for-profit acute care hospitals. (See table 1.) In 2000, Medicare SNF expenditures were $13 billion for services provided to 1.4 million Medicare patients. About two-thirds of these patients received care in freestanding SNFs and the remaining one-third received care in hospital-based SNFs. On any given day, about 10 percent of freestanding SNFs' residents were Medicare beneficiaries. Most other patients cared for in a freestanding SNF were longer-stay patients receiving nursing or long-term care, which generally is paid for by Medicaid or by the patients themselves. Medicare patients account for a larger share of patients in hospital-based SNFs compared to freestanding SNFs: about 56 percent of patients in hospital-based SNFs are Medicare patients. During most of the 1990s, Medicare spending for SNF care grew much more rapidly than spending for most other Medicare services. Under the cost-based reimbursement system then in effect, Medicare paid SNFs' costs for routine care (room and board and routine nursing) up to a specified limit, with higher limits applied to hospital-based SNFs than to freestanding SNFs. New providers were exempt from the routine-care cost limits for their first 4 years, and all providers could be granted exemptions to the limits by demonstrating that their higher costs were due to atypical patients or patterns of care. Unlike routine-care costs, payments for ancillary services such as therapy were not subject to cost limits, giving facilities few incentives to control those costs. The Congress, in the BBA, directed the Health Care Financing Administration (HCFA) to replace the cost-based reimbursement system with a PPS. The PPS is designed to give SNFs incentives to furnish only necessary services and to deliver those services efficiently by allowing facilities to retain any excess of Medicare payments over costs, but requiring them to absorb any costs that are greater than payments. Under the PPS, SNFs receive a per diem payment, adjusted for geographic differences in labor costs and for differences in the resource needs of patients. Adjustments for patients' resource needs are based on a patient classification system, resource utilization group (RUG), version III. This system assigns patients to 1 of 44 payment groups, or RUGs, based on their clinical condition, functional status, and use or expected use of certain types of services.
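The per diem arithmetic just described can be illustrated with a short Python sketch: a set of base rate components, a case-mix weight tied to the patient's RUG, and a wage index applied to the labor share of the rate. Every figure below is hypothetical, chosen only to show the structure of the adjustment; none is an actual PPS rate, weight, or wage index.

    # Sketch of a RUG- and wage-adjusted SNF per diem payment.
    # All rates, weights, and shares are hypothetical illustrations.
    NURSING_BASE = 130.00   # unadjusted nursing component, dollars per day
    THERAPY_BASE = 100.00   # unadjusted therapy component
    OTHER_BASE = 60.00      # non-case-mix component
    LABOR_SHARE = 0.76      # share of the rate assumed to be labor-related

    # Hypothetical case-mix weights for two of the 44 RUGs.
    RUG_WEIGHTS = {
        "rehabilitation_high": {"nursing": 1.10, "therapy": 1.80},
        "clinically_complex":  {"nursing": 1.30, "therapy": 0.40},
    }

    def per_diem(rug: str, wage_index: float) -> float:
        """Apply case-mix weights, then wage-adjust the labor share."""
        w = RUG_WEIGHTS[rug]
        unadjusted = (NURSING_BASE * w["nursing"]
                      + THERAPY_BASE * w["therapy"]
                      + OTHER_BASE)
        return unadjusted * (LABOR_SHARE * wage_index + (1 - LABOR_SHARE))

    # A high-rehabilitation patient in an area with a 1.10 wage index:
    print(round(per_diem("rehabilitation_high", 1.10), 2))  # 412.11

The structure shows why concerns about the classification system matter: the RUG weight is the only lever that differentiates payments by patient need, so a patient whose drug, laboratory, or imaging use exceeds what the assigned RUG assumes is paid the same as one whose use does not.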
With few exceptions, the payment covers all routine, therapy, and nursing costs incurred in treating patients. Although we have reported that total SNF PPS payments are likely to be adequate, we, MedPAC, and others have raised concerns that Medicare payments for certain types of patients may be too low because of inadequacies in the patient classification system. The patient classification system may not sufficiently reflect the greater resource needs of patients who require multiple kinds of health care services, such as drugs, laboratory services, and imaging. In response to BIPA's requirement that CMS report on alternatives to the RUG patient classification system by January 1, 2005, CMS has sponsored research to determine the feasibility of refinements as well as alternatives to the RUG system. After the implementation of the SNF PPS, some SNF representatives claimed that Medicare payments were inadequate and contributed to SNFs' poor financial performance. The Congress responded to provider concerns about the adequacy of SNF payments by making several temporary modifications to the PPS payment rates. Two of these changes, which applied to all Medicare SNF patients and represented about $1.4 billion in annual payments, expired on October 1, 2002:
an increase provided by the Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999 (BBRA) of 4 percent in the payment rate for all RUGs for fiscal years 2001 and 2002; and
an increase provided by BIPA of 16.66 percent in the nursing component of the payment rate for all RUGs for April through September 2001 and fiscal year 2002.
Two additional changes were enacted for selected types of Medicare patients. These changes, which affect 26 of the 44 RUGs and total about $1 billion per year, will remain in effect until CMS refines the patient classification system. CMS has announced that, although it is examining possible refinements, the system will not be changed for the 2003 payment year. The two payment changes are:
an increase provided by BBRA of 20 percent in the payment rate for 15 RUGs, including those for extensive services, special care, clinically complex care, and certain rehabilitation services; and
an increase provided by BIPA of 6.7 percent in the payment rate for 14 rehabilitation RUGs. This redirected the funds from the 3 rehabilitation RUGs that had received the 20 percent BBRA increase and applied these funds to all 14 rehabilitation RUGs. As a result of this redirection of funds, aggregate payments did not increase.
Prior to October 1, 2002, when two of these temporary payment increases expired, some SNF representatives stated that Medicare payments were adequate, although they said inadequate Medicaid payments compromised SNF financial viability. Following the expiration of these two temporary Medicare payment increases, provider organizations have again expressed concern that Medicare payments are no longer adequate. Other legislative provisions also affected Medicare payments to SNFs. A key provision was the 3-year phased transition to the PPS that the BBA established. Under this transition, which began in 1998, SNFs were paid a blend of facility-specific rates, based on each SNF's 1995 costs, and the PPS rate. BBRA allowed SNFs to receive the full PPS rate for cost reporting periods beginning on or after January 1, 2000.
Medicare-covered SNF use quadrupled from 1985 to 1997, rising from 10 SNF users per 1,000 Medicare fee-for-service beneficiaries to 41 users. A variety of factors contributed to this increase:

- In 1983, Medicare began paying hospitals a fixed rate per stay as an incentive to control costs. Hospitals responded as expected, cutting the length of hospital stays by transferring patients more quickly to SNFs and other post-acute care settings.
- In 1988, clarification of Medicare coverage guidelines allowed more beneficiaries to qualify for SNF services.
- From 1990 through 1996, the number of freestanding SNFs increased 49 percent, while hospital-based SNFs increased 82 percent. This growth in providers was encouraged by Medicare payment policies, which did not subject new SNFs to payment limits for their first 4 years of operation, and by the growth in payments. From 1990 through 1996, the average Medicare payment per SNF day of care climbed from $98 to $292.

During this period prior to the implementation of the SNF PPS, hospitals that had SNFs were particularly advantaged by transferring acute care patients sooner to their own SNFs. Transfers enabled these hospitals to reduce their acute care costs and increase their SNF revenues. To help ensure that Medicare did not overpay for services at the end of an acute episode of care, the Congress required HCFA to reduce hospital payments for patients transferred to post-acute care after a shorter-than-average hospital stay. In fiscal year 1999 HCFA implemented this policy for 10 types of patients with high use of post-acute care. The reduction in hospital payment for patients transferred to post-acute care lessened the incentive for hospitals to shorten the stays of these patients.

Following this change, SNF admissions per 1,000 hospital discharges decreased by 4 percent from 1996 to 2000. After adjusting for differences in patients' clinical conditions, the number of admissions was only 2 percent lower in 2000 than in 1996, indicating that part of the decline was due to reduced need for SNF care. However, if the 10 types of patients affected by the change in hospital payment for transfers are excluded, SNF admissions were the same in 2000 as in 1996. This suggests that some of the observed decline in SNF admissions may be due to the change in payment policy for hospital transfers.

Despite this observed decline in SNF admissions, the evidence does not suggest major problems with beneficiary access to SNF care. Since 1999, the Department of Health and Human Services' Office of Inspector General (OIG) has examined SNF access in several surveys of hospital discharge planners to determine whether they are able to place their Medicare patients who need care in SNFs. These surveys have found that planners can place most patients needing care. In the most recent OIG survey, about three-quarters of discharge planners reported that they were able to place all patients. However, some planners reported delays in placing patients with particular medical conditions or service needs, resulting in these patients continuing to receive care in the hospital rather than in a SNF.
Patients who took longer to place included those who needed intravenous (IV) antibiotics or expensive drugs, as well as those who were ventilator-dependent or who required dialysis or wound care.

In the first 2 full years under the PPS, Medicare payments more than covered Medicare costs for most freestanding SNFs, although their experiences varied widely. Many SNFs had very high Medicare margins, particularly in 2000, although in both years a minority of SNFs had negative Medicare margins—payments from Medicare did not cover their costs of serving Medicare patients. The median Medicare margin for SNFs that were owned by large nursing home chains and for those SNFs with high occupancy was much higher than the overall median Medicare margin for all SNFs. SNFs' Medicare margins were sufficiently high that, while Medicare's share of most SNFs' total patient days was relatively small, SNFs with higher Medicare margins generally had higher total margins, which reflect all SNF revenues and costs. For-profit facilities generally had higher total margins, as did facilities owned by large chains. SNFs with higher proportions of Medicaid patients generally had lower total margins.

For their first 2 years under PPS, most freestanding SNFs reported positive Medicare margins, meaning that their payments more than covered their costs. In 1999, the median facility had a Medicare margin exceeding 8 percent, and over one-tenth had margins of 30 percent or more. By 2000, the median Medicare margin for freestanding SNFs had risen to nearly 19 percent, and almost one-quarter of SNFs had Medicare margins of 30 percent or more. These positive margins resulted largely from SNFs reducing their costs. Although Medicare payments per day were 8 percent lower in 1999 than in 1997, for the median facility these lower payments were more than offset by lower costs. (See table 2.) From 1999 through 2000, costs again declined, although by a smaller amount. At the same time, payments increased, as the temporary increases authorized by the Congress began to be implemented.

Although most freestanding SNFs had positive Medicare margins, for a minority of SNFs, Medicare payments did not cover Medicare costs. In 1999, more than one-third of freestanding facilities reported negative Medicare margins, with one-tenth reporting margins that were –30 percent or less. By 2000, the share of facilities with negative margins had declined substantially: about 19 percent had margins that were less than zero, and 4 percent had margins of –30 percent or less.

Freestanding SNFs' Medicare margins differed by type of ownership. For-profit SNFs—particularly those associated with the largest chains—had positive Medicare margins in both 1999 and 2000 that were higher than those of both not-for-profit and government-operated SNFs. In 1999, median margins for not-for-profit and government-operated SNFs were negative, while in 2000 the median margins for all types of freestanding SNFs were positive. (See table 3.)

Medicare margins also varied with occupancy: SNFs with higher occupancy generally had higher margins. For example, in 1999, freestanding SNFs with occupancy rates of 90 percent or more had a median margin of 10.2 percent, while SNFs with occupancy rates below 70 percent had a median margin of 0.6 percent. (See table 4.) These results are not surprising, because higher occupancy reduces per diem costs, as fixed costs are spread across more patient days.
Despite the expiration of two temporary Medicare payment increases and the completed transition from payments based on a facility's own costs to PPS rates, SNFs' positive Medicare margins are likely to continue. MedPAC has estimated that freestanding SNFs' aggregate Medicare margin for 2002 would be 9.4 percent, excluding for the entire year the temporary payment increases that expired on October 1, 2002, and assuming that all facilities had completed the transition to the PPS.

Although most freestanding SNFs had positive Medicare margins, most had few Medicare patients and Medicare accounted for a small share of their revenue. In 1999, the median SNF had about six Medicare patients each day and received about 13 percent of its revenue from Medicare. By contrast, care for about two-thirds of patients was paid for by Medicaid, with the remainder generally paid for by the patients themselves. Despite Medicare's small share of most freestanding SNFs' patients, Medicare contributed substantially to these facilities' total margins, because Medicare payments were much higher than costs. In general, facilities with higher Medicare margins had higher total margins. Moreover, in 1999 and 2000, the median total margin would have been negative without Medicare; for example, in 2000, it would have been –1.2 percent. With Medicare, the actual median total margin was 1.8 percent. (See table 5.)

Medicaid's share of freestanding SNFs' residents influenced facilities' overall profitability. The larger Medicaid's share of a SNF's patient days, the smaller its total margin. (See table 6.) For-profit status and ownership by a chain also affected freestanding SNFs' total margins. For-profit facilities showed higher median total margins than not-for-profit and government-operated facilities, and large chains displayed the highest total margins. (See table 7.) Many other factors were also related to differences in freestanding SNFs' total margins. Factors contributing to high total margins included high occupancy and location in a rural area. Factors associated with low total margins were a high concentration of SNFs in a geographic area and location in a state with relatively high average wages for nursing staff.

In contrast to freestanding SNFs, hospital-based SNFs reported substantially negative Medicare margins after the introduction of the PPS. These low margins reflected a sizable decline in Medicare payments to hospital-based SNFs under the PPS as well as hospital-based SNFs' weak response to PPS incentives to reduce costs. Differences in services between hospital-based SNFs and freestanding SNFs could have resulted in higher costs for hospital-based SNFs that may not have been fully accounted for by the patient classification system in the PPS. The negative margins reported by hospital-based SNFs were also due in part to their high costs per day, which may reflect the historical allocation of hospitals' overhead to their SNF units.

In 1999, about 90 percent of all hospital-based SNFs reported Medicare costs exceeding Medicare payments, and the median hospital-based SNF had a Medicare margin of –53 percent. Only a small minority—about 10 percent—reported positive margins in 1999. These more successful hospital-based SNFs generally had high occupancy and did not rely heavily on Medicare payments. While insufficient data were available to compute margins for hospital-based SNFs for 2000, their margins likely improved with the payment increases but remained significantly negative.
The explanation of these low margins lies partly in the large decline in Medicare per diem payments that followed the shift to the PPS. Prior to the PPS, Medicare's payments to SNFs were based on each facility's own costs. This led to higher payments for hospital-based SNFs: a median of $378 per day for hospital-based SNFs in 1997 (see table 8), compared to a median of $264 for freestanding SNFs. In the first year of the PPS, hospital-based SNFs, unlike their freestanding counterparts, did not respond to the incentives in the PPS by reducing costs: compared to 1997, hospital-based SNFs' costs in 1999 were higher by $29 per day. By contrast, freestanding SNFs reduced costs by $49 per day. As a result, per diem costs continued to be substantially higher in hospital-based facilities than in freestanding SNFs—more than twice as high in 1999.

Some differences in costs between hospital-based and freestanding SNFs may also reflect differences in services in the two settings. Although patients in hospital-based SNFs had received less therapy as of their initial Medicare assessment than patients in freestanding SNFs (and slightly more as of their second assessment), they were more likely to receive other kinds of services, including IV medications, oxygen therapy, and transfusions. Hospital-based SNFs also gave significantly more nursing care, as measured by the ratio of nurses to patients. However, when patients' resource needs were measured by RUGs, patients in the two settings appeared nearly identical, suggesting that their service needs should be comparable. Consequently, the observed differences in the treatments that patients received may suggest that the RUGs do not fully measure differences in patients' conditions, which could account for part of the cost difference between hospital-based and freestanding SNFs. (See table 9.)

Part of the cost differential between hospital-based and freestanding SNFs may reflect accounting practices that increase reported costs for individual units of the hospital, such as SNFs, that had been paid on the basis of these reported costs. MedPAC believes that "a significant portion of the negative SNF margin reflects the allocation of hospital overhead costs to cost-reimbursed units." Prior to the SNF PPS, but after Medicare had implemented its per case PPS for acute inpatient hospital care, hospitals had an incentive to allocate administrative and capital costs to cost-reimbursed units, including SNFs, potentially raising reported costs for these units. (Capital as a share of Medicare per diem costs for hospital-based SNFs was about 96 percent higher than it was for freestanding SNFs in 1999.) Now that SNFs are paid a fixed rate, this incentive no longer exists—but neither is there an incentive to change historical cost allocations. In fact, capital as a share of Medicare costs for hospital-based SNFs has changed little since before PPS. In light of these accounting issues, reported costs of hospital-based SNFs, as well as margins calculated from these costs, should be treated cautiously.

Our analysis shows that the Medicare PPS generally pays SNFs adequately for the services that beneficiaries receive. Freestanding SNFs, which treat most Medicare SNF patients, generally received Medicare payments that exceeded their costs, often by considerable amounts. Most hospital-based SNFs reported costs that were greatly in excess of Medicare payments, but these hospital-based SNFs did not respond to the incentives in the PPS by reducing costs.
Some of their high costs may also be due to differences in patients that lead to higher resource use and that are not captured by the payment system. This problem could be addressed through refinements to the patient classification system, which CMS is currently studying.

Concerns about the financial conditions of some nursing homes have led to interest in using Medicare payment policy to offset current or anticipated financial difficulties. Whatever the merits of the case for aiding these facilities, an across-the-board increase in Medicare payments, such as the restoration of the expired temporary increases, would be particularly inefficient. An across-the-board increase would go to all providers of Medicare SNF care, even those for which Medicare's current payments already greatly exceed costs and which are not experiencing any financial difficulty. Nor could such an increase take account of differences in the adequacy of revenues from other payers, especially the state Medicaid programs. Moreover, over 2,000 nursing homes would not get any increase, because they do not participate in Medicare.

We received written comments on a draft of this report from CMS (see app. III) and oral comments from the American Association of Homes and Services for the Aging (AAHSA), which represents not-for-profit nursing facilities; the American Health Care Association (AHCA), which represents for-profit and not-for-profit nursing facilities; and the American Hospital Association (AHA), which represents hospitals. We incorporated their comments as appropriate.

Industry representatives agreed with our basic findings concerning SNF margins and several stated that it was a good report. CMS noted that our findings are consistent with a recent analysis conducted by MedPAC and other analyses of Medicare margins. CMS stated that the report supports its position that Medicare SNF payment rates are more than adequate to cover the cost of services provided to Medicare beneficiaries.

Representatives of AHCA and AAHSA who reviewed the draft report were concerned that the report does not address the issue of Medicaid payments being too low and the role they play in SNFs' financial viability. AHCA also objected to the prominence given to differences in payments by type of ownership, which they believe is a less important factor than occupancy and Medicare percentage of total SNF days in explaining SNF total margins. They also characterized as misleading the 30 percent annual growth rate in Medicare SNF expenditures that we reported for 1985 through 1997, stating that spending growth was driven by growth in utilization.

Both the AHCA and AHA representatives commented on our findings concerning the differences between freestanding and hospital-based SNFs. AHCA stated that any differences in services between hospital-based and freestanding SNFs are due to differences in patients' clinical conditions. The AHA representatives objected to our statement that the higher per diem costs of hospital-based SNFs could be partly due to the historical patterns of allocating overhead and other costs to the SNF. They stated that hospital cost accounting systems are constantly changing as hospitals add and drop services, and that the cost allocation issue in general is an artifact of the 15 years of operation of Medicare's inpatient PPS. According to AHA, hospitals were already operating at an efficient level when the SNF PPS was implemented and therefore had fewer excess costs to trim.
The AHA representatives also noted the shorter average length of stay of hospital-based SNFs and suggested that reporting costs on a per case basis, which reflects this shorter length of stay, rather than on a per diem basis, would show that hospital-based SNFs are less costly than freestanding SNFs.

Both AAHSA and AHA addressed possible changes to the PPS. The AAHSA representative stressed that Medicare payments are inadequate for patients who need medically complex nonrehabilitation ancillary services. She stated that the report should include language suggesting that the patient classification system should be changed to better reflect patient characteristics. Regarding our concluding observations, AHA inferred from our discussion of across-the-board increases in payment rates that we would favor targeted increases.

Regarding Medicaid payments, we accounted for them in our analysis of total margins but were unable to conduct a separate analysis of Medicaid payment adequacy for nursing homes because of the lack of suitable data. Isolating the impact of Medicaid payments was not possible because the Medicare cost reports do not report Medicaid payments or costs separately and because there is no source of Medicaid financial data collected on a consistent and ongoing basis across all facilities and states. Although occupancy and Medicare percentage of total SNF days were important factors in explaining differences in total margins, we found that after accounting for these factors, type of ownership remained a significant factor. We agree that utilization growth was a key factor in the rise in Medicare SNF spending, as this report states. Nonetheless, the rapid growth in spending, which we characterized correctly, provided the impetus for enactment of the SNF PPS.

With regard to differences between freestanding and hospital-based SNFs, our analyses reported in the draft show that the average RUG score, a measure of patients' clinical conditions, is nearly the same for both. We acknowledged, moreover, that the RUG system may not adequately account for differences across patients. As we stated in the draft, the higher costs of hospital-based SNFs are consistent with historical patterns of allocating overhead costs to SNF units. Whatever changes are occurring in hospital cost accounting, hospitals have had no incentive to change their historical cost allocations since the implementation of the SNF PPS. Moreover, we found no evidence to suggest that they had done so. For example, as we stated in the draft, capital costs expressed as a share of Medicare per diem costs have not changed. Although AHA representatives said that hospitals were more efficient and consequently had less flexibility to reduce costs after the implementation of the SNF PPS, they did not offer evidence to reconcile this view with hospital-based SNFs' higher costs. Hospitals' efficiency may have improved on the inpatient side as a result of the hospital PPS, but this would not necessarily improve the efficiency of hospital-based SNFs. Although we agree that hospital-based SNFs are less costly than freestanding SNFs on a per case basis, we did not present this measure because payments under the SNF PPS are not made on a per case basis.

Regarding possible changes in the PPS, we have previously acknowledged that the current patient classification system may not adequately recognize the greater resource needs of some patients. We support CMS's sponsorship of research to investigate improvements in the system.
Our analysis does not support an increase in Medicare payment rates. Instead, it would be preferable to refine the patient classification system underlying the SNF PPS, if necessary redistributing money to ensure that payments vary appropriately to reflect patient resource needs.

We are sending copies of this report to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-7114. Other GAO contacts and staff acknowledgments are listed in appendix IV.

This appendix describes the data and methods used to calculate margins for SNFs as well as the analyses of factors affecting margins. In general, a SNF margin is the difference between its payments and its costs, divided by payments; this ratio is expressed as a percentage. Using this definition, a total margin for a SNF is based on the difference between its total payments—derived from all payers—and its total costs. A Medicare margin for a SNF is based on the difference between its Medicare payments and its reported costs of serving Medicare patients. We report the median margins for freestanding and hospital-based SNFs, as well as for subgroups (for example, not-for-profit freestanding SNFs).

We computed Medicare and total margins for freestanding and hospital-based SNFs using methods similar to those developed by MedPAC and CMS's Office of the Actuary (OACT). Our primary data sources for SNF payments and costs used in calculating Medicare and total margins were the 1997 through 2000 Medicare SNF cost report files maintained by CMS. Our methods differed slightly from those used by MedPAC and OACT with respect to the definition of outliers and the application of an adjustment to Medicare costs.

Definition of outliers. The data for some SNFs must be excluded from the analysis because they result in outliers—implausibly high or low margins that suggest data error. To identify outliers, MedPAC uses a method based on percentiles. We instead used a standard statistical distribution (the lognormal) and removed SNFs whose margins were 3 or more standard deviations from the mean. We used this method because it improved our ability to detect and eliminate extreme values. (An illustrative sketch of the margin and outlier calculations appears below.)

Application of cost adjustment. MedPAC adjusted 1999 cost data for freestanding SNFs that, after the implementation of the SNF PPS, changed from certifying a portion of their beds for use by Medicare patients to certifying all or most of their beds for Medicare patients. This increase in certified beds resulted in the average cost per day reflecting the cost experience of a broader range of patients, many of whom may not have received skilled care. The adjustment made data more comparable over time by making costs for 1999 more similar to costs in 1997 and 1998, which were based on a larger share of patients needing a SNF level of care. To better ensure comparability of cost data across time, we made an adjustment similar to MedPAC's. Following MedPAC's approach, we identified freestanding SNFs for which this adjustment should be made by examining the change between years in the number of Medicare-certified beds. If the number of certified beds increased 50 percent or more, over 90 percent of the SNF's beds were Medicare-certified, and certain other conditions were met, MedPAC adjusted the SNF's routine costs.
We used similar criteria: if over 90 percent of a SNF's beds were Medicare-certified and if, in addition, this percentage had changed by more than 30 percentage points from 1998, we adjusted the SNF's routine costs. The adjustment raised or lowered routine costs based on the pre-PPS ratio of Medicare SNF routine costs per day to the entire facility's routine costs per day. We applied this adjustment in both 1999 and 2000. We found that our criteria identified about 10 percent of SNFs in 1999 and 18 percent in 2000 for which the adjustment was appropriate. Without the adjustment, the median Medicare margin for freestanding SNFs in 1999 would have been 10.2 percent rather than 8.4 percent, and the median Medicare margin in 2000 would have been 21.7 percent rather than 18.9 percent.

A more refined measure of the routine costs attributable to Medicare patients in all SNFs would reflect the difference in nursing needs between Medicare patients and other patients in the facility. To test the impact of such an adjustment, we used patient-specific data on services from CMS's nursing home minimum data set (MDS) to approximate the difference in nursing needs between Medicare patients and other patients for each SNF. Using this estimate, we adjusted the portion of total facility routine costs attributable to employee wages and benefits, which we used as a proxy for nursing costs. Based on this analysis, we estimate that using a more refined measure of Medicare costs would likely have reduced the median Medicare margin we reported for freestanding SNFs by between 0.6 and 1.6 percentage points. This adjustment would not affect SNF total costs or total margins.

The SNF cost report files we used to calculate 1999 and 2000 Medicare margins were the most current files as of May 2, 2002. The 1999 and 2000 files differed with respect to their completeness. The 1999 file was over 97 percent complete, while the 2000 file was 80 percent complete. After excluding freestanding SNFs that had outliers or lacked key data, including data necessary to adjust routine costs, 7,805 facilities were available for analysis in 1999 and 6,975 facilities in 2000. After exclusions, 1,506 hospital-based SNFs were available for analysis in 1999. The 2000 file contained very few records for hospital-based SNFs; as a result, we could not reliably calculate and report 2000 margins for these providers. Table 10 shows that the distribution of freestanding SNFs is similar in both years for type of ownership, location (urban versus rural), and census region. Compared to the 1999 file, the 2000 file has more SNFs that provided 4,000 or more days of care to Medicare beneficiaries and correspondingly fewer that provided less than 1,500 days of care.

To account quantitatively for factors that potentially influence SNF margins, we analyzed SNF margins using multiple regression. This statistical technique accounts for variation in margins by estimating the separate contribution of each of several explanatory factors included in the analysis, while controlling for the effect of all other included factors. For freestanding SNFs, we estimated separate regressions for Medicare margins and for total margins. Each regression included contextual factors, such as the number of SNFs in a geographic area and the state in which the SNF was located, and individual factors, such as each SNF's proportion of Medicaid patients, its occupancy rate, and whether it was for-profit. We report only results that are statistically significant.
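The margin arithmetic and the outlier screen described in this appendix can be illustrated with a short sketch using invented figures. One caveat: the text does not specify the exact quantity to which the lognormal screen was applied; because a margin cannot exceed 100 percent, the cost-to-payment ratio is positive whenever costs are, so the sketch assumes the screen tests the logarithm of that ratio against the 3-standard-deviation cutoff.

```python
import math
import random
import statistics

random.seed(0)

# Invented data: payments and costs for 50 SNFs, plus one facility
# with implausibly high reported costs to illustrate the screen.
facilities = {}
for i in range(50):
    payments = random.uniform(500_000, 3_000_000)
    costs = payments * random.uniform(0.7, 1.2)
    facilities[f"SNF{i:02d}"] = (payments, costs)
facilities["SNF99"] = (600_000.0, 6_000_000.0)  # likely data error


def margin(payments: float, costs: float) -> float:
    """Margin as a percentage: (payments - costs) / payments."""
    return 100.0 * (payments - costs) / payments


# Screen on the log of the cost-to-payment ratio, which is positive
# whenever costs are positive (an assumption; see the lead-in text).
log_ratio = {k: math.log(c / p) for k, (p, c) in facilities.items()}
mu = statistics.mean(log_ratio.values())
sd = statistics.stdev(log_ratio.values())
kept = [k for k, v in log_ratio.items() if abs(v - mu) < 3 * sd]

print("Excluded as outliers:", sorted(set(facilities) - set(kept)))
print("Median margin of retained SNFs: "
      f"{statistics.median(margin(*facilities[k]) for k in kept):.1f}%")
```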
Contributors to this report were Dae Park and Eric Wedum.

Skilled Nursing Facilities: Available Data Show Average Nursing Staff Time Changed Little after Medicare Payment Increase. GAO-03-176. Washington, D.C.: November 13, 2002.

Skilled Nursing Facilities: Providers Have Responded to Medicare Payment System by Changing Practices. GAO-02-841. Washington, D.C.: August 23, 2002.

Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002.

Skilled Nursing Facilities: Services Excluded from Medicare's Daily Rate Need to be Reevaluated. GAO-01-816. Washington, D.C.: August 22, 2001.

Nursing Homes: Aggregate Medicare Payments Are Adequate Despite Bankruptcies. GAO/T-HEHS-00-192. Washington, D.C.: September 5, 2000.

Skilled Nursing Facilities: Medicare Payment Changes Require Provider Adjustments but Maintain Access. GAO/HEHS-00-23. Washington, D.C.: December 14, 1999.

Skilled Nursing Facilities: Medicare Payments Need to Better Account for Nontherapy Ancillary Cost Variation. GAO/HEHS-99-185. Washington, D.C.: September 30, 1999.
This report addresses (1) the relationship between Medicare skilled nursing facility (SNF) payments and the costs of treating Medicare patients in freestanding SNFs, as well as the effect of Medicare SNF payments on the financial condition of these facilities, and (2) the relationship between Medicare SNF payments and the costs of treating patients in hospital-based SNFs, as well as the factors that may account for cost differences between hospital-based and freestanding SNFs.

Under the prospective payment system (PPS), most freestanding SNFs' Medicare payments substantially exceeded the costs of caring for Medicare patients, contributing to facilities' overall positive financial condition. In 1999, the first full year under the PPS, the median freestanding SNF Medicare margin—a measure that compares Medicare payments with Medicare costs—was slightly over 8 percent. By 2000, when the temporary payment increases authorized by the Congress started to take effect, the median Medicare margin had risen to almost 19 percent. However, nearly one-quarter of SNFs in 2000 had Medicare margins exceeding 30 percent, while about one-fifth had negative Medicare margins; that is, the payments they received from Medicare did not cover their costs of providing care. Medicare margins were higher for freestanding SNFs affiliated with large, for-profit nursing home chains and for those with high occupancy. The median SNF total margin—which reflects total revenues and costs across all patients—was 1.3 percent in 1999 and 1.8 percent in 2000. A SNF's total margin tended to be higher when its Medicare margin was higher despite the fact that, in most SNFs, Medicare's share of patient days was small. The total margins for freestanding SNFs tended to be lower when a higher proportion of a SNF's patients had their care paid for by Medicaid.

Unlike freestanding SNFs, about 90 percent of hospital-based SNFs reported significantly negative Medicare margins after Medicare's new SNF payment system was launched. The median hospital-based SNF Medicare margin was –53 percent in 1999. Under the PPS, per diem payments to hospital-based SNFs dropped considerably, reflecting the change from payments based on a facility's own costs to fixed payments based on average costs for all facilities. At the same time, hospital-based SNFs' reported per diem costs rose from 1997 through 1999. This is in contrast to the experience of freestanding SNFs, which had lower per diem Medicare costs than hospital-based SNFs prior to the PPS and reduced their costs further after the shift to the PPS. The higher Medicare costs reported by hospital-based SNFs may stem in part from differences in services provided to patients. The higher costs may also reflect the historical allocation of overhead costs to the SNF from the hospital, an accounting practice that, while consistent with the payment incentives under the prior cost-based reimbursement system, means that hospital-based SNFs' reported costs should be treated cautiously.
Hospitals, which account for over 40 percent of U.S. health care expenditures, are changing rapidly and dramatically. Growing costs, advancing technology, and an aging population are driving these changes. As health care costs have increased, both public health financing programs, such as Medicare and Medicaid, and private health insurers have fundamentally reformed their methods for paying for and managing hospital-provided health care. Such reforms have not generally been implemented, however, in hospitals operated directly by the federal government, including those operated by VA.

Hospital care accounts for the largest component of national health care expenditures. In 1995, hospitals accounted for 40 percent or about $441 billion of the nation’s estimated $1.1 trillion in health care expenditures. The next largest component of health care expenditures, physician services, accounted for about 19 percent. (See fig. 1.1.)

The American Hospital Association (AHA) groups hospitals into two primary categories—community and noncommunity. Community hospitals include all nonfederal, short-term general, and other special hospitals whose facilities and services are available to the public. Noncommunity hospitals include federal hospitals, long-term hospitals, hospital units of institutions, psychiatric hospitals, hospitals for tuberculosis and other respiratory diseases, chronic disease hospitals, institutions for the mentally retarded, and alcoholism and chemical dependency hospitals. For 1995, AHA reported that it had 6,291 hospitals registered in the United States, including 5,194 community and 1,097 noncommunity hospitals. The community hospitals included 3,092 nongovernment not-for-profit, 752 investor-owned for-profit, and 1,350 state- and local government-owned hospitals. This report focuses primarily on community hospitals when discussing non-VA hospitals. Such hospitals accounted for 873,000 of the nation’s 1,081,000 beds and almost 31 million of the approximately 33 million hospital admissions in 1995.

VA hospitals account for 16 percent of all noncommunity hospitals. In fiscal year 1995, VA operated 173 of the 1,097 noncommunity hospitals, with an average of 50,787 operating beds, and admitted 844,626 patients. In addition to hospitals, the VA health care system included 375 outpatient clinics, 130 nursing homes, and 39 domiciliaries in 1995. For fiscal year 1995, VA obligated about $16.5 billion to maintain and operate its facilities and, on a limited basis, contract for care from non-VA providers. Over $8.4 billion (51 percent) of its obligations were for operating VA hospitals (see fig. 1.2).

VA hospitals differ from community hospitals in the following ways:

- Whom they can and do serve. Community hospitals generally have no restrictions on whom they can serve. A hospital’s target population is limited primarily by the facility’s capabilities and business decisions. In contrast, VA hospitals have historically been limited to treating mainly veterans—adult males. Recent eligibility and contracting reform legislation, as discussed below, has broadened the types of patients VA hospitals may treat.
- Whom they can buy care from and sell care to. Community hospitals have few restrictions on their ability to contract to buy or sell patient care or nonpatient care services. Historically, VA facilities have been limited primarily to sharing health care services with other federal hospitals and with their medical school affiliates. Recent legislation has removed most restrictions on VA contracting.
- Who pays for the care provided. Most community hospital revenue comes from payments for patients sponsored by public payers (primarily Medicare and Medicaid) and private health insurers. Small portions also come directly from patients and state and local governments as operating subsidies. VA hospitals receive funding through an annual appropriation process. VA receives virtually no funding through Medicare and Medicaid and before August 1997 returned recoveries from private health insurance (other than a portion needed to cover the cost of operating the recovery program) to the general fund in the Department of the Treasury. Although VA facilities relied almost entirely on appropriated funds, they were allowed to retain certain payments resulting from sale of health care resources to the Department of Defense (DOD), other federal facilities, and certain other providers.

In addition, although most VA hospitals, like their community counterparts, focus on short-term acute care services, other VA hospitals focus more on psychiatric and long-term care services. Under the AHA definitions, hospitals that primarily focus on psychiatric care, long-term care, or specialty services, even if they also provide some short-term care, are considered noncommunity hospitals. Systemwide, over 50 percent of VA’s 50,787 operating beds in fiscal year 1995 were devoted to long-term care (intermediate medicine), specialized services (rehabilitation of the blind, treatment of spinal cord injuries, and rehabilitation medicine), or psychiatric care (see fig. 1.3). About 18 percent of VA hospitals provide mainly psychiatric care.

In administering the veterans’ health benefits program authorized under title 38 of the U.S. Code, some of VA’s responsibilities are like those of the Health Care Financing Administration (HCFA) in administering Medicare benefits and like those of private health insurance companies in administering health insurance policies. For example, VA is responsible for determining under the statute (1) which benefits veterans are eligible to receive, (2) whether and how much veterans must contribute toward the cost of their care, and (3) where veterans may obtain covered services (in other words, whether they must use VA-operated facilities or may obtain needed services from other providers at VA expense). Similarly, VA, like HCFA and private insurers, is responsible for ensuring that the health care benefits provided to its beneficiaries—veterans—are (1) medically necessary and (2) provided in the most appropriate care setting, whether that is a hospital, nursing home, or outpatient clinic.

In operating a health care delivery program, VA’s role is like that of major private-sector health care delivery networks, such as those operated by Kaiser Permanente. For example, VA strives to ensure that its facilities (1) provide high-quality care, (2) are used to optimum capacity, (3) are located where they are accessible to their target population, (4) provide good customer service, (5) offer potential patients services and amenities comparable with those of competing facilities, and (6) operate effective billing and collection systems.

Historically, VA health benefits were focused on hospital care; outpatient care for most veterans was limited to coverage of services that would prepare the veterans for hospitalization, obviate the need for hospitalization, or provide treatments needed following a hospitalization.
The Veterans’ Health Care Eligibility Reform Act of 1996, enacted in October 1996 (P.L. 104-262), eliminated the obviate-the-need provision and made all veterans eligible for comprehensive outpatient care.

Any person who served on active duty in the uniformed services for the minimum amount of time specified by law and who was discharged, released, or retired under other than dishonorable conditions is eligible for some VA health care benefits. The amount of required active-duty service varies depending on when the person entered the military, and an eligible veteran’s health care benefits depend on factors such as the presence and extent of a service-connected disability, income, and period or conditions of military service.

Although all veterans meeting the above basic requirements were eligible for hospital, nursing home, and at least some outpatient care, before October 1996, 38 U.S.C. 1710 established a complex priority system—based on factors such as the presence and extent of any service-connected disability, the incomes of veterans with nonservice-connected disabilities, and the purpose of care needed—to determine which services were covered and which veterans received care within available resources. All veterans’ health care benefits included medically necessary hospital and nursing home care, but certain veterans, referred to as category A or mandatory-care category veterans, had the highest priority for receiving care. More specifically, the old law required VA to provide hospital care, and, if space and resources were available, allowed VA to provide nursing home care to veterans who

- had service-connected disabilities,
- were discharged from the military for disabilities incurred or aggravated in the line of duty,
- were former prisoners of war,
- were exposed to certain toxic substances or ionizing radiation,
- served during the Mexican Border Period or World War I,
- received disability compensation,
- received nonservice-connected disability pension benefits, or
- had incomes below the means test threshold (as of January 1996, $21,001 for a single veteran or $25,204 for a veteran with one dependent, plus $1,404 for each additional dependent; see the sketch below).

For higher income veterans who did not qualify under these conditions, VA could provide hospital and nursing home care if space and resources were available. These veterans, however, known as category C or discretionary care category veterans, had to pay a part of the cost of the care they received.

Under the old law, VA provided three basic levels of outpatient care benefits: comprehensive care, which included all services needed to treat any condition; service-connected care, which was limited to treating conditions related to a service-connected disability; and hospital-related care, which provided only the outpatient services needed to (1) prepare for a hospital admission, (2) obviate the need for a hospital admission, or (3) complete treatment begun during a hospital stay. Separate mandatory and discretionary care categories applied to outpatient care. Figure 1.4 summarizes mandatory and discretionary VA health benefits under the old law.

The Veterans’ Health Care Eligibility Reform Act of 1996 (P.L. 104-262) eliminated the criterion to obviate the need for hospital care and expanded eligibility for comprehensive outpatient services to all veterans.
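The means test threshold cited in the list above follows a simple schedule, captured in the short sketch below. The figures are the January 1996 amounts quoted in the text; the function name is ours.

```python
def means_test_threshold(dependents: int) -> int:
    """Income threshold (January 1996 dollars) below which a veteran
    with nonservice-connected disabilities fell into the mandatory
    care category. Figures are those quoted in the text."""
    if dependents == 0:
        return 21_001                         # single veteran
    return 25_204 + 1_404 * (dependents - 1)  # one dependent, plus add-ons


assert means_test_threshold(0) == 21_001
assert means_test_threshold(1) == 25_204
assert means_test_threshold(3) == 28_012      # 25,204 + 2 x 1,404
```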
In addition to expanding eligibility for outpatient care, the act provides the following:

- It expressly states that the availability of health care services for veterans in the mandatory care category is limited by the amounts appropriated in advance by the Congress. The act authorized appropriations of $17.25 billion for fiscal year 1997 and $17.9 billion for fiscal year 1998.
- It removes about 1.2 million veterans with noncompensable service-connected disabilities from the mandatory care category.
- It requires VA to establish an enrollment process for managing demand within available resources. The priorities for enrollment are (1) veterans with service-connected disabilities rated at 50 percent or higher; (2) veterans with service-connected disabilities rated at 30 or 40 percent; (3) former prisoners of war and veterans with service-connected disabilities rated at 10 or 20 percent; (4) catastrophically disabled veterans and veterans receiving increased nonservice-connected disability pensions because they are housebound or need the aid and attendance of another person to accomplish the activities of daily life; (5) veterans unable to defray the cost of medical care; (6) all other veterans in the so-called “core” group, including veterans of World War I and veterans with a priority for care based on presumed environmental exposure; and (7) all other veterans. VA may create additional subdivisions within the enrollment groups.

The enrollment process will be implemented over a 2-year period during which VA facilities may continue to treat veterans regardless of their enrollment status. After September 30, 1998, however, veterans generally will need to be enrolled to receive VA care. Enrollment will be limited to the number of veterans VA can take care of within its available resources.

One of the most significant differences between the VA health care system and the private sector has been the limited ability of VA to purchase health care services from and sell such services to the private sector. The Veterans’ Health Care Eligibility Reform Act of 1996, however, largely eliminated these differences. Before October 1996, veterans were generally limited to obtaining health care services from VA-operated facilities, with the following three main exceptions:

- VA-operated nursing home and domiciliary care was augmented by contracts with community nursing homes and by per diem payments for veterans in state-operated veterans’ homes.
- VA paid private-sector physicians and other health care providers to extend care to certain veterans when the services needed were unavailable in the VA system or when the veterans lived too far from a VA facility (commonly referred to as fee-basis care). VA limited use of fee-basis care mainly to veterans with service-connected disabilities.
- Veterans could obtain emergency hospitalization from any hospital and then be transferred to a VA hospital when their conditions stabilized.

In addition, veterans being treated in VA facilities could be provided specific, scarce medical resources from other public and private providers through sharing agreements and contracts between VA and non-VA providers. Similarly, VA was generally not permitted to sell hospital and other health care services but could enter sharing agreements to obtain or provide health care services to DOD and other federal hospitals and specialized medical resources to federal and nonfederal hospitals, clinics, and medical schools. VA could not, however, sell health care services directly to veterans or others.
The Veterans’ Health Care Eligibility Reform Act of 1996 expanded the types of providers as well as the types of services for which VA may contract. In addition, it simplified the procedures for complying with federal procurement processes when contracting with commercial providers. Finally, the act eliminated the ban on VA contracting for patient care (which had been suspended through 1999). Following are the contracting provisions under the new law:

- VA may sell services to nonveterans but only if veterans will receive priority for care under such an arrangement and the arrangement is needed to maintain an acceptable level and quality of service or will result in improved services for veterans.
- VA may acquire—without regard to laws or regulations requiring use of competitive procedures—resources in instances when such resources are to be obtained from a VA-affiliated institution, including medical practice groups, blood banks, organ banks, or research centers. When the health care resource is to be obtained from commercial sources, it is to be obtained in accordance with simplified VA-developed procurement procedures that would permit all responsible sources to compete for the resource being obtained.
- VA may contract with outside entities for converting VA activities to private activities. Previously, Section 8110(c) of title 38 of the U.S. Code prohibited contracting out of direct patient care activities or activities “incident to” direct care and permitted contracting out other activities, such as laundry and cleaning services, only on the basis of a VA-conducted cost-comparison study. This section was repealed but the VA must still report annually on performance by contractor personnel of work previously performed by VA employees.

Unlike private-sector hospitals, VA hospitals do not depend financially on public and private health insurance. As a result, VA hospitals are not at financial risk for inappropriate admissions, unnecessary days of care, and treatment of ineligible beneficiaries.

Private-sector hospitals generally depend on payments from public and private insurance programs and their patients for their income. Private-sector hospitals are facing increased pressures from both private insurers and public health benefits programs, such as Medicare and Medicaid, to eliminate inappropriate admissions and reduce hospital lengths of stay. For example, private health insurers increasingly use preadmission screening to ensure the medical necessity of hospital admissions and set limits on approved lengths of stay. Although nothing prevents private-sector hospitals from admitting patients without an insurer’s authorization, the hospital and the patient, rather than the insurer, become financially responsible for the care. Similarly, the Medicare prospective payment system and utilization reviews provide financial incentives for hospitals to provide services in the most appropriate setting and to discharge patients as soon as their medical conditions allow. The financial incentive is particularly strong for hospital care financed under Medicare because the hospital is, in general, not allowed to charge beneficiaries for services determined to be medically unnecessary or inappropriate.

Historically, VA hospitals and veteran patients have not faced these same risks. VA hospitals do not face the same payment limitations and external utilization reviews that private-sector hospitals face.
And, although VA hospitals can recover funds from veterans’ private health insurance, failure to comply with private health insurers’ preadmission screening and length-of-stay requirements has little direct financial effect on VA hospitals. This is because (1) before 1994 VA facilities were funded primarily on the basis of their inpatient workload and (2) until August 1997 medical care cost recoveries were returned to the Department of the Treasury.

During the past 5 years, we completed a series of reviews focusing on the many challenges facing the VA health care system and the potential role of VA in health care reforms. This report, prepared at the request of the Chairman, Senate Committee on Veterans’ Affairs, summarizes and expands on that body of work to identify major issues concerning the future of VA hospitals. Specifically, it discusses the evolution of hospital care during the 20th century, factors contributing to the declining demand for hospital care in community and VA hospitals, the extent to which excess capacity exists in community and VA hospitals, and actions taken by community and VA hospitals to increase efficiency and compete for patients.

In developing information on the evolution of hospital care, we relied on the legislative history of the veterans’ health care provisions of title 38 of the U.S. Code and articles and reports prepared by or for the Brookings Institution (1934); House Committee on Veterans’ Affairs (1967); National Academy of Sciences (1977); VA’s Commission on the Future Structure of Veterans Health Care; Congressional Research Service; Twentieth Century Fund; and VA. Information on the evolution of community hospitals came primarily from our 1985 report, Constraining National Health Care Expenditures: Achieving Quality Care at an Affordable Cost (GAO/HRD-85-105, Sept. 30, 1985); the Source Book of Health Insurance Data, 1995; AHA’s Hospital Statistics; and HCFA’s Data Compendiums.

To identify factors contributing to the declining demand for care in community and VA hospitals, we

- interviewed policy analysts from associations and think tanks, including the American Medical Association (AMA), AHA, and the CATO Institute;
- obtained the views of representatives from the major veterans service organizations;
- reviewed many studies and reports on hospitals, including those prepared by the Pew Health Professions Commission, Prospective Payment Assessment Commission, Physician Payment Review Commission, HIAA, Hay Group, National Committee for Quality Health Care, Congressional Research Service, the former Office of Technology Assessment, HCFA, and VA;
- reviewed our prior reports and testimonies on VA health care, Medicare, and health care cost containment; and
- reviewed reports and studies on VA health care prepared by the VA Office of Inspector General and others.

To estimate the amount of excess bed capacity in community and VA hospitals, we developed three approaches by adapting methods used in prior studies reviewed by the National Academy of Sciences’ Institute of Medicine. First, we developed a conservative measure of excess capacity based on the number of unused beds, assuming an 85-percent occupancy rate was appropriate. Next, we developed estimates of additional excess capacity under differing assumptions about the amount of medically unnecessary care being provided. Third, we developed estimates of longer term goals for reducing hospital beds based on selected targets of beds per 1,000 population (beds per 1,000 users for VA).
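The first approach lends itself to a short calculation: if 85 percent is taken as the appropriate occupancy rate, the beds needed to serve a given average daily census are the census divided by 0.85, and any beds above that count as excess. The sketch below illustrates the arithmetic with invented figures; it is not the report's estimate.

```python
def excess_beds(operating_beds: float, average_daily_census: float,
                target_occupancy: float = 0.85) -> float:
    """Conservative excess-capacity measure: beds beyond those needed
    to serve the average daily census at the target occupancy rate."""
    beds_needed = average_daily_census / target_occupancy
    return max(0.0, operating_beds - beds_needed)

# Illustrative only: a hospital with 500 operating beds and an
# average daily census of 300 patients.
print(excess_beds(500, 300))  # 500 - 300/0.85, about 147 excess beds
```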
Additional details on how we selected our approaches appear in chapter 6. To identify actions of community hospitals to increase efficiency and compete for patients, we used a three-tiered approach. First, we identified, on the basis of our initial review of health care literature and discussions with health policy analysts, several specific actions taken by community hospitals. Second, we refined and expanded this list through discussions with AHA, AMA, VA, and others. Third, we conducted an extensive literature search using Healthstar, Econlit, and other search engines to identify pertinent literature on the list of specific actions. We focused on studies that described the actions being taken, showed how extensively community hospitals were implementing the actions, described the intended benefits of the actions, and evaluated their effectiveness. We used a similar multitiered approach to determine VA actions. First, we provided the Veterans Health Administration a list of the community hospital actions and asked for information on the extent to which VA had taken or planned to take similar actions. After receiving written responses from VA central office officials, we followed up to obtain additional details. Second, we reviewed VA planning documents and reports, including the Under Secretary for Health’s 1995 Vision for Change, 1996 Prescription for Change, and 1997 Journey of Change, which contain the primary action plans for restructuring the VA health care system. In addition, we reviewed the 1996 and 1997 network directors’ performance measures; status reports on directors’ meeting their performance goals; VA budget submissions for fiscal years 1996, 1997, and 1998; and VA’s draft strategic plan prepared under the Government Performance and Results Act. Third, we reviewed each of the 22 Veterans Integrated Service Networks’ strategic plans, looking specifically for references to the types of actions being taken by community hospitals. Finally, we obtained additional information on VA actions through interviews with VA officials from VHA, the National Acquisition Center, and the Office of General Counsel. Our work was conducted between January 1996 and January 1998 in accordance with generally accepted government auditing standards. The role of America’s hospitals has profoundly changed during this century. During the first three-quarters of the century, advances in medical technology and the development of private and public health insurance led to unprecedented growth in the role of hospitals in the U.S. health care system. Other factors, most notably two world wars and the creation and subsequent expansion of VA’s safety net mission during the Great Depression, significantly increased demand for VA hospital care during the 1930s and 1940s. Both private-sector and VA hospitals were transformed from charitable institutions providing mainly custodial care into the preeminent providers of life-saving and -sustaining technologies. Because the demand for hospital care seemed insatiable, federal programs encouraged construction of additional private-sector and VA hospital beds. But, by the 1960s and 1970s, health care spending was rising rapidly, consuming a growing portion of the gross domestic product. Hospitals accounted for the largest and a growing portion of the increased spending. As concern about rising health care costs grew in the early 1980s, the role and fortunes of America’s hospitals again began to change. 
The supply of and demand for hospital beds, which had increased steadily through the first three-quarters of the century, began to decline. More and more hospitals began to close. In addition, the role of hospitals in overall health care spending stabilized and, in the VA system, declined as hospital admissions fell and lengths of stay shortened.

In the 19th century, hospitals mainly provided a place for people to die; little medical treatment was offered. In addition, hospitals were basically charitable institutions; neither patients nor the government provided extensive financial support. The late 19th century and first half of the 20th century saw the following changes both in the role of hospitals and in the financing of hospital care:

- Scientific developments increased the amount of medical and surgical care provided in hospitals.
- Private health insurance became an important source of payment for hospital care.
- World wars strained the ability of the private sector to treat returning casualties, leading to expanded veterans’ facilities.
- Declining use of VA hospitals by veterans with service-connected disabilities following World War I and increased use during the Great Depression led to the creation and expansion of VA’s safety net mission.

The increased demand for hospital care prompted by these developments led to a perceived shortage of hospital beds and to federal programs to promote hospital construction.

Late 19th-century scientific developments increasingly shifted the focus of medical care from physicians’ offices and patients’ homes to hospitals. For example, the use of antiseptics and other methods to fight disease-causing microorganisms reduced the spread of infection, making surgery safer. Furthermore, breakthroughs in disease diagnosis and therapeutic intervention expanded the science and art of medicine. As a result, physicians began to depend more on hospital-based equipment and services to provide medical care to their patients. In addition to the development of antisepsis, the discovery of antibiotics and the introduction of modern surgical techniques and equipment made surgery safer for the patient. Moreover, surgeons’ increasing knowledge and the availability of sophisticated medical and surgical equipment made possible surgical procedures not previously considered.

Private health insurance emerged with the creation of the first Blue Cross and Blue Shield plans in the 1930s. Traditional health insurance in which providers are paid for each covered service delivered (known as fee-for-service coverage) tends to increase demand for hospital care by insulating both the patient and the provider from medical care costs. Fee-for-service health insurance encourages patients to demand more and better health care because it reduces patients’ out-of-pocket costs; the result is increased use of insured services and reduced concern about the relative cost of providers. Moreover, as fee-for-service health insurance became more comprehensive, physicians had fewer incentives to question the cost-effectiveness of alternative treatments or the prices charged by hospitals. Also, physicians had financial incentives to provide more services to patients because this increased their earnings.

Increased health insurance coverage, while increasing demand for care in community hospitals, tends to decrease demand for care in VA hospitals.
As the number of veterans with health insurance increases, demand for VA care declines because insured veterans are more likely to seek care from community hospitals than from VA hospitals.

Before World War I, the government built a number of homes to provide domiciliary care to war veterans. These homes provided only incidental medical and hospital care. During World War I, veterans received a series of new benefits, including medical and hospital care for those suffering from wounds or diseases incurred in the service. Public Health Service (PHS) hospitals treated returning veterans, and, at the end of the war, several military hospitals were transferred to PHS to enable it to continue serving the growing veteran population. In 1921, PHS hospitals primarily serving veterans were transferred to the newly established Veterans' Bureau.

Casualties from World War I soon overwhelmed the capacity of veterans' hospitals to treat injured soldiers. The Congress responded by increasing the number of veterans' hospitals with an emphasis on treating veterans' disabling conditions. After veterans' immediate, postwar, service-connected medical problems were addressed, VA hospitals began to have excess beds instead of a shortage of beds. The Congress, in 1924, responded by giving wartime veterans with nonservice-connected conditions access to veterans' hospitals when space was available and the veterans signed an oath indicating that they could not pay for their care.

The Great Depression saw an unprecedented demand for VA hospital care. In 1937, President Roosevelt authorized construction of additional VA hospital beds to (1) meet the increased demand for neuropsychiatric care and treatment of tuberculosis and other respiratory illnesses and (2) provide more equitable geographic access to care. Rapidly rising demand for hospital care prompted by U.S. involvement in World War II led to further construction and expansion of VA hospitals. Demand for care was so great that in March 1946 VA had a waiting list of over 26,000 veterans seeking care for nonservice-connected conditions. As had occurred after World War I, however, the initial high demand for medical services for returning casualties soon subsided and VA once again had excess hospital capacity.

Although VA began to have excess hospital beds after World War II, the supply of community hospital beds was generally considered inadequate to meet increasing demand. To address this problem, the Congress, in 1946, passed the Hill-Burton Act (P.L. 79-725). The act provided federal funds to match those raised by local communities for building new hospitals and modernizing and replacing existing facilities.

Between 1950 and 1980, hospital care consumed a steadily increasing percentage of overall health care spending. (See fig. 2.1.) Initially, the increase was slight, from 24 to 28 percent of health care expenditures between 1950 and 1965. In the 15 years following the 1965 creation of the Medicare and Medicaid programs, however, the growth in hospital spending rapidly outpaced growth in other health care spending. By 1980, hospital care accounted for 44 percent of the nation's health care expenditures. Two primary factors contributing to rising hospital expenditures were (1) federal programs and tax policies that encouraged hospital construction and (2) growing demand for hospital care. Both the supply of community hospital beds and demand for hospital care increased dramatically between 1950 and 1980.
Community hospital beds increased from about 505,000 to about 988,000. (See fig. 2.2.) During this same time period, community hospital admissions per 1,000 population increased from about 111 to about 162. While the supply of and demand for hospital beds had been increasing in the private sector, demand for VA hospital beds has been steadily decreasing since 1963. VA operating beds declined by about 33,000 between 1963 and 1979; the average daily census declined by about 40,000. (See fig. 2.3.) Although the average daily census in VA hospitals declined during the period, demand for hospital care, as measured by admissions per 1,000 veterans, increased. (See fig. 2.4.) As previously discussed, the Congress enacted the Hill-Burton Act in 1946 to encourage the construction of community hospital beds. According to an AHA estimate, the Hill-Burton Act played a role in the construction of about 43 percent of the not-for-profit community hospital beds in operation in 1974. Another federal subsidy that contributed to the increased number of community hospital beds was the use of tax-exempt bonds to finance construction projects. Hospitals, particularly tax-exempt, nonprofit hospitals, obtained low-interest loans for capital projects through the issuance of tax-exempt bonds. Many factors contributed to the increased demand for hospital care: (1) population growth, (2) advances in medical technology often requiring elaborate equipment available only in a hospital, (3) a growing elderly population with increasing health care needs, (4) improved insurance coverage of hospital expenses with the advent of Medicare and other federal health benefits programs, and (5) expansion of VA hospital benefits. Although increased hospital admissions between 1950 and 1980 are partly explained by increases in both the general and veteran populations, the growth in hospital admissions generally outpaced population increases. The general population increased from 152 million in 1950 to 228 million in 1980, a 50-percent increase. During this same period, community hospital admissions more than doubled, from 16.7 million to 36.2 million. In other words, hospital admissions per 1,000 population increased from about 111 in 1950 to about 162 in 1980. The Korean Conflict increased the number of new veterans by about 6 million during the early and mid-1950s. By 1965, the total veteran population dropped to just under 22 million. As the nation geared up for and entered the Vietnam War, the veteran population once again began to grow. It increased steadily for the next 15 years, reaching 28.6 million by 1980. As demands for treatment of returning casualties increased, admissions to VA hospitals more than doubled from 1963 through 1980, from 585,000 to 1,183,000. As was the case with private-sector hospitals, admissions increased at a faster pace than did the number of veterans. Admissions to VA hospitals per 1,000 veterans grew steadily from 1967 through 1980, from 24 to 41. The second factor contributing to increased demand for hospital care between 1950 and 1980 was continuing advances in medical technology. The development of intensive care units (ICU) and other technologies, such as computed tomographic scanners, open-heart surgery, and life- sustaining procedures for critically ill patients, for example, renal dialysis, exemplify what hospitals can provide and what the public grew to expect. 
In addition to increasing demand, these advances contributed to higher hospital care costs in the following ways:

- An ICU is an area of the hospital set aside for the most seriously ill. ICUs have an array of electronic monitoring devices and life-support machinery, such as mechanical ventilators and defibrillators. In addition, ICUs have a high concentration of nursing and support personnel. Although the United States had fewer than 1,000 ICU beds in 1958, by 1976 nearly all community hospitals with 200 or more beds had an ICU, about 90 percent with 100 to 199 beds had such units, and almost 50 percent of hospitals with fewer than 100 beds had an ICU. By 1983, over 80,800 ICU beds were available.

- Renal dialysis filters waste material from the blood through an artificial kidney. The first long-term renal dialysis programs began in the early 1960s. Although about 1,000 patients received renal dialysis in 1967, another estimated 6,000 patients died because of the lack of resources to treat them. The Social Security Amendments of 1972 (42 U.S.C. 426-1) authorized Medicare to pay for dialysis and kidney transplants for patients with end-stage renal disease. By 1980, 50,000 patients were on dialysis and about 4,700 transplants were performed. In 1996, 200,000 patients received dialysis.

- Transplantation is a surgical procedure involving the implantation of healthy organs or tissues obtained from either living donors or cadavers. Kidney transplantation costs less than renal dialysis for treating kidney disease and is preferred for treating end-stage renal disease. Transplantation frees patients from the inconvenience of continuous dialysis treatments, imparts a sense of good health, and improves overall quality of life. The first successful kidney transplant was performed in 1954. Transplantation now includes such organs as the heart, liver, lungs, and pancreas. In 1994, U.S. surgeons performed over 18,000 organ transplants.

- Resuscitation techniques (including reversal of cardiac arrest), the development of respirators, and intravenous feeding enable medicine to do more for critically ill patients than ever before. The nation's health care delivery system can now delay the moment of death for almost any life-threatening condition. For patients suffering a loss of consciousness, doctors can use intensive and aggressive therapies to attempt to reverse unconsciousness and overcome other medical conditions.

The third factor contributing to increased demand for community hospital care was the creation and subsequent expansion of public health benefits programs to help selected groups pay for health care services. In 1965, the Congress enacted legislation establishing the two largest public health insurance programs—Medicare, which covers most people aged 65 or older and certain disabled persons under age 65, and Medicaid, which covers many low-income people. The following year, the Congress established the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) to enable military retirees and dependents to obtain health care in the private sector when services are not available or accessible in DOD facilities.

As the percentage of health expenses paid by third parties increased, the proportion paid directly by consumers dropped. In 1965, when the Medicare and Medicaid programs were established, consumers' out-of-pocket payments accounted for about 53 percent of total personal health care expenditures.
By 1970—just 5 years after these programs’ implementation—consumers’ out-of-pocket payments dropped to about 39 percent of expenditures. Out-of-pocket payments have continued falling, accounting for only about one-fifth of personal health care expenditures in 1994. (See fig. 2.5.) More significant is the growth of third-party payments for hospital care. In 1965, third parties accounted for about 83 percent of total expenditures for hospital care, growing to about 92 percent by 1975. In 1995, third parties accounted for an estimated 95 percent or more of total expenditures for hospital care. (See fig. 2.6.) While these programs tended to increase the demand for care in community hospitals, they decreased the demand for VA hospital care. For example, studies have shown that many VA hospital users increase their use of community hospitals and decrease their use of VA hospitals when they become Medicare eligible. This is because veterans’ financial incentive to use VA hospitals is largely eliminated when they become Medicare eligible and community hospitals are usually closer to their homes. A fourth factor contributing to increased demand for hospital care between 1950 and 1980 was the health care needs of an aging population. Older people use medical personnel and facilities more than younger people. For example, older people are hospitalized approximately twice as often as younger people, have lengths of stay 50 percent longer than younger people, and use twice as many prescription drugs. From 1950 through 1980, the proportion of the U.S. population 65 years of age or older increased from 8.0 to 11.3 percent, continuing the trend from the first half of the century; in 1900, only 4 percent of the population was 65 years of age or older. A 1977 study of the health care needs of the aging veteran population anticipated this increase. VA predicted that after 1985, veterans’ demand for VA hospital care would accelerate rapidly. VA estimated that it would need to operate about 91,000 beds in 1985, about 115,000 beds by 1995, and about 120,000 beds by the year 2000. VA based its estimates on utilization rates and eligibility provisions in effect in 1977 but factored in assumptions that (1) the need for psychiatric beds would continue decreasing and (2) hospital lengths of stay would continue declining despite the patients’ advancing ages. Eligibility expansions also affected demand for VA hospital care. In 1962, the Congress passed legislation that defined as a service-connected disability any condition traceable to a period of military service, regardless of the cause or circumstances of its occurrence. Previously, care for service-connected conditions was not ensured unless such conditions were incurred or aggravated during wartime service. VA expanded its safety net mission near the end of the Vietnam War. In 1973, VA expanded eligibility for hospital care to treatment of nonservice- connected disabilities of peacetime veterans unable to defray the cost of care. Treatment of nonservice-connected disabilities had previously been limited to wartime veterans. By the mid-1970s, researchers began to question whether the nation had too many hospital beds and whether the excess beds were contributing to higher health care costs. For example, the National Academy of Sciences’ Institute of Medicine (IOM) recommended in 1976 that the bed-to- population ratio, which by 1975 had reached 4.4 community hospital beds per 1,000 population, be reduced by at least 10 percent. 
Specifically, IOM called for reducing the number of community hospital beds per 1,000 population from 4.4 to approximately 4.0, with further sizable reductions to follow after the initial goal had been met.

As scientific developments continue and employers and the government focus on ways to contain health care costs, the role of hospitals is once again changing. Just as scientific advances spawned increased demand for hospital care in the first seven decades of this century, technological advances are enabling much of the care previously provided in hospitals to be shifted to outpatient settings (see ch. 3). Similarly, changes in the insurance market—principally the development of prospective payment systems and managed care—have helped decrease hospital use (see ch. 4).

Demand for hospital care began to decline in community hospitals and continued to decline in VA hospitals during the 1980s. As shown in figure 2.7, demand for care in community hospitals declined more rapidly than the supply of hospital beds from 1980 through 1993. The number of community hospital beds increased slightly between 1980 and 1984 but has steadily declined since then. By 1995, community hospital beds had dropped to 873,000 after peaking at slightly over 1 million. More importantly, the average daily census in community hospitals dropped from 747,000 in 1980 to 548,000 in 1995.

Demand for VA hospital care continued the decline that began in the early 1960s. From fiscal year 1981 through fiscal year 1995, the average daily census in VA hospitals dropped from about 66,000 to about 37,000. During the same period, the number of VA operating beds dropped from about 82,000 to about 51,000. (See fig. 2.8.) From 1980 through 1986, VA hospital admissions continued to increase despite a gradual decline in the number of veterans. Since 1987, however, VA hospital admissions have declined more quickly than the veteran population. Hospital admissions dropped about 18.6 percent from 1988 through 1995, from about 1,038,000 to about 845,000. During approximately the same period, the veteran population declined about 5 percent, from 27.5 million to 26.2 million. As a result, the number of VA hospital admissions per 1,000 veterans dropped from 38 in 1988 to 32 in 1995.

Admissions to community hospitals are also declining. Despite continuing population growth, community hospital admissions, after increasing steadily from 1950 through 1980, dropped by 15 percent from 1981 through 1995. Adjusting for population growth, admissions per 1,000 population dropped from 158 to 118.

From 1975 through 1995, more community hospitals closed than new hospitals opened, while VA opened more hospitals than it closed. Although the U.S. population increased by about 47 million between 1975 and 1995, the number of community hospitals decreased by about 12 percent (from 5,875 to 5,194). During the same period, the number of VA hospitals increased from 171 to 173. These community hospital statistics understate the actual extent of hospital closures because new hospitals continue to open as other hospitals close. For example, in 1993, 62 hospitals (including 34 community hospitals) closed but 40 new hospitals opened. Of the 40, 5 were psychiatric or substance abuse hospitals, 15 were rehabilitation hospitals, 3 were specialty hospitals, and 17 were general medical and surgical facilities.
Similarly, although the number of VA hospitals saw a net increase over the 20-year period, two VA hospitals—in Martinez and Sepulveda, California—were closed because of actual or potential earthquake damage.

Changes in medical technology and practice have contributed to the decreasing demand for both VA and community hospital care since 1980. Advances in medical technology, such as laser and other less invasive surgical techniques, allow much care previously provided in hospitals to be provided at home, on an outpatient basis, or in a nursing home. Such advances also shorten the length of stay for many procedures still performed in the hospital. Similarly, changes in medical practice and the development of psychotherapeutic drugs to treat mental illness have led to fewer and shorter hospital admissions for psychiatric patients and to the deinstitutionalization of many long-term psychiatric patients.

While changes in technology and medical practice contributed to declining demand for both community and VA hospitals, for many years VA lagged behind the private sector in putting such changes to effective use. VA, however, is now aggressively shifting patients from inpatient to outpatient and other less costly settings. As a result, many issues remain unresolved concerning the future effects of changes in medical technology and practice on demand for VA hospital care. For example, VA's success in reducing inpatient surgeries is diminishing the economic viability of, and threatening the quality of care provided by, many VA hospitals' inpatient programs. Limited data are available on efforts to ensure that vulnerable populations, such as the homeless, do not lose access to VA services as care shifts to outpatient settings.

Advances in medical technology continue to be a major force driving change in the health care system. But, unlike the first three-quarters of the century when medical advances fostered increased demand for hospital care, recent advances have reduced this demand. Technology advancements now permit (1) many surgeries to be performed in a doctor's office or hospital outpatient department, (2) shorter lengths of stay following inpatient surgeries, and (3) treatments for many chronically and catastrophically ill patients to be provided at home rather than in a hospital. Although VA, through its affiliations with medical schools and research programs, played an important role in developing and testing many of these technologies, it lagged behind the private sector for many years in using new technology to shift care from inpatient to outpatient settings. As a result, the full effect of technology on demand for VA hospital care has yet to be felt. During the past few years, VA has aggressively shifted care to outpatient settings.

Technological changes and medical innovations are shifting many surgeries and medical treatments from inpatient to less intensive, outpatient settings. The following treatments for ulcers, kidney stones, and cataracts are examples:

- H2 antagonists are drugs with brand names such as Tagamet and Pepcid-AC used to reduce the production of gastric acids. In 1977, before the introduction of H2 antagonists, about 155,000 people had surgery for ulcers. By 1993, surgeries for ulcers had dropped to about 16,000. The recent discovery that most ulcers are caused by bacteria and can be treated with antibiotics will probably result in fewer such surgeries.
- Lithotripsy (in Greek, "stone crusher") is a process that uses shock waves to fracture kidney stones into pieces small enough to pass through a patient's urinary tract. Although patients may be able to pass smaller stones on their own, many stones are too large to pass through the ureter, a gradually narrowing tube in the urinary tract. In the past, when patients could not pass a kidney stone, the primary treatment was surgery to remove the stones. Now, however, a specialized piece of equipment—an extracorporeal shock-wave lithotripter—produces shock waves to fracture the kidney stone, allowing the patient to pass the stone without surgery. Lithotripsy requires no lengthy hospital stay, no incision or surgery, and no lengthy recovery period. Up to 95 percent of the approximately 400,000 Americans treated for kidney stones each year can now be treated through lithotripsy rather than surgery. Lithotripsy can generally be performed as an outpatient procedure.

- Phacoemulsification is a method of treating cataracts in which an ultrasonic device disintegrates the cataract, which is then suctioned out. This procedure, which involves only a tiny incision, can be done on an outpatient basis with the patient typically returning home within hours after the cataract is removed and a plastic lens implanted in the eye. In the past, cataract removal generally required an inpatient hospital stay of several days. Cataract surgery is the most frequently performed therapeutic surgical procedure on people 65 years of age and older in the United States. Medicare pays over $3.4 billion a year for cataract surgery, paying for about 1 million of the 1.3 million cataract procedures performed annually.

The percentage of surgeries performed on an inpatient basis has declined steadily in the private sector since 1989. In 1993, over 55 percent of surgical operations in community hospitals were performed on an outpatient basis.

Until recently, VA was much less successful in shifting care to outpatient settings than were community hospitals. For example, audits by VA's Office of Inspector General (OIG) in 1991 and 1992 identified the unavailability of outpatient surgery or other capabilities as the primary cause of unnecessary admissions and days of care in VA surgical wards. Specifically, the OIG estimated the following:

- The New Orleans VA medical center could have avoided about 32 percent (931 of the 2,921 days) of surgical care had the center established an outpatient surgery program.

- About 32 percent of the Denver VA medical center's 1- to 4-day surgical admissions were for medical care that could have been provided on an outpatient basis without jeopardizing patients' welfare.

- About 45 percent of the 2-day surgical admissions at the Togus, Maine, VA medical center could have been avoided by treating the patients on an outpatient basis. The medical center agreed with the finding and attributed the inappropriate admissions to the perception that VA's resource allocation method did not cover the cost of outpatient surgery.

- The Dallas VA medical center incurred about $766,000 in unnecessary expenses because physicians admitted patients who did not require hospital care and hospitalized patients longer than medically necessary. The lack of facilities dedicated to outpatient surgery was the sole reason cited for the inappropriate admissions.
- About 72 percent of inpatient cataract surgeries and 29 percent of other short-term surgical admissions reviewed at the West Los Angeles VA medical center could have been done on an outpatient basis.

VHA's recently established performance measures for Veterans Integrated Service Network (VISN) directors set expectations for what portion of surgeries should be done on an outpatient basis. For example, under one fiscal year 1996 performance measure, VISN directors were judged to be fully successful if from 50 to 64 percent of surgeries and invasive procedures were done in an outpatient setting; 65 percent or more was considered exceptional performance. All VA medical centers now have outpatient surgery programs. All but eight VISN directors exceeded the 50-percent minimum for fully successful performance in fiscal year 1996; one VISN director—in VISN 11 (Ann Arbor)—was exceptional. Seven of the eight VISN directors not meeting the minimum made statistically significant improvements in the percentage of outpatient procedures performed. Systemwide improvement has been impressive, from 35 percent in fiscal year 1994 to 52 percent in fiscal year 1996. VHA's goal is for at least 65 percent of surgeries and other invasive procedures to be performed on an outpatient basis in fiscal year 1998; 75 percent or more is considered exceptional performance.

Advances in medical technology have also reduced the length of stay following inpatient procedures. For example, the development of the endoscope allows many procedures to be done through a natural body opening, such as the mouth, or through a small incision. An endoscope is an instrument with an optical system for observing the inside of a hollow organ or cavity. Another comparable instrument, the laparoscope, permits the removal of the gall bladder through surgery involving only minimal incisions. As a result, the length of stay following gall bladder surgery has often been reduced from a 3- to 7-day recuperative period to a 1- to 2-day period. In some cases, gall bladder surgery is now done as an outpatient procedure.

Similarly, the use of balloon angioplasty to open narrowed coronary arteries reduces the need for more invasive bypass surgery. To perform angioplasty, surgeons insert a catheter with a deflated balloon on its tip into an artery narrowed by plaque. Plaque is the fatty material that accumulates inside the walls of the arteries and blocks blood flow. The balloon is inflated to widen the clogged artery. Angioplasty is clearly less invasive than bypass surgery.

Advances in medical technology also make it possible for many chronically and catastrophically ill patients to receive medical treatment at home. For example, people with chronic respiratory problems who require a ventilator and nursing assistance can often return home if they are provided with a ventilator, visits by a nurse, and associated supplies. Similarly, sophisticated medical care previously available only in a hospital or nursing home can now be provided at home because of the development of medical technology such as ventilator therapy and infusion pumps.

The development of new drug therapies and mental illness treatment and care practices has helped reduce acute psychiatric admissions to both community and VA hospitals. Efforts to deinstitutionalize the chronically mentally ill have also helped reduce hospital admissions.
Because the chronically mentally ill were typically in state and county hospitals for the mentally ill rather than in community-based facilities, VA hospitals treating veterans for mental illness were more affected by efforts to deinstitutionalize the chronically mentally ill than were community facilities.

Psychotherapeutic drugs are those that lessen the primary symptoms afflicting mentally disturbed people such as anxiety, depression, and psychosis. Among the psychotherapeutic drugs are antianxiety agents such as Librium, Valium, Xanax, and Ativan, all of which are forms of benzodiazepine; antidepressants such as Nardil (phenelzine sulfate), Adapin (doxepin HCL), and Etrafon (perphenazine and amitriptyline hydrochloride); antipsychotic products such as Clozaril (clozapine), Haldol (haloperidol), and Thorazine (chlorpromazine); and psychostimulants such as Ritalin (methylphenidate hydrochloride) and Cylert (pemoline). Such drugs often allow people with mental illnesses that in the past would have required lengthy periods of institutionalization to obtain outpatient treatment.

In the past, many mentally disabled people were institutionalized, typically in state and county mental hospitals. Because of concern over the deplorable conditions in many of these facilities, new treatment methods and philosophies, and the potential for cost savings, however, efforts were made to place institutionalized mentally disabled patients in the community. The Mental Retardation Facilities and Community Mental Health Centers Construction Act of 1963, which was repealed by the Omnibus Budget Reconciliation Act of 1981, became the basis for a major part of the federal government's involvement in "deinstitutionalizing" the mentally disabled. The Congress later amended the Social Security Act to enable more mentally disabled people to return to the community. Deinstitutionalization was intended to allow mentally disabled people to be as independent and self-supporting as possible by (1) preventing unnecessary admission to and retention in institutions; (2) finding and developing appropriate care alternatives in the community, such as day care and foster homes; and (3) improving conditions, care, and treatment for those needing some institutional care.

In a 1977 report, we noted that deinstitutionalization had returned many mentally disabled people to communities. For example, the resident population in public mental hospitals steadily declined nationwide from 505,000 in 1963 to 120,000 in 1983. In 1967, about 193,000 people were in public institutions for the mentally retarded. By 1982, the number had declined to about 118,000.

Although the use of VA psychiatric beds declined significantly, the use of state mental hospitals declined even more. In its 1977 report, The Aging Veteran: Present and Future Needs, VA noted that the number of VA psychiatric beds dropped from 54,345 in 1967 to 28,173 in 1977, despite an increase in annual admissions from 71,076 to 161,969. During the same time period, outpatient psychiatric visits to VA mental hygiene clinics, day treatment centers, and day hospital programs increased from about 750,000 to over 1.6 million.
VA identified the following important developments that modified its approach to psychiatric care:

- improvements in psychiatric therapy,
- development of a wide variety of psychotropic drugs that made it possible for many psychiatric patients to function independently,
- recognition that geographically isolated institutions may not provide the best environment for rehabilitation,
- recognition that psychiatric care is more effectively delivered as a service of a general medical and surgical teaching hospital,
- a change in philosophy that has encouraged returning many psychiatric patients to the community, and
- expansion of outpatient resources and treatment modalities.

VA found that, unlike the need for acute medical and surgical hospital care, which increases as people age, the frequency of major psychiatric hospitalization decreases as people age. In its report, Aging Veteran, VA said that the decline in psychiatric hospitalization would probably continue as the veteran population aged. Specifically, VA noted that the hospitalization rates for schizophrenia, psychoneuroses, personality and behavior disorders, and alcoholism decrease as people age. It concluded in 1977 that "it seems reasonable to assume that the aging veteran population will not create new pressures for psychiatric beds." Demand for psychiatric hospital care did, as VA predicted, continue to decline, although admissions over the last 20 years continued to increase slowly. In fiscal year 1996, VA operated 15,690 psychiatric beds, a decline of over 70 percent during the past 30 years.

VA was slow to take advantage of new technologies and medical practices and shift patients from hospital beds to outpatient clinics and other care settings. As a result, estimates of nonacute days of medical and surgical care in individual VA hospitals ran as high as 72 percent only 6 years ago. VA has begun addressing these problems during the past several years, and early results are encouraging. For example, VA increased the percentage of surgeries and other invasive procedures performed on an outpatient basis from 35 percent in 1994 to 52 percent in 1996.

VA's success in reducing inpatient surgeries, however, could further diminish the economic viability of the inpatient surgery programs at many VA hospitals and threaten their ability to provide quality care. In fiscal year 1996, 56 of the 129 VA hospitals with inpatient surgery programs had an average of fewer than 25 surgery beds occupied on any given day (average daily census (ADC)); 28 had an ADC of less than 10, including 6 with an average workload of only one or two patients. In addition to the high cost of maintaining inpatient surgery programs for so few patients, such programs raise concerns about quality of care because surgeons may not perform enough operations to remain proficient.

The VA OIG initially raised questions about continuing to operate surgical programs with limited workload in a 1991 review of 33 VA surgical programs. The OIG recommended that VA consider closing inpatient surgical services at the 33 locations and (1) realign services with other medical centers or (2) provide the services through community hospitals. The OIG estimated that such a realignment would provide opportunities to better use staff resources and avoid the need for some replacement equipment and construction, saving over $100 million.
In addition, the OIG’s audit expressed concerns about the quality of care provided at smaller hospitals with minimal workloads that are unaffiliated or minimally affiliated with a medical school. Five years after the OIG report was issued, however, 4 of the 33 medical centers reviewed by the OIG discontinued their surgical programs. Workloads at the remaining medical centers and others have continued to decline. With such a limited inpatient surgical workload, VA could discontinue the inpatient programs and either refer veterans to other VA facilities or use its new contracting authority to purchase care from community hospitals closer to the veterans’ homes. Referring veterans to other VA hospitals could help build workload at those facilities but would probably make health care less accessible for veterans (except in those places where two or more VA medical centers were in close proximity such as in Chicago, Boston, and Pittsburgh). In addition, the cost of transporting veterans to a distant VA medical center would add to the cost of providing the care through another VA facility. Transferring veterans to distant medical centers could also deprive them of the emotional support of family and friends unable to make the trip. Such travel could be particularly difficult for elderly spouses. Uncertainties also exist about the extent to which VA should shift additional mental health services to outpatient settings. For example, many VISNs plan to discontinue their inpatient substance abuse programs and provide outpatient services instead. Other VISN planning documents do not specifically address this. In 1972, more than 95 percent of veterans discharged from the substance abuse program were classified as poor; in 1995, about 50 percent of veterans in inpatient substance abuse programs were homeless at the time of admission, and 35 percent had both substance abuse and one or more psychiatric disorders. VA recognized this problem and is developing clinical guidelines and an addiction severity index to evaluate substance abuse patients. In a July 1997 report, the VA OIG reported that substance abuse treatment program officials in the 12 medical centers reviewed had established in-house residential care beds and identified community housing and social support resources for homeless patients before they converted their substance abuse treatment programs to outpatient programs. The OIG also found, however, that the wide variation in reporting of the number of patients treated in substance abuse treatment programs in the VA databases prevents VHA officials from really knowing the impact of these conversions to outpatient treatment on access to care for homeless and other economically disadvantaged veterans. The OIG also identified needed improvements in (1) methods for identifying homeless veterans seeking treatment in both VA and community-based substance abuse treatment programs; (2) efforts to ensure that halfway house beds are available for veterans needing such aftercare; and (3) medical record documentation to show that VA employees discussed the ability of veterans, particularly homeless or economically disadvantaged veterans, to arrange transportation to outpatient substance abuse treatment. The OIG found transportation to be a major barrier to outpatient substance abuse treatment, particularly in small urban areas. A third of the patients from small urban areas interviewed by the OIG indicated that inadequate transportation systems limited patients’ access to outpatient care. 
The OIG reviewed the medical records of 71 homeless patients discharged from inpatient substance abuse treatment programs and found that 50 records had no information to show that program officials had discussed transportation issues with the veterans. In response to the OIG report, VHA identified actions to strengthen the substance abuse program, including establishing a committee to discuss possible solutions to the transportation problem. Because these actions are still in the planning stage, it is not clear how well they will lessen the impact of VA's shift of substance abuse treatment to outpatient settings on homeless veterans' access to care.

Although the OIG has evaluated VA's efforts to shift substance abuse treatment from inpatient to outpatient settings and corrective actions are planned or under way, less is known about the effects of other efforts to shift care to outpatient settings. For example, a large percentage of homeless veterans suffer from serious mental illness, including post-traumatic stress disorder (PTSD). As a result, such veterans may face the same transportation barriers as veterans with substance abuse problems in accessing outpatient mental health care, for example, PTSD treatment. Little is known about the extent to which veterans discharged from VA psychiatric hospitals receive needed outpatient mental health services as well as the full range of other VA benefits to which they may be entitled to enable them to function independently.

Fundamental changes in the structure of public and private health insurance have significantly reduced community hospital use but affected VA hospitals less. The establishment of prospective payment, capitation, and other payment methods under public and private health insurance has provided community hospitals strong financial incentives to reduce hospital admissions, lengths of stay, or both. Similarly, insurers' increased focus on medical necessity through such programs as preadmission certification has reduced both admissions to and lengths of stay in community hospitals. Finally, increased third-party coverage of home health and hospice care has made it possible to (1) discharge patients from hospitals sooner and (2) reduce the use of hospital care by the terminally ill. These changes, however, have had limited effect on demand for care in VA hospitals because these hospitals do not financially depend on insurance payments.

VA is implementing changes in allocating funds to its hospitals and managing patient care that seek to emulate changes in public and private insurance. Because these changes are recent and because of differences between VA and private-sector actions, such changes' effect on future demand for VA hospital care is uncertain. For example, it is not clear to what extent VA's new preadmission screening program will change physicians' admitting practices without the financial incentives used in the private sector. Similarly, it is unclear how VISNs and individual VA facilities will react to the financial incentives created by VA's new capitation-based resource allocation system without the contractual obligations to provide covered services that private-sector managed care plans have.

Prospective payment and other payment reforms initiated by Medicare and other third-party payers have significantly reduced demand for hospital care in community hospitals.
These payment reforms were designed to provide community hospitals financial incentives to reduce hospital admissions, lengths of stay, or both. Third-party payment reforms, however, have not played a major role in reduced demand for VA hospital care; VA hospitals, unlike community hospitals, do not depend on third-party payments. VA is changing its funding of health care facilities to create financial incentives like those in the private sector.

The methods—fee-for-service and cost-based reimbursement—originally used by both public and private health insurers to pay for hospital and other health care services created incentives for physicians and hospitals to provide unnecessary services. Under fee-for-service reimbursement, physicians receive an amount for every service provided. As a result, physician income depends largely on the volume of services provided. Fee-for-service payments thus create financial incentives to provide unnecessary services. Similarly, under cost-based reimbursement, hospitals were typically reimbursed retrospectively on the basis of costs incurred. Hospitals were paid their actual costs as long as they were reasonable, related to patient care, and not in excess of maximum allowable amounts established by the program. This method encouraged hospitals to spend more and keep patients in the hospital longer because the more they spent for services, the larger their reimbursement would be. Although the 1970s saw several attempts, particularly under federal programs, to set limits on reimbursement rates, these efforts did not succeed in controlling cost growth.

For hospitals, the most significant change in payment methods came with the 1983 enactment of the Medicare prospective payment system (PPS) for acute care hospitals treating Medicare beneficiaries. Unlike the cost-based system preceding it, PPS has incentives for hospitals to shorten lengths of stay and provide care more efficiently. Hospitals are paid a predetermined amount for each Medicare discharge. Acute care patients are placed in 1 of over 400 diagnosis-related groups, or DRGs, on the basis of their principal diagnoses, the presence of complicating conditions, whether certain procedures were performed, and their age. In determining the payment amount, HHS essentially calculates the average cost of treating Medicare patients in each DRG using historical hospital cost data and then adjusts the PPS rates for factors such as differences in area wages, teaching activity, and care to the poor. Hospitals whose average costs are lower than the PPS rates may keep all of the difference; hospitals whose costs are above these rates must absorb the loss. To reduce the risk to hospitals of costly cases, Medicare pays hospitals additional amounts for high-cost "outliers."

PPS drastically changed hospitals' financial incentives. Under the cost-reimbursement system, hospitals had an incentive to keep patients longer and provide more ancillary services because each day of care and service provided was reimbursed separately. Under PPS, hospitals have a financial incentive to limit lengths of stay and the number of ancillary services provided because payment is fixed without regard to these factors. Both the average length of hospital stay and the number of admissions to community hospitals declined after PPS was introduced. Although PPS was initially limited to payment for services provided to Medicare beneficiaries, many other health care programs adopted similar payment methods.
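To make the incentive concrete, the following is a minimal sketch of the per-discharge payment logic described above; the DRG base rates, wage index, and outlier threshold are hypothetical illustrations, not actual Medicare figures.

```python
# Illustrative sketch of per-discharge prospective payment under a
# DRG-based system; all rates and thresholds below are hypothetical.

DRG_BASE_RATES = {"104": 8500.00, "209": 11200.00}  # hypothetical DRG averages

def pps_payment(drg: str, wage_index: float, cost: float,
                outlier_threshold: float = 30000.00) -> float:
    """Fixed payment per discharge: the DRG average adjusted for local
    wages, plus an extra amount only for unusually costly outlier cases."""
    payment = DRG_BASE_RATES[drg] * wage_index
    if cost > outlier_threshold:  # high-cost "outlier" case
        payment += 0.8 * (cost - outlier_threshold)
    return payment

# The hospital keeps the difference when its costs run below the fixed
# rate and absorbs the loss when they run above it.
gain = pps_payment("209", wage_index=1.05, cost=9000.00) - 9000.00
print(f"Gain on this discharge: ${gain:,.2f}")
```

Because the payment does not vary with the length of stay or the services delivered, every additional day of care reduces the hospital's margin, which is the incentive the reforms were designed to create.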
For example, CHAMPUS implemented a DRG-based PPS on October 1, 1987, to reduce government costs and provide an incentive for hospitals to reduce operating costs. Similarly, in 1991, 20 states reported using a DRG-based PPS under their Medicaid programs.

Unlike community hospitals, whose revenues come mainly from third-party payments, VA hospitals do not depend on such payments. VA lacks the authority to bill Medicare for services provided to Medicare-eligible veterans. Although VA bills private health insurers for services it provides to their policyholders, recoveries occurring before June 30, 1997, were returned to the Treasury, except for the amount spent on the recovery effort. VA receives an annual appropriation from the Congress to cover the costs of services it expects to provide to veterans, including those with private health insurance or Medicare coverage.

Until 1984, the distribution of appropriated funds to individual VA medical centers had been based mainly on their historic expenditures; that is, each medical center generally received its prior year's allocation adjusted for inflation and certain other factors such as operating new facilities and programs. VA experimented with a case mix PPS to allocate resources to its hospitals in the mid-1980s but abandoned the system in 1990 when concerns arose about "gaming" and the equity of resource allocations.

In 1984, VA introduced a national average cost-based prospective budgeting approach, the Resource Allocation Method (RAM), for distributing globally budgeted funds to its medical facilities. Like HCFA's PPS, RAM was based on DRGs. Initially, VA planned to use RAM to measure and redistribute acute inpatient care resources, including all general medical, surgical, rehabilitation, neurological, and psychiatric services. In 1985, RAM was expanded to include outpatient and extended care services. Funds for outpatient care were allocated using an age-adjusted capitation method with six price groups determined by the type and extent of utilization during a year. Extended care, including intermediate hospital care and nursing home care, was to be funded through a Resource Utilization Group (RUG) system. Similar to hospital DRGs, the RUG system classifies long-term care patients according to the amount of direct nursing that they require.

Unlike Medicare's PPS, which strongly affected community hospitals, RAM had little effect on VA hospitals' budgets. RAM showed that VA hospitals incurred differing costs for treating similar patients and provided for shifting significant amounts of resources among facilities to encourage more efficient operations. VA never fully implemented RAM, however, shifting few resources (less than 2 percent of the total dollars budgeted) among facilities. RAM was abandoned in 1990 because of concerns that medical centers were gaming the system to maximize resource allocations. Gaming involves medical centers performing work beyond their resources to justify additional resources in the future. Although VA cited gaming as the main reason for abandoning RAM, stakeholders' lack of confidence in the equity of the resource allocations also contributed to the system never being fully implemented.

After RAM was abandoned, VA moved toward a new patient-based allocation system known as the Resource Planning and Management (RPM) system. Even after introducing RPM in 1994, however, VA continued to allocate resources mainly on the basis of historical cost.
RPM, like RAM, was never fully implemented, and few resources were actually shifted among VA facilities. In April 1997, VA began to implement a new resource allocation system, the Veterans Equitable Resource Allocation (VERA) system, based on the capitation funding principles applied by many risk-based managed care plans.

Capitation was the second major change in how public and private health insurers pay for health care that contributed to declining demand for hospital care. Under capitation, a health maintenance organization (HMO) or other risk-based managed care plan agrees to provide comprehensive health services to enrollees in return for a prepaid, fixed payment for each enrollee regardless of the quantity or types of services provided to any particular enrollee. The loss an HMO suffers from treating enrollees whose health care services cost more than the HMO receives in capitation payments is offset by the profit the HMO makes from enrollees who use services worth less than the capitation amount.

Capitation reverses the financial incentives existing under the traditional fee-for-service reimbursement system. It gives HMOs and other managed care plans incentives to limit the utilization of health care services because their profits increase if they provide fewer services. Because revenue is collectively obtained from the entire enrolled population of the managed care plan, the effect of an individual enrollee's health care use on the HMO's profitability is limited. In other words, capitation tempers the financial incentive of an HMO to deny needed services to an individual patient.

Many HMOs and other managed care plans use capitation or other financial incentives to shift some of the risk to individual providers or groups of providers. Depending on their design, such capitation payments may encourage primary care physicians to limit referrals to specialists and admissions to hospitals, and may encourage hospitals to limit lengths of stay and admissions. The financial incentives vary by type of HMO. For example, staff model HMOs provide services through salaried primary care physicians; such physicians do not directly benefit financially by limiting the services they provide. Other types of HMOs and managed care plans, however, provide physicians financial incentives through capitation to control (1) use of primary care services, (2) referrals to specialists, and (3) hospital admissions.

Capitation payment mechanisms require primary care physicians or groups of physicians to accept a monthly designated amount as payment in full for each assigned enrollee, no matter how often during the month the physician or group of physicians provides services or how much the services cost. This shifts a substantial portion of financial risk for medical services from the HMO to the primary care physician; an individual primary care physician or group of physicians can gain or lose profits depending on the amount of patient services delivered. The amount of financial risk transferred from the HMO or managed care plan to the physician or physician group is lowest when the capitation covers only primary care services; the risk increases as the physician or physician group becomes responsible for a wider range of services such as care by specialists and hospital care.
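The pooled-risk arithmetic that underlies capitation can be sketched in a few lines; the monthly rate and the per-enrollee costs below are hypothetical.

```python
# Minimal sketch of pooled risk under capitation; the monthly rate and
# the per-enrollee costs of care actually delivered are hypothetical.

monthly_rate = 120.00                                # fixed payment per enrollee
monthly_costs = [0.00, 45.00, 60.00, 300.00, 15.00]  # care actually delivered

# Losses on high-cost enrollees are offset by gains on low-cost ones,
# so the plan's result depends on the pool, not on any single patient.
result = sum(monthly_rate - cost for cost in monthly_costs)
print(f"Net result across the pool: ${result:,.2f}")
```

The same arithmetic applies when risk is delegated to a physician group: the narrower the pool and the wider the range of services covered, the more a single costly patient matters.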
Although much debate continues on the cost-effectiveness of HMOs and their effect on access to and continuity and quality of care, studies have found that HMO enrollees have lower hospital utilization than enrollees in fee-for-service plans, particularly shorter hospital lengths of stay. Therefore, the rapid growth of HMOs and other managed care plans has significantly contributed to decreasing demand for hospital care. Enrollment in HMOs increased from 9 million in 1980 to an estimated 56 million in 1995 (see fig. 4.1). HMO enrollment skyrocketed from 3,356 per 100,000 population in 1978 to 17,526 per 100,000 population in 1993, according to a report prepared for the National Committee for Quality Health Care. In addition, many states are enrolling Medicaid recipients in HMOs or other managed care plans.

Capitation did not, however, contribute significantly to the declining demand for VA health care. Throughout the 15-year period during which VA hospital workload steadily declined, VA hospitals were funded mainly on the basis of their historical workload, creating incentives to increase—not decrease—inpatient workload.

VA began implementing a capitation-based resource allocation system, VERA, in April 1997. Under VERA, facilities' resource allocations are developed on the basis of the number of users rather than on the number of services provided. Users are divided into two groups—those with routine health care needs (called Basic Care) and those with special, typically chronic, and complex health care needs (called Special Care). For fiscal year 1997, VA allocated $2,596 for each Basic Care user and $35,707 for each Special Care user. VA adjusted allocations to reflect differences in labor costs in geographic areas. Because VISNs receive a fixed allocation for each Basic and Special Care user regardless of the types or volume of services provided, the allocation system no longer provides a financial incentive to unnecessarily hospitalize patients to increase resource allocations. VERA should ensure that VISNs have a financial incentive for their facilities to treat patients in the most cost-effective setting. Although VERA holds promise for creating financial incentives for VISNs to reduce unnecessary hospital use, we have testified that VA has not adequately studied the reasons for the cost variations among VISNs. (A sketch of the VERA allocation arithmetic follows this discussion.)

Flat-rate reimbursement was the third major change in payment methods that affected demand for hospital care. States often use flat-rate payments under their Medicaid programs and managed care plans in negotiating provider agreements. States have considerable flexibility in determining how they pay for hospital care under their Medicaid programs. Generally, states' methods for reimbursing hospitals may not yield rates that exceed amounts paid under the Medicare program. Before Medicare's PPS implementation, most states, like Medicare, reimbursed hospitals on a retrospective cost basis. Due to increased flexibility given states through the Omnibus Budget Reconciliation Act of 1981, all but four states shifted from retrospective cost-based reimbursement to some form of PPS by 1991. Fourteen states developed systems that pay a flat rate per day or per case regardless of diagnosis. The rates are generally established for each individual facility but may be subject to overall limits for classes or "peer groups" of facilities depending on number of beds, affiliation with medical schools, and location.
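Before turning to the mechanics of flat-rate payment, here is the sketch of the VERA allocation arithmetic referenced above. The fiscal year 1997 per-user rates come from the figures cited earlier; the user counts and the labor cost index for the example network are hypothetical.

```python
# Sketch of a VERA-style capitation allocation; the fiscal year 1997
# per-user rates appear in the text, but the user counts and the labor
# cost index for the example network are hypothetical.

BASIC_RATE = 2596     # dollars per Basic Care user, fiscal year 1997
SPECIAL_RATE = 35707  # dollars per Special Care user, fiscal year 1997

def visn_allocation(basic_users: int, special_users: int,
                    labor_index: float = 1.0) -> float:
    """Fixed dollars per user, regardless of the services actually
    delivered, adjusted for geographic differences in labor costs."""
    return (basic_users * BASIC_RATE + special_users * SPECIAL_RATE) * labor_index

print(f"${visn_allocation(180_000, 9_500, labor_index=1.04):,.0f}")
```

Because the allocation depends only on user counts, a network gains nothing by hospitalizing a patient whose care could be delivered in a cheaper setting.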
Under flat-rate PPSs, hospitals receive a fixed payment for each day of hospital care provided or each patient treated regardless of the volume or cost of services provided. Hospitals have incentives to limit the amount and types of services provided. Like the other payment reforms, flat-rate payment methods have not contributed to the declining demand for care in VA hospitals. Private-sector hospitals have a financial incentive to limit the services they provide because their profits depend on the extent to which they can provide care for less than the amount they receive from Medicaid. VA’s system, however, does not base hospitals’ funding on their per diem costs. Under a traditional fee-for-service health plan, enrollees obtained access to all types of care through an independent physician who was reimbursed by the health plan for the specific treatment provided. The fee-for-service payment method encouraged physicians and hospitals to provide unnecessary services. However, two major changes in how insurers manage their enrollees’ access to covered health care services—primary care case management and preadmission certification—have been used to control admissions to and lengths of stay in community hospitals. Although these changes have significantly contributed to the declining use of community hospitals, they have had less effect on demand for care in VA hospitals because VA hospitals do not depend financially on payments from third-party insurance and, until recently, VA hospitals did not have comparable programs. VA, however, began systemwide implementation of its own primary care program in 1994 and established a systemwide preadmission screening program in August 1996. Unlike preadmission screening programs of health insurers, however, the VA program does not financially penalize a physician or hospital if a patient admitted to a hospital is determined to need less care or a patient stays beyond the number of days determined appropriate for the condition(s) being treated. In addition to providing financial incentives for physicians to limit referrals to specialists and admissions to hospitals, HMOs and other managed care plans control use of specialists and hospital care through primary care case management. The objective of case management is to coordinate and organize health care resources to address patients’ specific medical problems and to control the cost and volume of the health services delivered. Each insured individual selects or is assigned to a case manager through whom all medical care (including hospital and specialty care) is provided or approved. Primary care case management may take place either in a risk-based prepaid health care setting, such as an HMO, or in a nonrisk-based fee-for-service system. For example, 17 states participating in Medicaid managed care in 1993 operated primary care case management programs. Under these programs, recipients have a specific primary care doctor or provider who oversees their care. Providers are paid on a fee-for-service rather than a risk basis. Medicaid recipients enrolled in primary care case management plans obtain access to care through a primary care physician who controls (acts as a gatekeeper) and coordinates the delivery of health services in a cost-conscious way. Primary care case management did not significantly contribute to the declining use of VA hospital care. In the past, VA care was episodic, with veterans appearing at the emergency room or outpatient clinic when they were sick. 
The more traditionally operated general medicine clinics do not always pair the veteran with the same physician, so no single physician may be responsible for the veteran's care. One of the objectives set forth by VA's Prescription for Change was to establish primary care as the central focus of patient treatment. Although only 20 percent of VA users perceived that one provider or primary care team was in charge of their care in 1994, 72 percent of users in 1996 were assigned a primary care provider. VA's goal is to have 80 percent of users enrolled in primary care during fiscal year 1998.

While prospective payment gives hospitals incentives to reduce lengths of stay and the number of ancillary services provided, it does not give incentives to control hospital admissions. One way to control unnecessary hospital admissions is through preadmission certification of the medical necessity of acute, inpatient hospital services. Under preadmission certification, the insurer must review and approve the need for admission (other than in an emergency) beforehand. Hospital preadmission certification can also effectively identify potential candidates for more cost-effective alternatives to inpatient care, such as home health care. Such certification has become common in private health insurance policies and in HMOs. An official of the Health Insurance Association of America (HIAA) estimated that about 75 percent of private-sector employers now purchasing health insurance for their employees want a hospital preadmission certification program included in their overall health care package. Beneficiaries or their physicians typically have to contact their insurers at the time of a nonemergency admission to the hospital to obtain certification that the insurer will pay the hospital.

Similarly, all fee-for-service health plans participating in the Federal Employees Health Benefits Program (FEHBP) must operate hospital preadmission certification programs. For example, the governmentwide Blue Cross and Blue Shield Service Benefit Plan requires that the enrollee or enrollee's doctor check with the local plan before the enrollee is admitted to a hospital (or within 2 business days after the day of a maternity or emergency admission). Precertification allows the plan to evaluate the medical necessity of the proposed hospital admission and to determine the number of days of hospital care authorized for treating the enrollee's condition. If a policyholder is admitted to the hospital without precertification, the plan reduces benefits by $500, even if the admission was medically necessary. If the plan determines that the hospitalization was not necessary, it will not pay inpatient hospital benefits. If the plan determines the admission to be medically necessary but part of the stay not to be medically necessary, the plan will not pay inpatient hospital benefits for the portion of the stay that was not medically necessary.

Insurers' preadmission certification requirements did not significantly contribute to the declining demand for VA hospital care between 1980 and 1995 because the VA system had few financial incentives to provide care in the most cost-effective setting. Even in those cases in which a private health insurer's preadmission certification requirement applied, failure to obtain such certification or to admit the patient after certification was denied did not affect hospital revenues. A VA hospital that admits a patient who does not need hospital care incurs no penalty.
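By contrast, the private-sector Service Benefit Plan rules just described reduce to a small amount of explicit decision logic. The sketch below encodes them as stated in the text; the function and its inputs are illustrative assumptions, not the plan's actual claims system.

```python
# Illustrative encoding of the precertification rules described above for
# the governmentwide Blue Cross and Blue Shield Service Benefit Plan.

PRECERT_PENALTY = 500  # benefit reduction for failing to precertify

def inpatient_benefit(covered_charges: float, precertified: bool,
                      admission_necessary: bool,
                      unnecessary_share_of_stay: float = 0.0) -> float:
    """Inpatient hospital benefit payable for a stay."""
    if not admission_necessary:
        return 0.0  # unnecessary hospitalization: no inpatient benefits
    # No benefits for the portion of the stay found medically unnecessary.
    benefit = covered_charges * (1.0 - unnecessary_share_of_stay)
    if not precertified:
        # $500 reduction even if the admission was medically necessary.
        benefit = max(benefit - PRECERT_PENALTY, 0.0)
    return benefit

print(inpatient_benefit(10_000, precertified=False, admission_necessary=True))  # 9500.0
print(inpatient_benefit(10_000, precertified=True, admission_necessary=False))  # 0.0
```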
In fact, VA's past resource allocation methods gave medical centers a financial incentive to admit patients whose care could have been provided more efficiently in an outpatient setting and to keep them in the hospital as long as possible. VERA is intended to overcome this problem and provide financial incentives for VISNs to provide care in more cost-effective settings. As noted, however, VERA does not provide financial incentives for individual physicians to use more efficient practices.

We reported in July 1996 that VA, unlike private-sector health care providers, had no systemwide external preadmission screening program or other utilization review program to provide incentives to ensure that only patients who need hospital care are admitted and that patients are discharged as soon as medically possible. In response to our recommendation that it establish an independent external preadmission certification program, the Veterans Health Administration, in August 1996, issued a directive requiring VISNs to establish utilization management programs to assess, monitor, and evaluate the appropriateness of the level of care provided by their facilities. By September 30, 1996, facilities had substantially implemented preadmission review of 100 percent of planned admissions to determine each patient's most appropriate level of care, as well as continuing stay reviews to determine the appropriateness of each additional day of acute hospitalization. Each VISN was to determine the design and extent of the continuing stay reviews. The directive also said that each network was to ensure that facilities establish a process for coordinating referrals and arrange inpatient and outpatient alternatives to acute hospitalization for each patient. The directive states that the outpatient alternatives should include appointments at primary care clinics (the preferred option) or specialty clinics; urgent care evaluation units; outpatient care evaluation units; temporary lodging; or observation beds.

Expanded insurance coverage of home health care has helped reduce community hospital admissions and lengths of stay. Both public programs, such as Medicare and CHAMPUS, and private insurance have expanded coverage of home health care, particularly when such care is considered less expensive than continued hospital care or an alternative to hospital care. Although VA also provided home health care during our study period (1980 to 1995), the availability of such care was more limited. For chronically and catastrophically ill patients, home health care may (1) reduce the number or length of rehospitalizations, (2) benefit the patient, and (3) cost less than hospital care for many patients who would otherwise remain in the hospital if home care were not available. The increased demand for home health care also reflects many Americans' desire for treatment options that allow autonomy, functional independence, quality of life, and dignity, while providing needed support. With the implementation of the Medicare inpatient PPS in 1983, use of the home health benefit was expected to grow as patients were discharged from the hospital earlier in their recovery periods. Expenditures changed little in the next 5 years, however. Home health expenditures grew significantly after home health coverage was broadened and program controls were reduced in the late 1980s. Figure 4.2 shows the growth in Medicare home health visits per 100,000 beneficiaries between 1978 and 1993.
The extent to which home health care has helped decrease hospital lengths of stay has not been quantified. Nevertheless, the availability of home health care has clearly helped make shorter stays possible.

Although home health care has been a Medicare benefit since the program's inception, changes in the legal and regulatory provisions governing the home health benefit, together with changes in HCFA's policies, have played a major role in increased use of the benefit. Initially, Medicare provided a limited posthospital home health care benefit of up to 100 visits per year. Benefits were available only following discharge from a hospital and had to be provided within 1 year after the patient's discharge and for treating the illness that caused the hospitalization. These restrictions were eliminated by the Omnibus Reconciliation Act of 1980. Other important restrictions, however, remained. For example, under HCFA's interpretation of the law, home health care was available only on a part-time and intermittent basis. After HCFA's interpretation of this and other benefit coverage requirements was struck down in a 1988 lawsuit (Duggan v. Bowen), Medicare coverage was further broadened. As a result of the lawsuit, HCFA revised its home health guidance to cover home health care that is part time or intermittent, enabling home health agencies to increase the frequency of visits. In addition, patients now qualify for skilled observation by a nurse or therapist if a reasonable possibility of complications or of a need to change treatment exists. Moreover, the benefit now allows maintenance therapy, that is, therapy services required for the patient to simply maintain function. Previously, patients were eligible for therapy only if they were expected to show improvement from such services. These changes made Medicare home health care available to more beneficiaries, for less acute conditions, and for longer periods of time. For example, in 1992, about one-third of Medicare home health beneficiaries entered the program without a prior hospital stay during the year and, of those who had been hospitalized, only half had been hospitalized within the 30 days before receiving home health care.

Both the number of Medicare beneficiaries receiving home health services and the number of services received by each beneficiary have increased significantly. In 1989, 1.7 million Medicare beneficiaries received home health services; by 1993, this number had grown to 2.8 million. During the same time, the number of visits provided to beneficiaries receiving home health care more than doubled, from an average of 26 visits per year in 1989 to an average of 57 visits per year in 1993.

Linking these increases to decreased use of hospital care is difficult, however. As discussed, the largest increases in home health visits did not occur during the 5 years following implementation of the PPS. During that period, however, the Deficit Reduction Act of 1984 reduced the number of intermediaries processing home health claims, and HCFA intensified education of the home health intermediaries to promote more consistency in claims reviews. These improved controls resulted in an increased claim denial rate between 1985 and 1987. Thus, reductions in home health use may have offset any increased use of home health care to shorten hospital lengths of stay. Although controls over home health care improved during the mid- and late 1980s, they have largely deteriorated since then, contributing to the growth in benefit payments.
The Congress, in October 1992, authorized DOD to establish a program for individual case-managed home care of military beneficiaries with extraordinary medical or psychological disorders. The program grew out of two demonstration projects intended to test whether expanded home care benefits, coupled with case management, could reduce medical costs and improve services to CHAMPUS beneficiaries. The original program focused on serving patients who, in the absence of case-managed home care, would remain hospitalized.

Although private health insurance plays a comparatively small role in financing home health care, home health care is private insurers' fastest growing benefit. Between 1989 and 1993, private health insurance payments for home health services increased from $0.4 billion to $2.5 billion (see fig. 4.3). Home health payments increased 13.6 percent between 1992 and 1993, compared with an increase of 7.9 percent for payments for hospital care, which had the second highest rate of increase.

VA home health care benefits have also grown, though more modestly than such benefits under private health insurance. VA's efforts to meet veterans' home health care needs focus on providing long-term care services for chronic medical conditions as well as shorter term services for acute medical conditions. VA's Hospital-Based Home Care (HBHC) program most often provides care to those with chronic conditions. Veterans requiring short-term skilled care, often following a hospital stay, generally receive services from community-based providers. VA either arranges for Medicare to pay for eligible veterans to receive home care from community-based providers or, under its fee-basis program, pays community-based providers to provide care for those not eligible for Medicare.

HBHC is an extended-care program designed to meet the long-term care needs of veterans who have chronic multiple medical and psychosocial problems, a terminal illness, or a need for posthospital rehabilitation or monitoring. The objectives of the program are to provide primary care services to homebound patients; create a therapeutic and safe home environment; support the caregiver—the veteran's spouse, other family member, or friend—in caring for the patient; reduce the need for, and provide an alternative to, hospitalization or other institutionalization; promote timely discharge of patients from hospitals or nursing homes; and provide an academic and clinical setting for students of the health professions. VA's HBHC program, begun in 1972, had been implemented in VA's 173 hospitals by fiscal year 1975. In fiscal year 1994, VA served 9,953 veterans under the program.

The fee-basis program, the second method VA uses to provide home health services, involved nearly all VA hospitals in fiscal year 1995. The hospitals use the program to purchase skilled home health services from community-based providers. In fiscal year 1994, VA spent $27.3 million on fee-basis home health care services for about 12,800 patients. Most veterans in the program receive short-term home health care services for acute medical conditions, such as hip fractures or surgical wounds. Skilled nursing is the predominant service covered under the fee-basis program.

Finally, VA provides homemaker/home health aide services for veterans who otherwise would be placed in a nursing home under a pilot program implemented in April 1993 in response to Public Law 101-366.
Although the program was initially limited to services for veterans with service-connected disabilities, Public Law 103-452 expanded eligibility to include all veterans, and the Veterans' Benefits Act of 1997 made the program permanent. Under the pilot program, a VA facility provides primary health services for veterans receiving homemaker/home health aide services. Community health nurses and social workers select a licensed home health agency to provide the homemaker/home health aide services. The continued need for the services is reassessed every 3 months, and the cost of homemaker/home health aide services on a per patient basis is limited to 65 percent of the average per diem costs of VA nursing home care units. All VA medical centers may participate in the pilot program. In 1996, 118 medical centers operated pilot programs, which had an average daily census of about 1,457.

In addition to the veterans receiving hospital- and fee-based care, VA facilities referred about 19,000 Medicare-eligible veterans to Medicare-certified home health agencies in fiscal year 1994. Medicare, rather than VA, paid for the home health services provided to such veterans.

The rapid expansion of hospice care benefits from 1978 through 1993 has reduced hospital use by the terminally ill. Although VA also offers hospice benefits, its benefits were primarily for inpatients and limited to selected medical centers until 1993. As a result, these benefits did not significantly affect demand for inpatient hospital care between 1980 and 1995.

Hospice care involves a medically supervised program of home or inpatient palliative and supportive care for a terminally ill patient and the patient's family. Specialized care for terminally ill patients began in Europe in the 1800s, but in the United States, the first hospice was not formally organized until 1974. Medicare's 1983 addition of a hospice benefit helped to rapidly expand hospice care: the number of hospices increased from 158 in 1985 to 1,459 in 1994. The number of Medicare-covered hospice days per 100,000 Medicare beneficiaries increased from 3,270 days in 1986 to 19,864 days in 1993. (See fig. 4.4.) Virtually all terminally ill Medicare beneficiaries are now eligible for hospice care. Until recently, coverage was limited to four periods of care—two 90-day periods, one 30-day period, and a final period of unlimited duration. The Medicare hospice benefit also offers financial incentives for hospices to provide care in the patient's home rather than in a facility.

Other health care programs also initiated or expanded hospice benefits. For example, over 30 states had added hospice benefits under their Medicaid programs by 1991, and DOD's direct delivery system and CHAMPUS authorized a hospice benefit in 1991. Similarly, many private health insurers covered hospice benefits by the early 1980s. Although hospices mainly serve patients with cancer, they also serve a broad range of other terminally ill patients, such as patients with acquired immunodeficiency syndrome. Moreover, an estimated 15 percent of the children who die in the United States could potentially benefit from hospice services.

All terminally ill veterans are eligible to receive hospice care from VA with no limits on the length of time covered. VA's Commission on the Future Structure of Veterans Health Care reported in November 1991 that only 45 VA medical centers had hospice programs as of October/November 1990. One year later, however, VA reported that all of its medical centers provided hospice care.
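The discussion now turns to the likely effects of VA's new resource allocation and utilization review methods. Before assessing VERA's effects, it may help to restate its capitation arithmetic from the fiscal year 1997 figures cited earlier ($2,596 per Basic Care user, $35,707 per Special Care user). The sketch below is a simplification: the multiplicative labor-cost adjustment and the example user counts are illustrative assumptions, not VERA's actual method.

```python
# Simplified sketch of a VERA-style capitation allocation using the fiscal
# year 1997 per-user rates. The allocation depends on users, not services.

BASIC_RATE = 2_596     # dollars per Basic Care user, FY 1997
SPECIAL_RATE = 35_707  # dollars per Special Care user, FY 1997

def vera_allocation(basic_users: int, special_users: int,
                    labor_cost_index: float = 1.0) -> float:
    # Fixed amount per user regardless of the types or volume of services
    # provided, adjusted here (as an assumption) by a simple multiplier
    # for geographic differences in labor costs.
    base = basic_users * BASIC_RATE + special_users * SPECIAL_RATE
    return base * labor_cost_index

# Hypothetical VISN with 100,000 Basic Care and 5,000 Special Care users
# in an area with labor costs 3 percent above average:
print(f"${vera_allocation(100_000, 5_000, 1.03):,.0f}")  # $451,279,050
```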
VA has developed new methods, modeled after private-sector actions, for allocating resources and monitoring the appropriateness of hospital admissions and lengths of stay. The effects of these changes on future demand for VA hospital care are uncertain, however, because of important differences between VA and private-sector programs and because the changes are recent.

VERA may help VA reduce hospital admissions as the private sector already has through prospective payment and capitation. The ultimate effect of VERA on hospital operations, however, depends on several factors. First, how effective will VERA be in changing practice patterns absent the financial risk upon which both prospective payment and capitation are based? Unlike private-sector hospitals and health plans, VISNs do not have a contractual obligation to provide their users needed health care services. Theoretically, if a VISN runs out of funds, it may deny care to any veteran, including those with service-connected disabilities. By contrast, private insurers have a contractual obligation to provide their members the full range of health care services covered by the plan. Because implementation of VERA did not begin until April 1997 and resource shifts are being phased in over several years, little is known about how VISNs and individual facilities are reacting to both increased and decreased resource allocations and the potential effects, both positive and negative, on veterans' access to health care services.

Determining the effect of VERA on VA hospitals' efficiency will be difficult because VISNs and individual facilities can and do shift costs to other programs, such as the Medicare home health and hospice programs and the Medicaid nursing home program. In other words, increased costs in other programs may offset reductions in VA costs per patient served.

Another reason why VERA's effects are uncertain relates to VISNs' decisions on allocating resources. If VISNs use VERA to provide veterans the same opportunity for VA-supported hospital care regardless of veterans' residence, then fewer funds will be available to support existing VA hospitals and more funds will be allocated to purchase care from community hospitals closer to veterans' homes. This is because about 89 percent of veterans live more than 5 miles from a VA hospital providing acute medical and surgical care, and many veterans—given a choice between care in non-VA facilities close to their homes and more distant VA facilities, with no difference in out-of-pocket costs—would most likely choose non-VA care.

Although it is too early to evaluate the effectiveness of VA's new preadmission screening and continuing stay review requirements, data from the Washington, D.C., and Martinsburg, West Virginia, VA medical centers indicate that, in both centers, about 45 percent of acute inpatient admissions and about 60 percent of acute days of care did not meet standards for acuity or intensity of care. Preliminary data from VISN 5 (Baltimore) suggest that the new reviews are having a limited effect on reducing unnecessary hospital admissions and excessive lengths of stay in that area. VISN 5 (Baltimore) uses its reviews mainly for data collection, evaluation, and monitoring. Unlike the preadmission certification and continuing stay review programs run by private health insurers, the VA program has no similar enforcement mechanism. Private-sector community hospitals generally do not get paid if they admit patients without the insurer's prior approval, except in an emergency.
Under VA’s preadmission certification program, however, neither the hospital nor the physician authorizing the admission incurs any direct financial penalty for admitting a patient whom the screening program determined did not need to be admitted. Even without giving hospitals and physicians a direct financial stake in admission decisions, preadmission screening and continuing stay reviews should somewhat affect nonacute admissions. Data are not yet available for gauging the extent to which individual physicians are changing their admitting practices because of the review programs. Once such data are available, the need to establish the types of financial disincentives to nonacute admissions that exist in the private sector can be determined. Finally, expanded home health and hospice benefits under public and private health insurance could affect demand for VA hospital care. The availability of Medicare home health benefits, which require no beneficiary cost sharing, may have contributed to decreased use of VA as well as community hospitals. Similarly, VA’s focus on home health and hospice care, both through direct provision of services and referrals to Medicare and other programs, could further reduce VA lengths of stay. Although medical advances and changes in the payment and care management methods used by public and private health insurers did not affect demand for VA hospital care as much as demand for community hospitals, several additional factors affected VA but not community hospitals. First, VA hospitals have had a steadily declining target population since 1980, while the general population has been increasing. Second, the Medicare and Medicaid programs gave many veterans the means to obtain care from community hospitals closer to their homes than VA hospitals. As the veteran population declines, an increasing proportion is becoming Medicare eligible and using such coverage to obtain all or a portion of their hospital care from more convenient community hospitals. Finally, the growth of HMOs and preferred provider organizations (PPO) with their relatively low cost-sharing requirements has largely eliminated one of VA’s competitive advantages over community hospitals—its ability to offer veterans free care if they use VA hospitals. Recent and proposed changes in the VA system and other health care programs create considerable uncertainty about future demand for VA hospital care. For example, how will expansions of veterans’ eligibility for VA health care services and VA’s ability to buy care from and sell care to private-sector hospitals and health plans affect future use of VA hospitals? Similarly, proposals to delay Medicare eligibility and give Medicare beneficiaries the choice of establishing medical savings accounts (MSA) could increase demand for VA hospital care. On the other hand, actions to make it easier for people to maintain insurance coverage when they change jobs could decrease future demand for VA care. VA hospitals have had a steadily declining target population since 1980. The decline is expected to escalate during the next 12 years, resulting in an overall one-third reduction in the number of veterans between 1980 and 2010. In contrast, the general population has increased steadily since 1980, helping offset the effect on community hospital demand of other efforts to decrease demand. The veteran population, which numbered slightly more than 30 million in 1980, declined to about 26 million in 1995. 
In contrast, the general population increased from about 228 million in 1980 to more than 263 million in 1995. (See fig. 5.1.)

Projected changes in the veteran population by 2010 indicate that demand for VA hospital care will continue to decline unless VA acts to increase the percentage of veterans using VA hospital care. The veteran population is expected to decline another 23 percent (6.1 million) by 2010. In contrast, the general population is expected to increase by about 13.2 percent (34.7 million) in the same period. (See fig. 5.2.)

With the downsizing of the military since the end of the Vietnam War and with World War II ending over 50 years ago, the aging of the veteran population has become more pronounced. The proportion of the veteran population under the age of 45 is projected to decline from 31 to 16 percent between 1990 and 2010. In contrast, the proportion of the veteran population that is 75 years old or older will increase from approximately 5 to about 23 percent in the same 20-year period.

Although veterans' health care needs increase among older veterans, the overall decline in the number of veterans should more than offset the increased hospital use by older veterans and should further reduce the number of days of VA hospital care. If veterans continued to use VA hospital care at the same rate that they did in 1994, the number of days of care provided in VA hospitals should decline about 11 percent, from 15.4 million in 1994 to about 13.7 million by 2010. In other words, even if VA made no other changes in its health care system to reduce the amount of care unnecessarily provided in its hospitals, the declining numbers of veterans would reduce demand despite the aging of the veteran population.

These estimates may, in fact, overstate demand for VA inpatient hospital care. Between fiscal years 1994 and 1996, VA hospital days of care declined from 576 to 542 per 1,000 veterans. More importantly, days of care per 1,000 veterans aged 85 and older declined 30 percent in the 2-year period. Despite a 26-percent increase in the number of veterans 85 and older, days of care provided to veterans in the age group declined 11 percent.

One of the main reasons for the declining use of VA services by older veterans is the introduction of Medicare and Medicaid. The rate at which elderly veterans used VA hospitals dropped by 50 percent between 1975 and 1996. The introduction of Medicare and Medicaid in 1965 gave many veterans new health care options. This is important because veterans who have health insurance are much less likely to use VA hospitals than veterans without public or private insurance. Medicare, which provides hospital insurance to almost all Americans aged 65 and older and some under 65 who are disabled, gave many veterans new or improved access to health insurance. Similarly, the enactment of Medicaid improved access to health care services for some low-income veterans.

Almost immediately after the enactment of the two programs, demand for VA hospital care began to steadily decline as the Medicare and Medicaid programs were increasing demand in community hospitals. Medicare increasingly affected demand for VA hospital care between 1975 and 1996 as the veteran population aged. This is because most veterans become eligible for Medicare when they turn 65 years of age even if they were previously employed in jobs that did not provide health insurance.
VA research has confirmed that a significant portion of VA's elderly users leave VA's inpatient care system or reduce their use of VA hospital care as they become Medicare eligible. VA hospital discharges per 1,000 veterans aged 65 or older declined from 78 in fiscal year 1975 to 39 in fiscal year 1996. Hospital discharges among veterans between the ages of 45 and 64 decreased, but to a lesser extent, in the 21-year period, from 33 to 29 per 1,000 veterans. Hospital discharges increased from 19 to 25 per 1,000 veterans under age 45. (See fig. 5.3.)

The data show that the peaks in use by veterans in the two younger age groups roughly correspond to the aging of the large numbers of Vietnam-era and Korean Conflict veterans. For example, the 1985 peak in hospital use by veterans aged 45 to 64 corresponds to the period in which most Korean Conflict veterans were in this age group. Hospital use by this group of veterans subsequently began to decline as more Korean Conflict veterans reached 65 years of age. Similarly, VA hospital discharges per 1,000 veterans under age 35 have declined steadily since 1985 as most Vietnam-era veterans continue aging; discharges per 1,000 veterans aged 35 to 44 generally increased during the same time period.

Increasing enrollment in HMOs, PPOs, and point of service (POS) plans also affected demand for VA hospital care by reducing or eliminating the financial incentive for veterans to use VA hospitals. Unlike traditional fee-for-service health insurance, which typically requires policyholders to pay a significant portion of their hospital costs through deductibles and copayments, HMOs, PPOs, and POS plans generally require no or small cost sharing when policyholders obtain care from designated hospitals.

In 1985, both public and private health insurance plans were still predominantly fee for service and had significant out-of-pocket costs. Although most fee-for-service insurance provided first-dollar coverage of hospital room and board, patients often paid sizable deductibles and coinsurance for physician and ancillary services. Specifically, about 66 percent of private health insurance policies provided first-dollar coverage of hospital room and board, but 95 percent required policyholders to pay from 10 to 20 percent of hospital charges for physician and ancillary services; the remaining 5 percent required policyholders to pay 25 percent of charges. In addition, fee-for-service insurance often required policyholders to pay a specified amount of covered charges before insurance paid any benefits. Such deductibles were generally applied annually. In 1985, between 80 and 90 percent of fee-for-service health plans had deductibles for major medical benefits.

The significant cost sharing associated with fee-for-service health insurance imposes out-of-pocket expenses on veterans with such insurance when they obtain care from community hospitals. Although veterans with higher incomes are less likely to use VA facilities, this cost sharing provides a financial incentive for veterans with limited incomes to use VA rather than community hospitals because VA does not require these veterans to pay the copayments and deductibles applicable under their public or private insurance.

Fee-for-service payment methods have declined in both public and private insurance as enrollment in HMOs and other managed care plans has increased. Enrollment in HMOs increased from 9 million in 1982 to 50 million in 1994.
In 1993, however, 49 percent of American workers with health insurance still had a conventional fee-for-service plan. By 1995, that percentage had dropped to 27. Because nearly three-fourths of workers with employer-provided health insurance are now covered under a managed care plan, the financial incentive for employed veterans to use VA hospitals has largely been eliminated.

A slower shift is occurring among Medicare enrollees. Between 1987 and 1996, enrollment in Medicare risk-contract HMOs increased from 2.6 percent of beneficiaries to 10 percent. By 2002, however, enrollment is projected to be 22.9 percent of total beneficiaries. Like enrollees under other HMOs, Medicare beneficiaries enrolled in risk-based HMOs usually have minimal out-of-pocket expenses. In addition, HMOs often provide additional benefits, such as prescription drugs, not otherwise covered under Medicare.

Recent and proposed changes in the VA system and other health care programs create considerable uncertainty about future demand for VA hospital care. First, VA expects last year's expansion of eligibility for VA health care to enable it to increase VA system users by 20 percent. It is not clear, however, to what extent new users attracted to VA outpatient care through community-based outpatient clinics (CBOC) will use VA for hospital care. VA's 1998 budget proposed reinvesting all efficiency savings and using additional resources to increase its system users by 20 percent. VA expected to add a total of $5.8 billion in new resources in the next 5 years (from public and private insurers and others), starting with $737 million in 1998 and increasing to $1.7 billion in 2002. VA expected these additional resources to allow it to increase the number of veterans served by 587,000, which would increase its patient base from 2.9 million to 3.5 million in 2002. If VA attains the targeted resource levels, it could attract 587,000 new users by 2002. The recent expansions of VA's contracting authority and veterans' eligibility for care should facilitate creation of new CBOCs, which, along with VA's efforts to improve the accessibility of hospital-based clinics, will probably attract new users. It is unclear, however, whether the new users will use VA for hospital care: the likelihood that veterans will use a VA hospital drops off significantly at distances of more than 5 miles, so new users served by CBOCs far from their sponsoring VA hospitals may seek hospital care elsewhere.

The second factor that could affect future demand for VA hospital care is VA's expanded authority to buy hospital care from and sell hospital care to the private sector. This authority could increase the use of VA hospitals if VA uses it to serve more nonveterans or decrease the use of VA hospitals if VA uses it to allow veterans, such as the new users attracted to the system through CBOCs, to use community hospitals closer to their homes.

A third factor that could affect future demand for VA hospital care is delaying Medicare eligibility. As discussed, veterans tend to stop using or reduce their use of VA hospitals after they become eligible for Medicare. Thus, delaying eligibility for Medicare benefits could delay veterans' leaving the VA system. More importantly, VA could serve as an increasingly important source of health care coverage for veterans retiring before they qualified for Medicare. Many such veterans might not be able to continue coverage under their employer-provided health insurance, or such coverage might be prohibitively expensive.
MSAs, authorized under the Balanced Budget Act of 1997, are the fourth factor that could increase future demand for VA hospital care. Medicare-eligible veterans may have financial incentives to establish such accounts, enroll in the VA health care system, obtain essentially free care from VA, and then pocket the excess funds in the account. MSAs could, however, be structured to prevent people with such accounts from using other federal health benefits. The Balanced Budget Act permits the Secretary of HHS to apply rules that will ensure that such dual enrollment will not result in increased expenditures for the federal government. Under such rules, veterans enrolling in MSAs would no longer be able to use both VA and Medicare services. About half of the Medicare-eligible veterans using VA services use both VA and Medicare services.

Further changes in the private health insurance market could also affect future demand for VA hospital care. First, recently enacted legislation could make it easier for people to maintain their private health insurance when they lose or change jobs. The Health Insurance Portability and Accountability Act of 1996 (P.L. 104-191) limits to 12 months plans' ability to restrict coverage of employees' preexisting health care conditions. Before this law, plans could permanently exclude coverage of preexisting conditions. The law also made it easier for veterans to change jobs without losing health insurance coverage; this, in turn, could reduce some veterans' incentives to use VA facilities. For example, in 1994 we reported that veterans participating in focus groups told us that they use VA health care when they lack health insurance.

Although, as discussed, continued growth in managed care plan enrollment could further reduce use of VA health care, growing dissatisfaction with HMOs and other managed care plans could result in increased use of VA hospitals. For example, physicians from VA medical centers in California, Florida, New Mexico, and other states have noted an increase in the number of elderly veteran patients who seek care at VA facilities while enrolled in HMOs. Two studies at individual VA facilities found that HMO enrollment ranged from 10 percent among veterans of all ages to about 25 percent among elderly veterans.

Finally, the recent trend toward increased beneficiary cost sharing in managed care plans could provide financial incentives for veterans to obtain care from VA hospitals. One study found that HMO copayments for hospital stays rose from $4.50 a day in 1987 to $24.90 a day in 1993; for inpatient mental health care services, copayments increased from $3.39 to $14.51 per day. The study also found that the higher copayments decreased demand for services from the HMOs. For example, researchers in Washington found that adding a $5 copayment reduced visits to primary care physicians by 5 percent. It is unclear, however, to what extent increased use of VA-provided services would offset reduced use of HMO-provided services.

Because of the declining demand for inpatient hospital care, community hospitals have hundreds of thousands of unused hospital beds. Overall, about 26 percent of community hospital beds exceeded demand in 1995, and over 65 percent may exceed demand within the next 15 years.
Although a smaller share—about 14 percent—of VA's operating beds exceeded demand in 1995, actions to improve the VA health care system's efficiency, coupled with other changes in the health care marketplace, could result in 80 percent of VA's hospital beds exceeding demand within the next 5 to 10 years. With the likelihood that most hospital beds in both VA and the private sector will exceed demand within the next 5 to 15 years, the administration and the Congress will face difficult challenges and policy decisions about the future of VA hospitals.

Among the challenges VA faces concerning the closure of VA hospitals are determining the number of hospital beds it needs, their locations, and the extent to which VA should buy rather than provide hospital care. Where hospital closures are warranted, VA will face added challenges to ensure that community hospitals or other VA hospitals meet veterans' hospital care needs and to minimize the effect of the closures on VA employees and the community. With the expanded authority to sell VA's excess capacity to private-sector health plans, facilities, and providers, the administration also faces difficult decisions about the extent to which VA should increase demand for care as an alternative to closing hospitals. Because decisions to either increase demand to preserve VA hospitals or close underused hospitals would significantly affect veterans, VA employees, community hospitals, medical schools, and individual communities, the administration and the Congress face difficult challenges in determining the future of VA hospitals.

Use of both community and VA hospitals varies widely in different parts of the country. Among the possible causes of such variation are differences in health status, demographics of the veteran and general population, market penetration of managed care plans, and differences in efficiency. The number and use of community hospital beds vary significantly by census division and, even within census divisions, by state. Nationally, community hospital beds numbered about 3.3 per 1,000 population in 1995, ranging from 2.3 in the Pacific states to 4.3 in the West North Central states. Other census divisions with significantly higher-than-average operating beds and average daily censuses (ADC) were the East South Central and Middle Atlantic states; the Mountain division was well below the national averages. (See figs. 6.1 and 6.2.)

Within some census divisions, hospital use also varied significantly. For example, among South Atlantic states, Maryland and Virginia had an ADC of 1.7 per 1,000 population; the District of Columbia and West Virginia had 4.9 and 2.7, respectively. Similarly, among Mountain states, Utah's ADC was 1.1 per 1,000 population but Montana's was 3.2. Appendix I contains additional information on the number of operating beds and ADCs per 1,000 population by census division and state.

Many factors, such as differences in age, health status, and insurance coverage, could affect hospital use. For example, states with more elderly people may have greater hospital use. Similarly, regional variation in the incidence of certain diseases could result in higher use of hospital care in some areas. For example, the higher incidence of cancer in the Middle Atlantic states could cause greater hospital use there than in other areas. Medical practice in different parts of the country may also account for variation in hospital use.
For example, patients in the Northeast tend to have longer lengths of stay than similar patients in the western states. (See table 6.1.)

Finally, the market penetration of managed care may affect hospital use. States in which HMOs and preferred provider organizations have significantly penetrated the market tend to have less hospital use. In most of the nine states with hospital usage of 1.5 beds per 1,000 population or less, managed care accounted for 40 percent or more of the insurance market; in only two of the nine (Alaska and New Mexico) did managed care account for less than 20 percent of the insurance market. In contrast, of the 10 states with hospital usage of 2.7 beds per 1,000 population or higher, in only 1 state (Nebraska) did managed care account for 40 percent of the market; in 4 states, managed care had captured 5 percent or less of the insurance market. Appendix II contains additional information on managed care's market penetration by state and census division. (See fig. 6.3.)

The number and use of VA hospital beds also vary widely by VISN. Differences in the rate of use of VA hospitals correlate with regional differences in use of community hospitals, suggesting that differences in health status or medical practice may at least partially explain the variation. VA data, however, provide conflicting views of the reasons for the variation.

In fiscal year 1995, the VA system operated an average of 50,785 beds and had an ADC of 37,003. With about 2.9 million unduplicated users, the VA system operated about 18 beds per 1,000 users and had an ADC of 13 per 1,000 users. The number of operating beds per 1,000 users ranged from 10 per 1,000 users in VISN 18 (Phoenix) to 26 in VISN 3 (Bronx). Similarly, the ADC ranged from 6 per 1,000 users in VISN 18 (Phoenix) to 21 per 1,000 users in VISN 3 (Bronx). (See figs. 6.4 and 6.5.) Although the use of surgical beds varied somewhat by VISN, the use of medicine and psychiatric beds varied most. The ADC in medicine beds ranged from 3 to 11 per 1,000 users; the ADC in psychiatric beds ranged from 3 to 8 per 1,000 users. Appendix VI provides additional details.

Variation in the use of VA hospitals tends to mirror the variation in use of community hospital beds. The two census divisions with the lowest community hospital use per 1,000 population—Mountain and Pacific—contained four of the five VISNs with the lowest VA hospital use. Similarly, the census division with the highest community hospital use—Middle Atlantic—contained the three VISNs with the highest rate of VA hospital use. Appendix VII compares operating beds and ADCs for VISNs with their corresponding census divisions.

Several possible reasons explain veterans' varying use of VA hospitals. First, the variation may reflect differences in efficiency among VISNs and individual facilities. VA's resource allocation models have consistently attributed much of the variation in VA costs to inefficiency. The Resource Allocation Method, the Resource Planning and Management system, and the new Veterans Equitable Resource Allocation (VERA) method all found that VA's costs for treating similar patients varied widely by facility and VISN and concluded that inefficiency caused most of the variation. Differences in health status could also help explain the variation in hospital use. To the extent that veteran users in some VISNs have poorer health than those in other VISNs, higher hospital use can be expected, and it may not be reasonable to expect such VISNs to decrease utilization rates.
Similarly, differences in the age of the veteran population can affect hospital use. Hospital use generally increases with age; therefore, VISNs serving older veterans could be expected to have higher rates of hospital use. VA, however, in developing VERA, concluded that the higher hospital use in some VISNs could not be explained by differences in veterans' ages.

Insurance coverage could also affect the extent of VA hospital use. Veterans with public or private insurance are much less likely to use VA hospital care than are the uninsured. Thus, variation in the rate of insurance coverage among VISNs could help explain variation in hospital usage. Similarly, the market penetration of managed care plans could help explain the lower hospital use in some VISNs. This is because veterans enrolled in managed care plans can generally obtain hospital care closer to their homes with low cost sharing.

Finally, differences in medical practice may explain variation in hospital use. As previously discussed, hospital lengths of stay for short-term hospitalizations are generally longer in the Northeast than in the West. This could help explain the higher rate of hospital use in VISNs in the Middle Atlantic states.

VERA and the Veterans Health Administration's 1997 performance measures for VISN directors, however, give conflicting views of the extent to which such variation is due to differences in efficiency rather than medical practice or health status. Under the performance measures, VA compared the VA acute bed-days of care per 1,000 users in each VISN with Medicare bed-days of care per 1,000 beneficiaries in the comparable census division. VA defined fully successful performance as reducing VA bed-days of care to match local Medicare rates. Of the seven VISNs required to reduce acute bed-days of care by 20 percent or more to achieve fully successful performance, VERA designated four to receive additional resources. The VISN required to reduce acute bed-days of care the most—37 percent—was VISN 19 (Denver), which VERA identified as needing a 6.6-percent increase in funding. Similarly, VISN 2 (Albany) and VISN 4 (Pittsburgh)—whose acute care rates were already below the Medicare rate—were found under VERA to be among the less efficient VISNs. Under VERA, VISN 2 (Albany) would absorb the second largest decrease in funding. Under the performance measures, however, it would be expected to absorb the funding decrease without reducing acute bed-days of care.

Another performance measure that provides a conflicting view of VISN efficiency is reduced operating beds. Under this goal, fully successful performance is defined as reducing operating beds to match assigned targets. As was the case with days of care, however, the VISNs with the largest targeted reductions in operating beds are among those qualifying for the largest resource increases under VERA. Ten of the 11 VISNs expected to close 300 or more operating beds in fiscal year 1997 should, under VERA, receive increased resource allocations of up to 15 percent. In contrast, of the 11 VISNs expected to close fewer than 300 beds, 6 should, under VERA, receive fewer resources. For example, VISN 2 (Albany) is not expected to close any operating beds but should receive a 7.5-percent decrease in funding. Table 6.2 compares the change in resource allocation under VERA with the 1997 network directors' hospital performance measures.
Because VA is phasing in VERA's implementation, the actual shifts in resource allocations are smaller than the shifts that would have occurred had VERA been fully implemented in 1997.

The health care literature identifies many different approaches for estimating excess hospital beds. Each approach has certain limitations. For example, some approaches estimate current excess capacity; others focus on future needs. To provide a range of estimates of current and future excess capacity, we developed estimates using three approaches:

Target occupancy rates. Under this approach, excess capacity is defined as the number of beds that would need to be eliminated to raise actual occupancy rates up to a prescribed efficient level. For example, if average occupancy were 60 percent and the target rate were 85 percent, about 29 percent of beds would be excess (the average daily census could be served with about 71 percent of existing beds). An 85-percent occupancy level is generally considered optimum; in other words, a hospital is not considered to have excess capacity until its average occupancy drops below 85 percent.

Estimates of medically unnecessary days of care. Under this approach, a percentage of the days of care provided is assumed, on the basis of studies, to be medically unnecessary. A 1970s study used this approach and estimated that 264,000 community hospital beds were in excess. The study assumed that one-third of the days of care provided by community hospitals were medically unnecessary. Between 1980 and 1995, community hospital beds declined by about 115,000 beds, mainly in response to actions taken to reduce medically unnecessary days of care. Estimates derived from this approach are often added to estimates of excess capacity derived through the first approach.

Target beds per 1,000 population. Under this approach, excess capacity is the difference between operating beds and some target number of beds per 1,000 population. For example, the Institute of Medicine, in a 1976 report, set a target to reduce beds per 1,000 population from 4.4 to 4.0. Unlike the target occupancy rate approach, this approach can be used to predict future bed needs by basing the estimates on projected population.

The use of target occupancy rates is the most conservative approach for estimating excess beds because it basically counts empty beds at the time of the study. It does not consider changes that could affect either the future supply of or demand for hospital beds. In addition, it assumes that current hospital utilization rates are appropriate, that is, that all admissions and lengths of stay are appropriate. Just as the use of target occupancy rates may understate the extent of excess beds, the other two approaches may overstate realistic reductions in excess beds. This is because reaching such targets would necessitate a level of uniformity in medical practice that has so far been out of reach.

Community hospitals have far more beds than needed. Overall, community hospitals had 873,000 beds in 1995, and 228,000 (26 percent) of these were unused and could have been closed without increasing hospital occupancy rates above the 85-percent rate generally considered optimal. Although the number of hospital beds per 1,000 population varies significantly by state and census division, all areas of the country have far more hospital beds than they need. To the extent such variation is reduced or eliminated, excess beds will probably increase in the next 10 to 15 years.
For example, if hospitals nationwide reduce usage to the levels already reached in California and several other western states, as many as 610,000 (65 percent) community hospital beds could become excess even with projected population growth. The Pew Health Professions Commission estimated in 1996 that over 60 percent of hospital beds may be excess and that as many as half of the nation's hospitals may close.

Defining excess capacity as the difference between operating beds and the number of beds that would be needed to meet demand at the 85-percent occupancy level indicates that 26 percent (228,000) of the approximately 873,000 community hospital beds were excess in 1995. This is nearly double the excess capacity estimated in 1975 using this method. During the 20-year period, the number of operating beds in community hospitals dropped by 69,000, but the ADC dropped by 158,000. By 1995, community hospitals' occupancy rate had declined to under 63 percent.

All but three states (Delaware, New York, and Hawaii) had excess beds of more than 10 percent in 1995. Seven states (Alaska, Kansas, Oklahoma, Oregon, Texas, Utah, and Wyoming) had excess beds of more than 35 percent. On the basis of an 85-percent target occupancy rate, excess capacity ranged from 12 percent in the Middle Atlantic states to about 35 percent in the West South Central states and 32 percent in the Mountain states. As previously discussed, people in the Middle Atlantic states use roughly twice as much hospital care as do those in the Mountain states. Appendix III contains additional information on excess capacity by census division and state under the target occupancy rate approach.

Estimating excess capacity using the target occupancy rate approach has become increasingly problematic because of inconsistencies in how hospitals report the number of beds they have. Specifically, some hospitals report how many beds they are licensed to operate; others report staffed and operating beds. This can significantly affect estimates of excess capacity. Consider the following illustration: Hospital A is licensed to operate 100 beds but is normally staffed to operate only 50 beds. The hospital has an ADC of 45 patients. If it provides American Hospital Association (AHA) data on the number of licensed beds it has, then it has an occupancy rate of 45 percent and about 47 excess beds (only about 53 beds are needed to serve an ADC of 45 at 85-percent occupancy). If, however, the hospital reports the average number of staffed and operating beds, then it has an occupancy rate of 90 percent and no excess capacity. Because of such inconsistencies in hospitals' reporting of the number of beds, AHA discontinued reporting occupancy rates in 1995.

Implementation of prospective payment systems, use of preadmission certification requirements, and expansion of HMOs and other managed care organizations have reduced the amount of medically unnecessary care provided by community hospitals. On the other hand, as previously discussed, states in which HMOs and PPOs have significantly penetrated the market tend to have lower rates of hospital use, suggesting that further reductions are possible. Assuming that 10 percent of the days of care provided by community hospitals nationally are medically unnecessary, an additional 65,000 beds beyond the 228,000 excess beds estimated using the target occupancy rate approach would be considered excess. Moreover, assuming that 20 percent of community hospital days of care are medically unnecessary, 357,000 hospital beds would be estimated to be excess.
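Both estimation approaches applied so far reduce to short calculations. The sketch below reproduces the community hospital figures cited above; the ADC value is inferred from the 873,000 operating beds and the just-under-63-percent occupancy rate, so it should be treated as an approximation.

```python
# Target occupancy rate approach, optionally combined with an assumed share
# of medically unnecessary days of care (1995 community hospital figures).

TARGET_OCCUPANCY = 0.85

def excess_beds(operating_beds: float, adc: float,
                unnecessary_share: float = 0.0) -> float:
    # Beds needed to serve the (possibly reduced) average daily census at
    # the target occupancy level; the remainder is excess.
    needed = adc * (1.0 - unnecessary_share) / TARGET_OCCUPANCY
    return operating_beds - needed

BEDS, ADC = 873_000, 548_250  # ADC inferred from the cited occupancy rate

print(round(excess_beds(BEDS, ADC)))        # 228,000 (26 percent)
print(round(excess_beds(BEDS, ADC, 0.10)))  # 292,500: about 65,000 more
print(round(excess_beds(BEDS, ADC, 0.20)))  # 357,000
```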
By 1990, the Institute of Medicine's 1976 goal for reducing the number of community hospital beds to four beds per 1,000 population had been met, and, by 1995, the number of community beds had fallen to 3.3 per 1,000 population. Hospital demand, however, averaged only about 2.1 beds per 1,000 population that same year. As occupancy rates continue to fall, researchers are once again considering what the appropriate target should be. For example, one market forecaster from California indicated that hospital use in California is below 45 percent of licensed capacity and that hospital demand currently averages only 1.1 beds per 1,000 population. The forecaster estimated that, in California, demand for hospital beds will drop to 0.8 bed per 1,000 population from 2000 to 2005.

Recognizing the continued shift of care from hospitals to outpatient and other more cost-effective settings and the development of new technologies and medical practices that preclude or shorten hospital stays, we chose two targets—two beds per 1,000 population and one bed per 1,000 population—to estimate future bed needs. The two beds per 1,000 population target assumes that further reductions in hospital admissions and lengths of stay will be minimal—current hospital demand averages 2.1 beds per 1,000 population. The one bed per 1,000 population target assumes more significant reductions in future demand such that demand nationally would be slightly lower than current demand in Alaska, Utah, and Washington—1.1 beds per 1,000 population—but higher than the projected future demand in California mentioned above—0.8 bed per 1,000 population.

At a target of two beds per 1,000 population, about 347,000 community hospital beds could be considered in excess of need using 1995 population data. Because the number of operating beds as well as hospital usage differ widely by state, to reduce excess beds to the target of two beds per 1,000 population (using 1995 population data) would necessitate closing about half the hospital beds in the Middle Atlantic, East South Central, and West South Central states. In contrast, Pacific states could reach this target by closing only about 14 percent of their community hospital beds.

Hospital use in 18 states, primarily in the Mountain and Pacific census divisions, is already below the level needed to support two hospital beds per 1,000 population. Assuming that hospital use in those states does not increase to the national average, we substituted the estimate of current excess capacity derived from the target occupancy rate approach for the lower estimate of excess capacity derived from applying the two beds per 1,000 population target. This adjustment increases the overall estimate of excess beds to about 370,000, or about 42 percent of the operating beds in 1995.

Population growth—assuming no new hospital beds are added—will reduce the excess capacity from 370,000 beds to about 272,000 beds by 2010. Adding projected population growth lowers the estimates of excess capacity in all census divisions but most affects the South Atlantic, Mountain, and Pacific states. In other areas, such as the Middle Atlantic and New England states, population growth is not expected to significantly reduce excess capacity.

We estimated that at a target of one bed per 1,000 population (using 1995 population data), about 610,000 community hospital beds would be excess. Population growth—again assuming no added capacity—would reduce excess beds to about 572,000 by 2010.
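The target beds per 1,000 population approach is equally direct. The sketch below uses the population totals cited in chapter 5 (about 263 million in 1995, growing by about 34.7 million by 2010); it omits the state-level adjustment described above that raises the two-bed estimate from 347,000 to 370,000 (272,000 by 2010).

```python
# Target beds per 1,000 population approach (national calculation only).

OPERATING_BEDS = 873_000  # community hospital beds, 1995

def excess_at_target(target_per_1000: float, population_millions: float) -> float:
    needed = target_per_1000 * population_millions * 1_000
    return OPERATING_BEDS - needed

print(round(excess_at_target(2.0, 263.0)))  # 347,000 excess in 1995
print(round(excess_at_target(1.0, 263.0)))  # 610,000 excess in 1995
print(round(excess_at_target(1.0, 297.7)))  # ~575,000 by 2010, near the
                                            # report's 572,000 estimate
```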
Appendix IV contains detailed estimates by census division and state based on 1995 population; appendix V contains estimates based on projected 2010 population.

A number of previous studies have also predicted dramatic declines in community hospital beds in the next 5 to 10 years. For example, a 1995 survey of hospital executives suggested that the number of community hospital beds will probably decline in the next decade at an average rate of 5 percent per year. Similarly, the Pew Health Professions Commission, in a 1995 study, predicted that health care will continue to shift from a supply orientation to a demand-driven system, resulting in as many as half of the nation's hospitals closing and the loss of perhaps 60 percent of hospital beds. Finally, the health research organization Interstudy predicted that 40 percent of all U.S. hospitals could be closed, merged, or converted to other uses by the year 2000.

As in the private sector, VA hospitals also have excess beds. About 14 percent of VA hospital beds exceeded demand in fiscal year 1995, but more than 80 percent could exceed demand if VA reduces hospital use systemwide to the level already achieved by its Northern California Health Care System (NCHCS). The VA system closed over 5,000 beds in fiscal year 1996, bringing the total beds closed to over 38,000 since 1980. Veterans' use of VA hospitals varies significantly by VISN, just as use of community hospitals varies by census division and state.

Defining excess capacity as the difference between operating beds and the number of beds that would be needed to serve the ADC at an 85-percent occupancy level indicates that VA had only about 7,300 excess hospital beds in fiscal year 1995, half as many excess beds as it had 5 years earlier. (See table 6.3.) Applying this approach to VISNs shows that many of the VISNs with the most excess beds already operate the fewest hospital beds per 1,000 users in the VA system. For example, VISN 18 (Phoenix) and VISN 4 (Pittsburgh) have the same number of excess beds (304), although VISN 18 (Phoenix) operated fewer than half as many beds per 1,000 users. Under this approach, the VISN with the most excess beds is VISN 16 (Jackson) with 844 excess beds; the VISN with the least excess beds is VISN 10 (Cincinnati) with only 105 excess beds. (See app. VIII.)

Unlike community hospitals, which have felt the effects of prospective payments, preadmission screening, and managed care on the extent of medically unnecessary care for over 10 years, the VA system has only recently focused on reducing medically unnecessary days of care (see ch. 4). As a result, estimates of excess VA hospital beds need to consider the likely effect of efficiency improvements on future bed needs. In 1985, we reported that 43 percent of the medical and surgical days of care in VA hospitals could have been avoided. Since then, a number of studies by VA researchers and VA's Office of Inspector General (OIG) have found similar problems. For example, a January 1996 VA study reported that about 40 percent of the admissions to acute medical and surgical services were nonacute. The study also reported that about 30 percent of the days of care in the acute medical and surgical services of the VA hospitals reviewed were nonacute. In the study, reviewers from 24 randomly selected VA hospitals assessed the appropriateness of 2,432 fiscal year 1992 admissions to acute medical, surgical, and psychiatric services.
The study found similar rates of nonacute admissions and days of care in all 24 hospitals. Many factors accounted for the nonacute admissions, including lack of outpatient care alternatives, conservative physician practices, delays in discharge planning, and social factors such as homelessness and long travel distances. Conservatively assuming that 10 percent of the days of care provided by VA hospitals in fiscal year 1995 were medically unnecessary, 4,353 beds in addition to the 7,252 estimated using the target occupancy rate approach would be considered excess. If, as suggested by VA studies, 40 percent of the days of care were assumed to be medically unnecessary, total excess beds would increase to 24,667, roughly half of VA's operating beds. (See table 6.4.) Because hospital use varies significantly by hospital and VISN, the same level of medically inappropriate care may not apply in each hospital and VISN. The studies, however, have generally found significant levels of medically unnecessary care at every VA hospital reviewed. Appendix VIII has estimates by VISN of excess beds based on different assumptions about the level of medically unnecessary care.

Because the veteran population differs from the general population, the beds-per-1,000-population targets used to estimate community hospitals' bed needs do not apply to VA hospitals. For example, private-sector hospitals have cribs and bassinets that VA hospitals do not have; the veteran population excludes children and is predominantly male; VA hospitals include long-term medical and psychiatric beds not generally found in community hospitals; and estimates of community hospital beds already include veterans' hospital care needs, because most veterans rely on community hospitals for care. As a result, we developed three alternative population-based targets: actual hospital usage generated in VA's NCHCS, actual hospital usage in VA's VISN 18 (Phoenix, including Arizona, New Mexico, and parts of Texas), and VA's national average hospital usage.

NCHCS most closely resembles the outpatient-based health care system envisioned for VA's future. When VA closed its hospital in Martinez, California, in 1991 because of concerns about its safety during a possible earthquake, veterans in NCHCS' catchment area were left with limited access to hospital and outpatient care. Before its closing, the Martinez hospital had an ADC of 235 patients. A replacement outpatient clinic, which became a prototype for the VA system, opened in November 1992. The clinic included modern outpatient surgery capabilities, sophisticated imaging technology, and attractive surroundings. As a result, much of the care that previously required a hospital admission could now be provided on an outpatient basis. VA also reached an agreement with the Air Force that allowed VA to operate 55 beds at the David Grant Air Force Medical Center at Travis Air Force Base, with another 18 "swing" beds available when needed. In addition to the hospital beds at Travis, NCHCS clinics place veterans needing hospital care at other VA hospitals, primarily those at Palo Alto and San Francisco, and, in the case of medical emergencies, in community hospitals.

In 1995, the four NCHCS clinics served over 33,000 veterans, providing a total of 338,000 outpatient visits. Veterans served by the four clinics were admitted to hospitals about 2,800 times, primarily for general medicine services but also for surgical, neurological, and psychiatric services.
This admission rate, about 85 admissions per 1,000 veterans served, supported an ADC of about 75 patients, or about 2 beds per 1,000 veterans served. Assuming an 80-percent occupancy rate, NCHCS needed to operate about 2.5 beds per 1,000 users. This is a conservative estimate of the number of beds VA needed to operate because it (1) assumes an 80-percent rather than an 85-percent occupancy rate and (2) includes use of community hospital beds for emergency care in estimating the need for VA beds. Applying the target of 2.5 beds per 1,000 users to the VA system yields a systemwide need for only about 7,230 hospital beds. Even if VA's users increase by 20 percent as VA predicts and they generate hospital demand at the same rate as current users, VA would need only 8,676 hospital beds. However, new users attracted through community-based clinics are unlikely to generate as much hospital demand as current users because new users have indicated they are more likely to choose their local hospital rather than a distant VA facility. Reaching this target would require closing about 85 percent of VA's current operating beds.

VA's VISN 18 (Phoenix) has the lowest VISN-wide hospital use in the VA system, supporting an ADC of 6 per 1,000 unduplicated veteran users in fiscal year 1995. Assuming an 85-percent occupancy rate, VISN 18 (Phoenix) needs to maintain about seven beds per 1,000 users to support its hospital demand. Applying a target of seven beds per 1,000 users nationally yields a systemwide need for only 20,230 hospital beds to support VA's 1995 user population.

Systemwide, VA had an ADC of 13 patients per 1,000 veteran users in fiscal year 1995. At an average occupancy rate of 85 percent, VA would need to maintain 15 beds per 1,000 veteran users to support this workload. If the VISNs that operated more than 15 beds per 1,000 users reduced their usage to the national average, then 9,445 beds would be considered excess in those VISNs, but no excess beds would be assumed in other VISNs. This is a very conservative approach; each of the VISNs with usage below the national average closed additional hospital beds in fiscal year 1996. In fact, the 11 VISNs with an ADC below the national average closed almost 2,000 beds in fiscal year 1996, about 40 percent of the beds closed in the VA system.

VA's Under Secretary for Health has noted that the traditional general acute care hospital, as an institution, will eventually become a large intensive care unit, taking care of only the sickest and most complicated patients. The Under Secretary has stated that all other medical care will be provided in outpatient care settings, at home, in hospices, or at various types of extended-care facilities. Most of the hospital beds in both VA and the private sector will likely exceed demand within the next 15 years, leading to more closings of both VA and community hospitals. Among the challenges VA faces concerning closing VA hospitals are determining the number of hospital beds it needs and their locations, determining when closing hospitals would be more cost-efficient than reducing operating beds, ensuring that community hospitals or other VA hospitals meet veterans' hospital care needs following closures, minimizing the impact of such decisions on VA employees and the community, and identifying alternative uses for closed facilities.
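The arithmetic behind the VA excess-bed estimates and the beds-per-1,000-users targets discussed above can be illustrated with one more sketch. In it, the systemwide ADC of about 37,000 patients and the user population of about 2.89 million are assumptions back-calculated from the figures reported above, not values taken directly from VA data.

    # Sketch of the VA excess-bed arithmetic; the ADC and user-count
    # inputs are back-calculated assumptions, illustrative only.
    TARGET_OCCUPANCY = 0.85
    BASE_EXCESS = 7_252    # FY 1995 excess at the 85-percent target
    ADC = 37_000           # approximate systemwide average daily census

    # Adjusting for medically unnecessary days of care (table 6.4):
    for unnecessary in (0.10, 0.40):
        additional = ADC * unnecessary / TARGET_OCCUPANCY
        print(round(BASE_EXCESS + additional))   # about 11,600 and 24,700

    # Deriving the beds-per-1,000-users targets from ADC rates:
    def beds_target(adc_per_1000_users, occupancy):
        return adc_per_1000_users / occupancy

    print(beds_target(2.0, 0.80))    # NCHCS: 2.5 beds per 1,000 users
    print(beds_target(6.0, 0.85))    # VISN 18: about 7 beds per 1,000 users
    print(beds_target(13.0, 0.85))   # systemwide: about 15 beds per 1,000

    # Applying a target systemwide (users expressed in thousands):
    USERS_THOUSANDS = 2_890
    print(round(2.5 * USERS_THOUSANDS))   # about 7,230 beds needed
    print(round(7.0 * USERS_THOUSANDS))   # 20,230 beds needed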
With its expanded authority to sell excess capacity to private-sector health plans, facilities, and providers, the administration also faces difficult decisions about the extent to which demand for care should be expanded before closing a facility. Just as decisions to close VA hospitals affect multiple stakeholders, so too would decisions to compete more directly with community hospitals. Whether the administration proposes to close a VA hospital or expand its market share, developing a process for making changes that adequately considers the needs and concerns of all major stakeholders, including veterans, VA employees, community hospitals, affiliated medical schools, and the community, will be a major challenge.

To meet current and future demand, VA faces many challenges in determining the number of hospital beds it needs and their locations. VA's past methods for estimating its bed needs, however, tended not only to build in existing excess capacity but to expand it by relying on national rather than local hospital usage. As previously discussed, VA data provide conflicting explanations for the widely varying hospital use among VISNs. Baseline data on the amount of medically necessary hospital care provided by each of its hospitals could enable VA to plan more effectively for the future.

Historically, VA has overestimated its hospital bed needs. For example, in its 1984 report, Caring for the Older Veteran, VA developed estimates of what it termed "real need." In criticizing a more conservative estimate of bed needs developed by the Congressional Budget Office, VA suggested that real need should be measured by applying the use rates from areas of the country with the highest VA hospital use rates to other parts of the country. Using this approach, VA recommended construction of 85,000 additional hospital beds by 1990, even while use of hospital beds was declining. VA estimated that it would need between 134,000 and 246,000 hospital beds by the year 2000.

VA used similar approaches in planning specific construction projects, often adding to the number of beds determined through its hospital sizing model. For example, VA tried to add 117 beds to a construction project at the Atlanta medical center on the basis of anticipated workload increases. The Office of Management and Budget, however, determined that the VA hospital sizing model had already accounted for the factors VA was using to justify the additional beds and directed that the project be scaled back.

VA used the concept of "suppressed demand" to justify hospital projects in Hawaii, Northern California, and East Central Florida that would have exceeded demand. For example, VA decided that it needed to build a new hospital in East Central Florida largely on the basis of an analysis showing that the number of VA hospital beds available for Florida veterans was below the national average: about 1.40 beds per 1,000 Florida veterans compared with 2.02 beds per 1,000 veterans nationwide. Our analysis, however, suggested that Florida veterans' lower use of VA hospitals was likely caused, at least in part, by differences between Florida veterans' health and economic status and insurance coverage and those of veterans nationwide. VA has subsequently developed plans to meet central Florida veterans' needs without building a new hospital. VA also added beds to a proposed joint venture construction project at Tripler Army Medical Center in Hawaii on the basis of perceived suppressed demand.
VA compared Hawaii veterans' rate of VA hospital use with that of mainland veterans and found that veterans were hospitalized in Hawaii at only 43 percent of the national rate. VA added 27 beds to the proposed 105-bed facility on the basis of suppressed demand. As in Florida, VA did not adequately evaluate other possible explanations for the lower-than-average use of VA health care services by Hawaii veterans. For example, it did not consider the extent to which military retirees dually eligible for VA and DOD benefits were using their DOD benefits. More importantly, it did not consider the extent to which veterans in Hawaii had other health care options and therefore did not seek VA care. Hawaii has one of the highest percentages of residents in the country with health insurance, and veterans without health insurance are eight times more likely to use VA hospitals than are veterans with insurance. VA subsequently determined that the Tripler Army Medical Center would not need the additional 27 beds to meet demand.

VA's performance measures for fiscal year 1997 essentially take the most conservative approach for measuring excess VA hospital beds: target occupancy rates. VISNs are expected to close only beds that exceed the need for meeting current demand at an 85-percent occupancy level. In other words, they do not assess the medical appropriateness of the care provided in occupied beds to determine the number of additional beds to be closed and patients shifted to other care settings.

Because VA's performance measures and VERA data give conflicting views of the role such factors as health status, medical practice, and HMO market penetration play in the varying use of VA hospital beds, an assessment of the medically necessary care provided by each facility could serve as a baseline for decision-making. Although researchers have studied the nonacute admissions and days of care at selected VA hospitals, their studies have not reported results for individual hospitals or reviewed all of the hospital beds at a facility. The studies, however, reported wide variation in the numbers of nonacute admissions and days of care provided by the hospitals reviewed. For example, one study reported that nonacute admissions in 50 randomly selected VA hospitals ranged from 25 to 72 percent.

Basing decisions on current utilization data without determining the medical appropriateness of the underlying care overstates the beds VA needs to operate an efficient health care system. Baseline data on the numbers of medically necessary admissions and days of care are important because they both establish targets for efficiency improvements and provide the essential workload data for decisions about hospital closures and service consolidations. Similarly, assessments of the potential to deinstitutionalize psychiatric patients could provide baseline data for determining the future need for psychiatric beds. Such baseline data would essentially determine the extent to which differences in health status or medical practice contribute to higher hospital use rates in some VISNs. The apparent correlation between the rates of VA and community hospital use by census division and VISN suggests that factors other than differences in efficiency contribute to varying hospital use rates. The extent to which variation caused by factors such as differences in medical practice can be reduced is not clear, but the wide variation that still exists in private-sector hospital use rates suggests that standardizing medical practice will be difficult.
On the other hand, the generally lower rates of hospital use in areas with high concentrations of managed care enrollment suggest that, given the right incentives, physicians will change their practice patterns.

VA and the private sector have reacted very differently to declining inpatient workload. In the private sector, hundreds of hospitals have been closed in the last 10 years. VA, however, has not closed any hospitals because of declining use, choosing instead to reduce the number of operating beds or close particular services such as inpatient surgery. This process, however, often leaves VA operating only a small part of a hospital's capacity. Closing beds clearly results in some savings by reducing staffing costs. But, with fewer patients over whom to distribute the fixed costs of operating a facility, the cost per patient treated rises. At some point, it becomes more cost-effective to close a hospital and provide care either through another VA hospital or through contracts with community hospitals.

VA demonstrated the feasibility of closing underused hospitals when it closed the Sepulveda, California, VA medical center in 1995 after it suffered earthquake damage. The workload from the Sepulveda hospital was transferred to the West Los Angeles medical center. VA's OIG had found that the reported numbers of inpatients treated at both Sepulveda and West Los Angeles had declined significantly in the prior 4-year period and that the actual workload may have been even lower because VA had overstated the reported figures. VA does not plan to rebuild the Sepulveda hospital but plans to establish an expanded outpatient clinic there. The OIG concluded that the West Los Angeles medical center had sufficient resources to care for the hospital needs of veterans formerly using the Sepulveda hospital.

The only other hospital VA has closed in the last 25 years is the Martinez, California, medical center. Like Sepulveda, it was closed because of seismic deficiencies, and its workload was transferred to other VA medical centers. Before closing, the Martinez hospital had an ADC of about 240 patients. VA developed plans to replace the hospital as a joint venture with DOD at the David Grant Medical Center at Travis Air Force Base. VA planned to operate 243 beds in the new hospital. Last year, we reported that this construction project was not needed because existing VA and community hospitals could meet VA's current and future need for hospital beds. A congressionally mandated evaluation of veterans' health care in northern California reached the same conclusion. As a result, VA ceased plans to construct new beds at Travis and instead developed plans to use existing VA and community beds, including 55 beds at a former DOD medical facility in Sacramento.

Nonetheless, closing hospitals and contracting for care entail some risk. Allowing veterans to obtain free hospital care in community hospitals closer to their homes could increase demand for VA-supported hospital care, offsetting any savings from contracting. To the extent that new demand is generated by veterans who lack other health care options, contracting could improve the health status of veterans. On the other hand, if the demand is generated mainly by insured veterans seeking a health care option with lower out-of-pocket payments, contracting could increase costs without significantly improving veterans' health status.
VA wants to ensure that closing a VA hospital does not result in veterans losing access to care either through other VA facilities or through community hospitals. Studies performed at VA and public hospitals indicate, however, that when facilities are closed or access is restricted, patients do not always seek alternative sources of care. Researchers have reported that reduced access to care adversely affects some patients' health. For example, one study found that patients previously served by a public hospital "had difficulty finding new health care providers, waited longer for routine medical care, and felt that the availability of hospital services had decreased." A second study reported that among the veterans examined, "the general health perceptions and functional status of discharged patients had worsened when compared with non-discharged patients . . . . Among previously hypertensive patients who were discharged, [the study] found statistically and clinically significant elevations in blood pressure." A third study found that "[a]mong those who stop using the VA [because they were found ineligible for VA outpatient care], many do not receive any medical care or obtain a regular provider within the first 9 months after their release from the VA system."

In addition, our 1992 study of the closure of the Martinez VA medical center found that VA had not developed plans or procedures for referring VA patients to other VA hospitals before it announced the emergency closing of the center. The problems VA encountered after the Martinez hospital closure, while understandable because the hospital closed due to an emergency, highlight the need for planning to ensure that patients affected by future hospital closures can obtain needed hospital services through community or VA hospitals.

In some rural communities, VA may need to maintain a small VA hospital because no community hospitals are nearby. In such cases, VA might improve health care services not only to veterans but to the general community by opening its doors to nonveterans. The expanded workload might lower per patient costs by better using excess capacity and improve quality of care by broadening the type of patients served.

The administration and the Congress will also have to decide what to do with any hospitals that are closed. One option is to convert VA hospitals to provide nursing home or other types of care. Although converting space to provide nursing home care is often cheaper than building a new facility, converting hospital beds to other uses would increase costs. Construction funds would be needed for the conversions, and medical care funds would be needed for the new nursing home residents in formerly empty beds. Nursing home care is a discretionary benefit for all veterans, including those with service-connected disabilities. Such care is, however, one of the main health care needs of the growing elderly population.

Another option would be to convert part of a hospital to another use while leaving the rest of the building as a hospital. Such use, whether related to patient care or not, would reduce the costs of providing hospital care by distributing the building's fixed costs over a larger user base. In addition to converting unused wards to provide nursing home care, space could be leased to public or private health care organizations, veterans service organizations, or others to generate revenues to help offset the high costs of maintaining a small inpatient unit in a large building.
A third option would be to sell or otherwise dispose of the property. Some properties have strong potential for commercial development. Sale of such properties might raise enough revenue to make it profitable for VA to relocate nonhospital services. Other properties, particularly those in rural areas, may not be commercially valuable, and it might be cost-effective to retain such properties for outpatient clinics and other nonhospital services. Still other properties might be made available to state and local governments for use as nursing homes, homeless shelters, or other purposes.

One way to avoid closing VA hospitals would be to increase demand for VA hospital care, which involves two basic approaches. First, VA could compete to increase its market share of the veteran population. Second, VA could use its excess hospital capacity to serve veterans' dependents or other nonveterans. Either approach has significant implications for the communities in which VA hospitals operate. For example, increasing demand for VA hospital care would probably decrease demand for community hospital care unless VA targeted only those users with unmet hospital care needs. By competing with nearby community hospitals for a larger market share, VA could cause the closure of community hospitals. The effect on community hospitals would be greatest if VA increased workload by competing to treat nonveterans. On the other hand, treating nonveterans in VA hospitals could strengthen VA's teaching and research missions by broadening the type of patients treated. This was one of the main reasons Australia opened its veterans hospitals to nonveterans.

Because decisions either to close VA hospitals or to compete directly with private-sector hospitals for a larger share of the declining inpatient demand would significantly affect veterans, VA employees, community hospitals, and the community in general, it is important to involve all affected parties in the decision-making. Neither VA's Prescription for Change nor individual VISN strategic plans establish a process to be followed for closing a VA hospital or define the extent to which VA should involve the community. Nor do they establish a process for assessing the possible effects of decisions to compete for increased market share.

VA hospitals are often among the main employers in the communities in which they operate. Consequently, closing a VA hospital could significantly affect a community's economic health and employment rate. At the same time, an underused community hospital might be able to handle the VA workload if a nearby VA hospital closed. In this case, closing the VA hospital would reduce VA's costs, provide continued care for veterans in the community, and improve the financial viability of the community hospital. Unfortunately, VISN strategic plans have little or no information on the availability or financial status of the community hospitals located near VA hospitals that could inform decisions about closing VA hospitals.

The Congress established a process that was used for closing military bases in 1991, 1993, and 1994. An eight-person commission was established to review closure recommendations that were to be made, in part, on the basis of published criteria. Some of these criteria addressed cost implications to the government, economic and environmental impacts on communities, and the ability of communities' infrastructure to support the proposed changes.
Members of the Congress from districts affected by base closures and realignments had an opportunity to play an active part in the commission's fact-finding and public hearing process. Ultimately, however, the Congress committed to accepting all of the recommendations as a package.

Just as decisions either to close a VA hospital or to compete with community hospitals for patients would affect nearby community hospitals, so too could changes in community hospitals affect the future of VA hospitals. For example, closure of a community hospital could increase demand for VA hospital care. The effect on VA would be greatest if the hospital had a large charity care workload, were the only other hospital in the community, or were located near the VA hospital. Conversely, opening a new community hospital near a VA hospital could decrease demand for VA hospital care. Similarly, new programs or the procurement of new high-tech equipment by community hospitals could lure patients from VA hospitals. VISN strategic plans have little information about the status and plans of community hospitals located near VA hospitals and the possible effects of their actions on VA.

Among the most important changes in response to payment reforms and declining demand for hospital care are changes in how hospitals are managed and in their relationships with other hospitals, other types of health care providers, and health care systems. Specifically, community hospitals are increasingly joining forces with other hospitals to form alliances and networks (horizontally integrating) either locally or nationally; expanding their product lines to include other types of health care services, such as nursing home and home health care, to help generate hospital demand (vertically integrating); hiring outside management to evaluate hospital efficiency and effect needed changes; and improving accounting and information systems to enable managers to identify and eliminate inefficiencies and unprofitable lines of business.

Except for hiring outside management, VA is making the same types of changes as community hospitals. In fact, the VA system was both horizontally and vertically integrated long before the concepts gained favor in the private sector. VA is, however, increasingly integrating its hospitals regionally and expanding the range of services provided, in part by establishing community-based outpatient clinics (CBOC). In addition, VA, like community hospitals, is implementing new accounting and information systems.

VA faces many important issues and challenges in changing the management of its hospitals. For example, in forming alliances and networks, VA faces a choice between limiting networks to VA hospitals and having VA hospitals network with DOD and community hospitals to improve the accessibility of VA-supported care. Similarly, considerable uncertainty exists about the effectiveness of VA's strategy for increasing demand for hospital care by establishing CBOCs far from VA hospitals. Such clinics can improve the accessibility of VA outpatient care but are unlikely to increase demand for VA hospital care. VA also faces a difficult challenge in ensuring that its management information systems can generate the complete and accurate data that VISN and hospital managers need both to identify efficiency savings and to prevent actions that could compromise the quality of or access to VA hospital care.
Finally, VA must decide to what extent it should follow the lead of some community hospitals and test the possibility of contracting for the management of one or more of its hospitals.

Many community hospitals are forming networks and alliances either locally or nationally. Such horizontal integration includes the merger, consolidation, or other informal pooling of resources by two or more hospitals to meet common objectives. Although VA hospitals have been horizontally integrated under common central office management since the inception of the VA health care system, the hospitals have largely functioned independently. As VA restructures its health care system, however, it is increasingly integrating and consolidating management and both patient and nonpatient care services at nearby hospitals.

The term "horizontal integration" includes (1) legal mergers that join hospitals under common ownership, (2) hospitals maintaining separate ownership but forming networks and alliances to lessen duplication of services, and (3) hospitals collaborating to enhance their buying power and lower costs by forming a purchasing cooperative. Although alliances and networks are often formed locally or regionally, legal mergers often involve the formation of national hospital chains such as Columbia/HCA.

Horizontal integration is intended to allow hospitals to gain control over markets by working with potential competitors; lessen duplication of services by sharing such services as information systems and laboratory facilities with other nearby hospitals; reduce administrative costs; reduce procurement costs by obtaining volume discounts; and better market their services to employers, managed care plans, and other purchasers. Horizontal integration is expected to allow hospitals to contain overhead costs, provide more efficient patient care, and increase opportunities for managed care contracting. Networks and alliances may also help hospitals market their services by offering employers and insurers "one-stop shopping," minimizing purchasers' transaction costs. In addition, hospital networks offer purchasers stability: they can expect access to the same providers each year. Horizontal integration can help hospitals' marketing efforts by reducing purchasers' uncertainties about hospitals' quality of care, the accessibility of hospital care for their beneficiaries, and the availability of a wide range of medical technology.

Horizontal integration has increased significantly since 1990, when about 45 percent of community hospitals belonged to some kind of multihospital system. Between 1990 and 1993, 71 hospital mergers took place. In 1994 alone, however, more than 650 hospitals were involved in mergers or acquisitions. This trend continued in 1995, when 447, or about 1 out of every 12 (8.5 percent), of the approximately 5,200 community hospitals nationwide were involved in mergers or acquisitions. In addition, four large corporate deals increased the total number of hospitals involved in mergers to over 900, or about 1 in 6 community hospitals. Eighty-one percent of 1,200 acute-care hospital executives surveyed by Deloitte & Touche in 1994 predicted that their hospitals would join a network within 5 years; according to these executives, their hospitals would join networks to share such services as information systems and laboratory facilities in order to remain competitive and reduce costs. Horizontal integration has involved hospitals with different religious affiliations and profit statuses.
For example, such mergers have taken place in Denver. Similarly, many community not-for-profit hospitals nationwide are converting to for-profit status as they join or are acquired by chains.

VA has been a horizontally integrated hospital system from its inception. Most of its hospitals, however, have operated independently, often competing with other VA hospitals to add new services and equipment while disregarding overall need within either the VA system or the community. By establishing VISNs, however, VA is decentralizing system management. VA is both integrating the administrative management and operations of nearby medical centers to increase efficiency and consolidating services at fewer locations. In addition, some VISNs are beginning to review more closely their role in the community.

In March 1995, VA submitted to the Congress a plan, its Vision for Change, to restructure its health care system from a centralized system with four regional offices to a decentralized system with 22 VISNs. The Congress approved the plan on September 5, 1995. According to Vision for Change, a VISN is designed to be the basic budgetary and planning unit of the veteran health care system. It is intended to reflect the Veterans Health Administration's (VHA) natural patient referral patterns; the numbers of beneficiaries and facilities needed to support and provide primary, secondary, and tertiary care; and, to a lesser extent, political jurisdictional boundaries such as state borders. Under the VISN model, health care is intended to be provided through strategic alliances among VA medical centers, with other government providers, and through other such relationships.

Facility integrations are a critical part of VA's nationwide strategy to restructure field operations. By mid-1997, VA had approved the management integration of VA facilities in 18 geographic areas. A task force VA had established in 1994 to examine ways to achieve efficiencies in the VA health care system had identified about 30 potential management consolidations of geographically close medical centers with complementary missions.

The Under Secretary for Health's March 1996 Prescription for Change identified a series of actions to restructure VA facilities or their management to reduce administrative costs and increase resources devoted to direct patient care. In addition to completing the ongoing facility integrations, the Prescription outlined actions to support additional facility management mergers and clinical or support service consolidations; promulgate screening criteria for potentially realigning facilities and seek opportunities to restructure processes to best align resources; change personnel policies to give VISNs the authority to tailor their staffing; develop a network business plan, including a 1-year tactical plan, a 2- to 3-year strategic plan, and 5-year strategic targets; and develop a systemwide business plan based on input from the VISN plans.

VA has implemented or is implementing many actions outlined in the Prescription. For example, since the 8 initial management integrations, central office has approved 11 additional integrations. Similarly, in September 1995, VA established the "Criteria for Potential Realignment [CPR] of VHA Facilities and Programs," also referred to as the "CPR List." Other actions VA has completed include delegating to field managers the authority to conduct (1) reductions-in-force for title 5 personnel and (2) staffing adjustments for title 38 personnel.
Finally, VISNs submitted their initial strategic plans to VA's central office in fall 1996, and they were included in the VHA section of the overall strategic plan.

Many of the VISN strategic plans address consolidating specific services:

VISN 3 (Bronx) plans to consolidate many of the laboratory services now provided separately by the Lyons and East Orange medical centers. It plans to similarly consolidate services at the Bronx and Castle Point medical centers.

VISN 5 (Baltimore) consolidated all cardiac surgery at the Washington, D.C., VA medical center and all neurosurgery at the Baltimore medical center.

VISN 7 (Atlanta) plans to consolidate surgical services now provided at both the Montgomery and Tuskegee medical centers at Montgomery. We are now reviewing VA's efforts to integrate the two facilities.

VISN 8 (Bay Pines) contracted for a study of the feasibility of integrating clinical programs, support services, and the management of its Lake City and Gainesville medical centers. In addition, VISN 8 (Bay Pines) consolidated laundry services for the Miami and West Palm Beach medical centers at West Palm Beach to provide additional outpatient care space at the Miami medical center. Similarly, the VISN consolidated warehousing for the Tampa and Bay Pines medical centers at Bay Pines to make additional outpatient care space available at the Tampa medical center. The network may also consolidate food service operations for the two medical centers.

VISN 10 (Cincinnati) is considering consolidating five laboratories into one or two to attain economies of scale.

VISN 12 (Chicago) plans to integrate and consolidate clinical and support services where such actions will yield savings and improve patient care. For example, it has task groups exploring the feasibility of consolidating cardiac surgery and neurosurgery programs.

Two VISNs' business plans indicated that they have no plans to consolidate facilities because of the distances between their facilities. For example, the VISN 6 (Durham) plan indicated that all of its medical centers are separated by distances requiring from 1 to 5 hours of driving time. Similarly, the VISN 9 (Nashville) plan indicated that the network is considering no facility consolidations because of the geographic dispersion and clinical mix of the network's facilities.

In addition to focusing on integrating and consolidating VA facilities, VA's Prescription for Change calls for establishing strategic partnerships with other government health care providers and the private sector through the use of sharing agreements. Among other things, the CPR List provides guidance on contracting for services from community hospitals rather than providing them directly. Neither the CPR List nor the Prescription, however, specifically addresses the possible integration of VA facilities with local networks or alliances with non-VA hospitals. Eleven VISN strategic plans mention efforts to integrate VA facilities with community providers or contract for community hospital care:

Several alliances of community hospitals have approached VISN 3 (Bronx) about joining them to form a single provider network for veterans and their families. The VISN's plan, however, does not indicate whether the network expects to pursue such an alliance.

VISN 13's (Minneapolis) plan indicated that its four Minnesota medical centers hope to create a Minnesota VA Health Plan that will contract with local community health care providers to offer primary and emergency care for eligible enrolled veterans.
VISN 14 (Omaha) is considering closing the inpatient hospital medical care and intermediate care units at Grand Island and Lincoln and pursuing contracts with community hospitals to provide acute inpatient care to VA users requiring such care.

VISN 19's (Denver) Cheyenne medical center plans to close its surgical unit because of low utilization and contract for surgical care from a community hospital.

VISN 20's (Portland) plan discussed its goal of making the network a health care organization providing services either in the network's own facilities or in contract facilities.

Other VISN strategic plans, however, mentioned little or nothing about integrating VA facilities with non-VA hospitals in their community. Thirteen VISN plans mentioned sharing agreements with other government facilities and medical school affiliates.

Although horizontal integration is expected to allow hospitals to achieve service efficiencies, little systematic evidence exists to support this view. Studies of California's local hospital systems in the late 1980s and early 1990s challenged the view that horizontally integrated hospitals produce efficiencies. In a cross-sectional analysis examining high-technology services, cost per admission, administrative costs, and price and cost margins, researchers concluded that hospitals' benefits from integration derive from marketing efficiencies rather than from production efficiencies. Specifically, researchers found that multihospital systems do not consistently reduce the number of high-technology services offered; hospitals in multihospital systems do not generally have lower patient care costs than their unintegrated counterparts; integrated systems are more likely than their unintegrated counterparts to have unusually high administrative costs; and hospital systems still may be profitable if they can generate marketing benefits. Hospitals, these researchers concluded, may also prosper if associated with a teaching hospital, a religious affiliation, or a national chain.

Many community hospitals are adding product lines by establishing home health care and expanding outpatient care to increase hospital workload and efficiency and improve marketing. Although such vertical integration is a more recent development in the private sector, most VA hospitals have been part of vertically integrated medical centers for years. VA is, however, further expanding the availability of some services, such as outpatient care, to improve access and increase hospital demand.

Under a vertically integrated system, patients may typically be treated as outpatients (prehospital care), admitted to an acute inpatient facility for services that cannot be provided on an outpatient basis, and then transferred to a nursing home or home health care agency (posthospital care). Operating an outpatient clinic allows hospitals to provide services in a lower cost setting and respond to potential demand for inpatient services. Similarly, operating a nursing home or home health agency can make it easier for hospitals to discharge patients from high-cost acute beds by providing them postacute beds that they control. Such strategies can be particularly important under hospital prospective payment systems because the hospital may bill separately for outpatient services and home health and nursing home care that would have been included in the fixed payment if provided in the hospital.

Vertical integration may involve a single hospital setting up an outpatient clinic.
It may also involve a single hospital converting to a health care system, as Detroit's Henry Ford Hospital did. In 1971, 210 physicians and one outpatient care clinic were affiliated with the Henry Ford Hospital. Supported by a grant from the Ford Foundation, the system had grown by 1980 to include a 350-physician group practice, five medical centers, and an education and research center. After implementing a 10-year strategic plan, the Henry Ford Health Care System grew to include 35 outpatient care centers, an 800-member multispecialty physician group, a 450,000-member HMO, a 903-bed tertiary care hospital, a 100-bed psychiatric facility, a chemical dependency program, two nursing homes, and home health services.

By providing a continuum of care, hospitals expect to increase profits, control patient flow, and achieve maximum market penetration. Providing a continuum of care allows hospitals to compete for inpatient referrals through the primary sources of admissions to community hospitals: community-based physicians, provider networks, and managed care systems. Moreover, by offering more services, hospitals expect to compete more effectively for contracts with physician networks and managed care systems.

A 1995 survey of over 500 hospital executives found that most viewed vertical integration as offering the best chance for survival over the next decade. About 63 percent of the executives said that expanding external services (such as home health care and community outreach programs) offered the most hope for hospital survival, compared with just 30 percent of executives in 1990. Meanwhile, the executives were less likely to view expanding hospital-based outpatient services as important to hospital survival (44 percent in 1990 compared with 28 percent in 1995). Executives' views of the benefits of offering specialized services as a survival strategy changed dramatically from 1990 to 1995. Of the hospital executives surveyed in 1990, 20 percent viewed such specialization as vital to survival. In 1995, however, only 8 percent viewed offering specialized services as an important survival strategy.

Vertical integration has greatly increased since the early 1970s. According to the American Hospital Association, between 1972 and 1990, the percentage of acute care hospitals offering home health services increased from 6.2 to 35.5 percent, the percentage operating nursing homes increased from 8.6 to 21.0 percent, and the percentage operating an outpatient clinic increased from 27.5 to 85.2 percent.

As a vertically integrated system, the VA health care system has for many years offered, in addition to hospital care, such services as outpatient, nursing home, domiciliary, and hospital-based home care. In 1996, VA operated, in addition to its 173 hospitals, 398 outpatient clinics, 133 nursing homes, and 40 domiciliaries. It also operated several special-emphasis programs focused on the health care needs of certain veterans, such as those who are homeless and those suffering from post-traumatic stress disorder (PTSD), substance abuse, blindness, acquired immunodeficiency syndrome, or spinal cord injuries. Through these facilities and programs, VA has offered a continuum of care that, even today, community hospitals do not adequately offer.

Among the objectives cited in VA's Prescription for Change is increasing the accessibility of VA services. VA has focused these efforts, however, on developing alternatives to hospital care, actions that would tend to reduce demand for VA hospital care rather than generate new demand.
To improve veterans' access to VA health care, VHA, in February 1995, encouraged its facilities to establish more "access points," now known as CBOCs. VA has opened, or developed plans to open, 86 CBOCs during the past 3 years. Although VA's Prescription for Change indicates that VA was considering opening approximately 275 CBOCs, VA has not determined the exact number of CBOCs it will open. Virtually all VISN strategic plans have indicated that networks will establish additional CBOCs.

In addition to establishing CBOCs, VISN strategic plans have identified other initiatives to expand and reinforce the continuum of care offered by the VA health care system:

VISN 11 (Ann Arbor) includes community support services in its continuum of care. In addition, the VISN has worked with neighboring VISNs 10 (Cincinnati) and 12 (Chicago) to develop services at state veterans' homes in those VISNs.

VISN 12 (Chicago) plans to expand its continuum of clinical service settings so that patients' care can be provided in the most cost-effective and clinically appropriate setting. Specifically, the VISN is studying (1) establishing CBOCs and (2) shifting substance abuse and PTSD care to more cost-effective outpatient and residential settings.

Researchers, providers, and analysts give vertical integration mixed reviews. Research shows that community hospitals that have established primary care clinics have increased their market share of inpatient services. Similarly, a study of California hospitals found that offering a continuum of care increased revenues, even after adjusting for inflation. Adding more community-based physicians to the medical staff, providing more outpatient care, and expanding outpatient surgery services increased hospital revenues between 1983 and 1990. Prehospital strategies, such as adding hospital-based outpatient care and surgery, contributed greatly to increasing revenue or at least to slowing declines in Medicare revenues. Posthospital strategies, such as setting up home health agencies and nursing homes, did not increase revenue as much.

In reviewing the vertically integrated Henry Ford Health Care System, the Pew Health Professions Commission concluded that integrated health care systems have the potential to align health care delivery and financing to help improve care, increase patient and customer satisfaction, and reduce or hold costs to a minimum. Others, however, question the benefits of vertical integration. For example, one futurist has warned of the inherent discord in vertically integrated systems. He has noted that hospitals, health plans, and doctors continue to have conflicting motives under our health care system. In his view, integrated health care systems do not create proper incentives: because they tend to pay salaries to doctors, they destroy physicians' incentives to share financial risk. And, in his opinion, hospitals that vertically integrate are more concerned with filling beds and increasing revenue than with improving care.

Concerns have also been raised about vertical integration at the local level. For example, the merger between a 250-doctor clinic and a nearby hospital failed after 4 years. The clinic expected the merger to help it access capital, reduce overhead, and tap into managed care contracts. Instead, according to the clinic's vice president, the clinic was in ruin after 2 years; all of its midlevel administrators had left, its administrative costs had doubled, and the clinic had not benefited from managed care contracts.
The vice president questioned whether physicians and hospitals can truly align their incentives.

Researchers also question whether vertical integration increases rather than decreases health care costs. For example, Robinson noted that costs are likely to be higher for hospital-owned outpatient, home health, and nursing home services than for comparable nonhospital providers. He noted that hospital-owned facilities tend to pay higher wage rates for nurses, technicians, clerical workers, and other staff than independent physician offices, nursing homes, and home health agencies pay. Finally, he noted that hospitals' practices tend to be more intensive than those of independent nursing homes and physician offices; hospitals therefore have higher costs, even after accounting for wages and other costs. In addition, Robinson noted that a vertically integrated system allows potential for opportunistic cost and revenue accounting because costs may be shifted among inpatient, outpatient, and postacute care divisions. In other words, one segment of a vertically integrated system may be used to subsidize other segments.

Many community hospitals have used outside management expertise to help improve efficiency and profitability. Although VA has not contracted out the management of any of its hospitals, it has used outside expertise to manage the VA system.

Contract management is an arrangement in which a hospital's board of trustees retains an outside organization to manage the hospital. The contractor provides an administrator, usually along with an entire management team, to oversee daily hospital operations. This arrangement contrasts with that in which a board of trustees hires an administrator or chief executive officer directly. Contract management is intended to improve the financial performance of hospitals facing possible closure. Contract management is expected to provide hospitals (1) greater management expertise, (2) easier access to capital markets, and (3) lower procurement costs. Contract management can produce lower procurement costs because of the economies of scale provided by joint purchasing with other hospitals managed by the same contractor.

Contractors manage over 10 percent of the nation's community hospitals. Contractors managed 10.4 percent of community hospitals in 1982, and by 1987, this share had grown to 12.4 percent. A representative from the Agency for Health Care Policy and Research (AHCPR), which developed these estimates, indicated that the organization has not developed more recent estimates but believes that contract management is growing. Contract-managed hospitals tend to be small, rural hospitals with fewer technology-intensive services.

Contract-managed and noncontract-managed hospitals have similar case mixes but appear to have greatly differing financial performances. With 2 or more consecutive years of control, contract managers have been able to reduce costs to below those of noncontract-managed hospitals and to substantially improve their hospitals' capital structure. For example, the salaries and benefits cost per admission for hospitals that had been contract managed for 2 years or more was $2,089, compared with $2,459 for similar hospitals not contract managed. Similarly, the ratio of assets to liabilities for the contract-managed hospitals studied improved from 2.391 after 1 year to 2.897 after 2 or more years of contract management, slightly exceeding the performance of noncontract-managed hospitals.
VA has not used contract management for any of its hospitals. As previously discussed, before October 1996 VA was not generally authorized to contract for direct patient care services or services incident to direct patient care. VA officials were not aware of VA ever having considered contract management or whether contracting restrictions would have prohibited such contracts. Neither VA's Vision for Change nor its Prescription for Change discussed the hiring of contract management, nor do any of the VISN business plans directly address the hiring of such management. The VISN 12 (Chicago) plan, however, indicates that the VISN will, if the need arises, recruit management staff with the skills and expertise needed to help accomplish its mission.

Although VA has not contracted for the management of entire hospitals, it has used management expertise from the private sector in managing the veterans health care system, starting at the top with the Under Secretary for Health. The Under Secretary's prior experience included running the California Medicaid program (Medi-Cal), the nation's largest. Similarly, VA selected many VISN directors from outside the VA system.

Hospitals and health plans are spending billions of dollars on health care information systems. As in the private sector, VA is developing and implementing both information and financial management systems to provide the data it needs to make sound management decisions.

Decision support systems (DSS) provide managers with information on business operations to ease decision-making. In the health care industry, these systems provide managers and clinicians with data on patterns of patient care and patient health outcomes, which can then be used to analyze resource utilization and the cost of providing health care services. Several vendors offer various types of DSSs for the health care industry.

Administrators and physicians often have limited information to support efforts to manage product lines and the process of clinical care. Existing information systems usually support only one portion of the health care system, such as clinical laboratories or financial reporting. No major integration of financial and clinical data systems has taken place. Research on hospital information systems indicates that better integrated financial and clinical information could provide more efficient and effective decision support to both administrators and physicians. For example, the clinical data in such a system could support the development and monitoring of practice guidelines and critical pathways. Although cost savings have eluded those that have invested in integrated clinical and financial data systems, such investments have improved provider productivity, medical outcomes, and patient satisfaction.

DSSs can compute the costs of services provided to each patient by combining patient-based information on services provided with financial information on the costs and revenue associated with those services. For example, a private-sector hospital performing cataract surgery collects information on the services provided to each patient, including the laboratory tests performed and the medications supplied, through its billing system. The hospital then collects revenue and cost information through its accounting systems, incorporating the collections from insurance companies and other applicable payers, such as Medicare, and expenditures for utilities and equipment.
Using a DSS to combine the clinical and financial information from the billing and accounting systems, the hospital can, for example, (1) calculate the specific cost of providing cataract surgery to a patient; (2) compare revenue received to costs incurred to determine profitability of this type of service; (3) compare costs incurred for different physicians and for surgery performed at different locations; (4) evaluate patient outcomes; and (5) analyze ways to increase the quality of service, reduce costs, or increase profitability. DSSs can also help compare patient care with predefined health care standards.

DSSs have improved productivity and lowered costs. Respondents to a survey published in 1989 also cited service improvement as a major benefit of their systems but seldom mentioned improved quality of care as an additional benefit. A 1992 survey of health care chief executive officers (CEO) found that they viewed DSSs as most critical in supporting cost-control efforts (82 percent), physician-hospital relations (78 percent), quality improvement (66 percent), and managed care (65 percent). For each of these areas, however, 50 percent or fewer of the CEOs were satisfied with existing DSSs. The CEOs viewed DSSs' financial reporting capabilities most favorably; over 70 percent were satisfied with existing systems. DSSs are viewed as particularly important as the nation moves increasingly toward managed care, which requires hospitals to integrate their business and clinical operations. For example, an information system for a managed-care system might include the capability to (1) analyze capitation rates, (2) process claims, (3) determine eligibility, (4) manage health care utilization, and (5) credential providers. By 1990, more than 200 vendors were selling DSSs to hospitals. These systems included support for some or all functions of financial planning and modeling, diagnosis-related groups, cost accounting, facility utilization, and strategic marketing.

A study by Sheldon Dorenfest Associates found that health care information system spending totaled $8.7 billion in 1995 and would probably reach $11 billion in 1997. In 1996, however, the Healthcare Financial Management Association said that only one in five integrated delivery systems had computerized planning systems that profiled doctors, projected demand, measured outcomes, or tracked patients electronically. In addition, a study by Abt Associates for the Healthcare Financial Management Association found that no integrated U.S. health care delivery network had truly integrated its clinical and financial systems. Networks that have invested money in developing systems have done so without expecting, or getting, cost savings, according to the study. Although savings are elusive, an Abt senior consultant found improvements in provider productivity, medical outcomes, and patient satisfaction. The study cited shorter waiting times resulting from automatic scheduling systems as one example of the benefits of information systems.

Like the private sector, VA is working to improve its cost and utilization data. Its information and accounting systems cannot provide detailed information on the specific services VA provides or the cost of those services. VA's efforts include (1) implementing a DSS, (2) developing a National Patient Care Database, (3) developing a computerized patient medical record, and (4) implementing a new financial management system.
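To make concrete the kind of computation a DSS performs, the sketch below joins the billing and accounting records from the cataract surgery example above; all record structures, field names, and dollar figures are hypothetical.

```python
# Hypothetical sketch of a DSS combining clinical (billing) and financial
# (accounting) data to cost one episode of care. All structures and
# figures are illustrative only.

# Services captured through the billing system for one patient.
services = [
    {"patient": "P001", "service": "operating room", "units": 1},
    {"patient": "P001", "service": "laboratory test", "units": 3},
    {"patient": "P001", "service": "medication", "units": 2},
]

# Unit costs allocated from the accounting system (personnel, supplies,
# and fixed overhead such as utilities and equipment).
unit_cost = {"operating room": 900.00, "laboratory test": 40.00, "medication": 25.00}

# Revenue collected from insurers and other payers for the episode.
revenue = {"P001": 1_400.00}

episode_cost = sum(unit_cost[s["service"]] * s["units"] for s in services)
margin = revenue["P001"] - episode_cost

print(f"Cost of cataract surgery for P001: ${episode_cost:,.2f}")   # $1,070.00
print(f"Revenue: ${revenue['P001']:,.2f}; margin: ${margin:,.2f}")  # $1,400.00; $330.00
```

The same join, extended across patients, physicians, and sites, supports the comparisons listed above, such as profitability by service line or cost differences among surgeons and locations.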
Since February 1994, VA has been phasing in at its facilities a new DSS that uses commercially available software to help provide managers data on patterns of care as well as their resource and cost implications. This DSS fundamentally differs from existing VA databases because it organizes each patient's selected resource utilization and clinical outcome data in a longitudinal format. This, according to VA, allows the Department to evaluate patterns of care for a user-defined patient population for an extended time period beyond a specific episode or care site. The DSS receives input from diverse data systems and consistently allocates specific costs, including personnel, supplies, and fixed overhead, to each patient service or procedure. The DSS, by combining patterns of patient resource utilization (cost) information and patient outcome (quality) information, reflects the value of patient care delivered by VA.

As of March 1996, 68 VA medical centers were in various stages of implementing DSS. VA's Prescription for Change called for 30 additional centers to be added to the DSS every 6 months until implementation is complete. Subsequently, VA accelerated DSS implementation, and the remaining centers began implementing DSS in March 1997. Consistent with guidance provided in the Prescription, more than half the VISN strategic plans address DSS implementation. Only VISN 11's (Ann Arbor) business plan, however, identifies efforts to ensure the integrity and validity of data entered in DSS. VISN 11's (Ann Arbor) and VISN 12's (Chicago) plans also have more detailed information than other plans on potential uses of DSS data for comparative analyses. Many of the VISNs' plans indicate that networks are developing separate information system plans.

VA's Prescription for Change also called for establishing linkages between VA data systems and other public health care programs such as Medicare and Medicaid. It noted that VHA participated in the National Committee on Vital and Health Statistics' Core Data Elements Project sponsored by the National Center for Health Statistics.

VA is also developing a National Patient Care Database. Several systems are now used to gather clinical workload data. For example, VA has separate databases for inpatient (patient treatment file) and outpatient care (outpatient file) data. Because each database captures only part of a patient's care, the information available on the services provided to an individual patient is limited. As a result, the current systems do not provide the data VA needs to support broader management, resource allocation, and policy decisions. VA's current outpatient file is inadequate to meet VA's needs for clinical and management information. The outpatient file has information on specific clinic stops but not on the diagnoses made, services provided, or physicians or other clinicians providing services. In addition, the data VA collects are not compatible with those collected by the Health Care Financing Administration (HCFA) or other health care programs, making it difficult to compare VA with other programs in efficiency or quality. To address these problems, the Under Secretary for Health required VA facilities to gather, beginning in October 1996, certain information to receive workload "credit" for outpatient visits. VA developed a new encounter form to gather data on patient demographics, diagnoses, procedures performed, and providers.
In completing these forms, VA facilities must use the same coding and terminology typically used by HCFA and the private sector, including diagnostic and procedure codes. The National Patient Care Database is expected to eliminate fragmented and overlapping data systems, resolve inconsistencies in current data systems, implement standards-based codes and data sets, move the focus from the program to the patient, and improve the timeliness of data. VA is developing the National Patient Care Database in two phases. In 1997, it collected outpatient care data; in 1998, it began adding inpatient data.

In addition, according to VA's Prescription, VHA plans to work more with the National Library of Medicine's electronic medical record system cooperative project to conduct large-scale testing of vocabularies for computer-based patient records. Similarly, many VISN strategic plans identify developing computerized patient records as a goal. Finally, the Prescription called for the design of a management information system that would track and link care to individual caregivers throughout the VA system. VA established a National Provider Index that identifies caregivers and links them to patient care. The information is being incorporated into the DSS and National Patient Care Database.

VA replaced its former accounting system with the new Financial Management System (FMS), using upgraded technology and the governmentwide standard general ledger structure. According to VA officials, FMS is a tool to help VA improve its financial management and internal controls.

Many issues remain to be addressed concerning VA's efforts to change its hospitals' management and their relationships with other providers. These issues involve horizontal and vertical integration as well as contract management.

Traditionally, almost all veterans receiving hospital care through the VA system have been expected to use VA-operated facilities. In establishing its 22 VISNs, VA horizontally integrated into networks with 4 to 11 VA hospitals in broad geographic areas. VA therefore expects veterans to be able to obtain virtually any health care service through referral to a network hospital. VA's hospitals and clinics, however, are often located hundreds of miles apart, making referrals between them problematic. Horizontal integration in the private sector usually involves referral networks of hospitals and other nearby facilities. The referral networks established by VISNs, however, often cover vast distances. VISN 5 (Baltimore), one of the smaller VISNs geographically, includes hospitals in Washington, D.C.; Martinsburg, West Virginia; and Baltimore, Maryland (a total of three hospitals). (See fig. 7.2.) The distances between Martinsburg and Washington, D.C., (about 90 miles) and Martinsburg and Baltimore (about 95 miles) raise questions about the extent to which patients needing services not available at the Martinsburg hospital are expected to obtain those services from VA hospitals in Washington, D.C., or Baltimore. Such referrals are necessary if community hospitals in Martinsburg or nearby cities such as Hagerstown, Maryland, cannot provide the services. But, for services available from community hospitals, referral to a distant VA medical center may create unnecessary hardships for veterans and their families.
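The choice between contracting locally and referring to a distant VA hospital is, at bottom, a cost comparison. The sketch below illustrates one way to frame it; the 90-mile one-way distance is the Martinsburg-to-Washington figure cited above, while the procedure cost, contract price, mileage rate, and lodging figures are hypothetical.

```python
# Hypothetical comparison: contract with a local community hospital versus
# refer the veteran to a distant VA hospital. Only the roughly 90-mile
# Martinsburg-to-Washington distance comes from the text; all other
# figures are illustrative.

def referral_cost(procedure_cost: float, miles_one_way: float,
                  mileage_rate: float = 0.30, lodging_nights: int = 1,
                  lodging_rate: float = 75.00) -> float:
    """Cost of a distant VA referral, including round-trip travel and
    lodging for the veteran or family."""
    travel = 2 * miles_one_way * mileage_rate
    return procedure_cost + travel + lodging_nights * lodging_rate

va_referral = referral_cost(procedure_cost=2_500.00, miles_one_way=90)  # $2,629.00
local_contract = 2_800.00  # hypothetical contract price at a community hospital

cheaper = "local contract" if local_contract < va_referral else "VA referral"
print(f"VA referral: ${va_referral:,.2f}; local contract: ${local_contract:,.2f}; "
      f"cheaper option: {cheaper}")
```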
VISN strategic plans, however, have little information on the community hospital services available and the relative cost of providing services through contracts with such hospitals compared with the cost of referring a patient to the nearest VA hospital that can offer the services (including any transportation and lodging costs). By integrating its hospitals with non-VA hospitals in their communities, VISNs might be able to establish referral patterns comparable with those of community hospitals. For example, the Washington, D.C., medical center could form a referral network with the four Washington area military hospitals to improve the two systems' beneficiaries' access to hospital services. (See fig. 7.3.) VA hospitals would need to address the following issues before joining a local network:
- To what extent would the network help increase demand for VA-supported hospital care?
- Can the VA hospital support additional workload without compromising services for veterans?
- Would VA be able to generate enough revenues from selling services to military and community hospitals in the network to offset the increased contracting costs?
- To what extent would current VA hospital users shift their use to other, more convenient, military or community hospitals in the network?
- To what extent can VA reach agreements to consolidate specialized services in fewer locations to increase efficiency and quality?

Another potential advantage of VA hospitals joining local networks would be VA's increased consideration of the health care capacity and needs of local communities in its planning. For example, VA could reach agreements with community hospitals about the proliferation of high-technology equipment. Similarly, in placing expensive new equipment in the VA system, VA could consider the extent to which the equipment could serve the community as well as veterans. VISN strategic plans, however, generally do not address the health care capacity and needs of the communities with VA hospitals.

One approach that might increase veterans' access to more convenient community or military hospitals but preserve veterans' incentives to use VA hospitals would be to impose higher veteran cost sharing for services obtained from non-VA hospitals. In effect, VA would be establishing a point-of-service plan, allowing veterans to obtain care from any willing provider but paying for more of the cost of the care if it is obtained from a preferred provider (a VA hospital) or participating provider (other network hospital).

As of July 1997, VA had initiated integrations in 18 geographic areas, with five reported as completed. VA indicates the integrations are having positive results. VA has, however, had difficulties planning and implementing some of the integrations. Our ongoing work has revealed areas where improvements could be made. For example, VA generally makes integration decisions incrementally, that is, on a service-by-service basis throughout the process instead of on the basis of decisions affecting all activities in integrated facilities. Also, planning and implementation activities often take place simultaneously, which precludes VA from considering the collective effect of such changes on the integration. In addition, stakeholders, though involved at varying times in different ways, do not always receive sufficient information at key decision points. Our work suggests that as VA considers ways to improve its facility integration process, several actions might facilitate better results.
These include adopting a more comprehensive planning approach, completing planning before implementing changes, improving the timeliness and effectiveness of communications with stakeholders, and using a more independent planning approach.

Considerable uncertainty surrounds the potential effects of VA's vertical integration efforts on future demand for VA hospital care. Because VA has been a vertically integrated health care system for many years, it may have already reaped many of vertical integration's benefits. For example, community hospitals expect to retain or increase demand for hospital care by operating nursing homes and home health agencies. VA, however, has both operated nursing homes and contracted for nursing home care in the private sector since the 1960s. Transfers between these nursing homes and VA hospitals have long generated a portion of VA's hospital demand. One way for VA to increase hospital demand would be to expand its nursing home program, either by establishing additional VA nursing homes or by contracting with community nursing homes. Such actions would, however, require significant new VA resources to only slightly increase hospital admissions. Changes would also need to be made in the financing of VA nursing home care: veteran cost sharing provides less than 1 percent of the cost of providing VA-supported nursing home care. On the other hand, expanding the availability of nursing home care would help bridge the gap in health care coverage for elderly veterans.

The second major issue concerning vertical integration is the extent to which CBOCs may generate new demand for VA hospital care. Many CBOCs are located far (often over 50 miles) from the nearest VA facility. CBOC physicians are expected to refer veterans needing specialized services or hospital care to a VA hospital. Distance from a VA hospital, however, significantly affects the likelihood that veterans will seek care from a VA facility. The rate at which veterans use VA hospitals declines significantly at distances of over 5 miles from a VA facility. Thus, the extent to which CBOCs serve veterans who have other health care options through public or private health insurance further reduces the likelihood of VA hospital use.

Because VA's contracting authority did not expand until October 1996, it is too soon to determine its effect on demand for VA hospital care. Use of the authority to contract for hospital and specialized services from private-sector providers to improve veterans' access to hospital care could further reduce demand for VA hospital care. Our work on other countries' veterans health care systems found that use of veterans' hospitals declined once veterans gained access to community hospitals through national health insurance or changes in the veterans' program to authorize contract care. As discussed, one option that could limit the effect of giving veterans greater access to community hospitals closer to their homes would be to require higher veteran cost sharing for care from non-VA hospitals.

One change that community (but not VA) hospitals have tried is contracting for outside hospital management to restructure operations and improve efficiency. VA could test contract management under several scenarios. For example, because contract management appears to have succeeded most in small, rural community hospitals, VA could work with the Congress to develop a pilot program to test contract management at one or more of its small rural hospitals.
On the other hand, it could try contract management in hospitals facing significant management challenges. Similarly, VA could use outside management to plan and implement facility integrations. In designing such demonstration projects, however, VA would need to establish evaluation plans to determine the effects both on efficiency and quality of care. In other words, it would need to ensure that the contractor did not increase efficiency by compromising quality of care. While the DSS may significantly improve VA’s ability to manage its health care operations, the ultimate usefulness of the system will depend not on the software but on the completeness and accuracy of the data entering the system. If the DSS cannot provide reliable information, VA facilities and VISNs will either continue to make decisions on the basis of unreliable information or spend valuable time developing their own data systems. Two years ago, we recommended that VA develop a strategy to identify data needed to support decision-making and ensure that these data are complete, accurate, consistent, and reconciled monthly. VA’s Prescription for Change advocated swift implementation of the DSS but did not target any actions to ensure that the data and systems entering the DSS could provide complete and accurate data. Similarly, VISN strategic plans generally do not address plans to ensure the completeness and accuracy of data entering the DSS and other data systems. As a result, it is not clear whether the DSS, FMS, and other data systems will generate the reliable data VA needs to support management decisions. VA’s facility integrations create additional challenges for VA data systems. For example, decentralized hospital computer programs at VA facilities have been largely locally developed and may not be compatible with other facilities’ systems. Similarly, VA will have to resolve facilities’ differences in data coding and entry. Both VA and community hospitals face the challenge of reprogramming their computers to recognize the next century. Most computer software in use today is limited to a two-digit date field such as “97” for 1997. Thus, this software will not be able to distinguish between the years 1900 and 2000 because both will be designated “00.” VA’s draft strategic plan states that VA’s objective is to ensure that its information systems will provide uninterrupted service to support VA medical care in the year 2000. The plan includes a performance goal that full implementation and testing of compliant software (that is, software capable of processing dates beyond 1999) will be completed by December 1999. Personnel accounts for over 40 percent of community hospital expenditures. Hospitals are the major employers of nursing staff, including registered nurses, licensed practical nurses, and nursing assistants. Throughout the 1980s, the use of nursing staff, particularly registered nurses, increased steadily, raising costs. By 1992, registered nurses accounted for about 25 percent of hospital employment. The increased demand for and limited supply of registered nurses led to significant wage increases, raising operating costs further. Because personnel accounts for such a large part of hospital costs, any effort to reduce costs must focus on effectively using health care workers. Community hospitals often change their basic work processes to more efficiently use personnel resources. 
For example, community hospitals are
- contracting for patient and nonpatient care services when such contracting is less costly than providing the services through the hospital's staff;
- using part-time and temporary nurses and other health care professionals to more flexibly meet changing workloads and patient mix;
- cross-training personnel to perform multiple jobs to more efficiently use staff;
- developing nurse extender programs to allow nurses to devote more time to direct patient care; and
- restructuring care delivery around patient-centered teams to increase efficiency and patient satisfaction.

In the past, VA has not focused as much as the private sector on work transformation, in part because of limitations on its authority to contract for patient care services. VA's Prescription for Change, however, placed increased emphasis on such concepts as cross-training and patient-centered care. Veterans Integrated Service Network (VISN) strategic plans, however, hardly mention efforts to implement the changes the Prescription calls for. As a result, VA faces many issues concerning the extent to which its hospitals should change work processes.

Community hospitals try to control costs by contracting for a wide variety of patient and nonpatient care services. By doing so, hospitals can shift some costs from fixed to variable, allowing them to react to changing workloads. In other words, hospitals using contract services pay only for the services they use. In addition, use of contract employees reduces employee benefits costs. Until recently, VA's legislative authority did not permit it to contract for patient care services. VA is now, however, increasingly exploring options to contract for both patient and nonpatient care services.

Although we found no studies that identify the number or percentage of community hospitals using contract services, annual surveys of hospital executives by Modern Healthcare suggest that this is a growing trend. The services hospitals most frequently contract for include food service, emergency services, housekeeping, laundry, equipment maintenance, and pharmacy services. Between 1994 and 1995, the number of hospitals surveyed that reported using contract services increased, particularly for emergency room, financial management, equipment maintenance, and physical and rehabilitation therapy services. (See table 8.1.)

Controlling costs was the main reason chief executive officers (CEO) cited for using contractors to provide support and business services. Nearly 60 percent of the executives responding to Hospitals & Health Networks' Fifth Annual Contract Management Survey in 1995 cited cost as a reason for contracting for support services; 56 percent cited the need to obtain specialized expertise; and 42 percent cited the ability to downsize the workforce. Similarly, slightly more than half of the respondents said that they contract for business services to contain costs and take advantage of vendors' specialized expertise. Cost was not as much a factor in hospitals' decisions to contract for clinical services. Respondents most often cited the need to obtain specialized expertise (54 percent) and difficulty in recruiting staff (52 percent) as reasons for contracting for such services.

Lack of capital appears to be a major factor in decisions to contract for diagnostic imaging services. A diagnostic imaging contractor provides such services as mobile computed tomography, magnetic resonance imaging, single photon emission computed tomography, ultrasound, and nuclear medicine.
Hospitals that cannot afford to purchase, or justify on the basis of workload, such equipment, which may cost $2 million or $3 million, may purchase the service from a contractor. Another reported trend is for contractors to hire hospital employees. For example, when Marriott contracts to provide food service operations, it may hire hospital employees. Contracting for food service operations can save money because contractors generally pay lower wages than hospitals. Overall, hospital executives appeared satisfied with use of contract services. Over 90 percent of hospital executives participating in Hospitals & Health Networks’ 1995 survey were very or generally satisfied with contracts for clinical, support, and business services. More than one-third of the nation’s hospitals regularly use temporary staffing agency personnel. The nursing shortage of the mid-1980s led hospitals to rely on temporary contract nursing staff to meet staffing requirements. Some inner-city hospitals reportedly pay $50 an hour for such personnel. Temporary staffing agencies (1) help hospitals meet staffing shortages and (2) allow nurses flexibility in their work schedules. Hospital administrators like using agency personnel because it avoids the costs of providing insurance and other benefits to permanent employees. On the other hand, permanent employees often complain that use of agency personnel makes it harder to maintain continuity of care. Many also resent the significantly higher hourly wages that agency nurses receive without having to assume the same nonclinical responsibilities as permanent staff. The number of nurses working in independent contract positions is increasing. Despite the increased use and considerable cost of such nurses, we found little information on them. A 1990 survey of registered nurses in Illinois (66,005 out of 117,796 nurses responded), however, found that agency nurses received higher hourly wages but fewer benefits than permanent hospital staff nurses. Hospital staff nurses were more likely than agency nurses to receive pension plans, health and dental insurance, reimbursement for continuing education, child care services, and parking. Historically, VA hospitals have not been allowed to contract for patient and nonpatient care services to the same extent as community hospitals. Now that legislative barriers to such contracting have been removed, VA expects its hospitals to increasingly use contract services when they are less expensive and of equal or better quality. Until fiscal year 1994, VA was, in general, prohibited from contracting for direct patient care services, such as nursing services, which are currently provided by federal employees. Section 8110(c) of title 38 of the U.S. Code generally precluded VA from entering into contracts under which VA direct patient care, or activities incident to direct patient care, would be performed by non-VA personnel. VA interpreted activities such as dietary and laundry services as incident to direct patient care and therefore exempted them from efforts by the Office of Management and Budget to have agencies contract out functions previously performed by federal employees. The Veterans’ Benefits Improvements Act of 1994 (P.L. 103-446, title XI, section 1103) suspended these requirements for fiscal years 1995 to 1999. 
The Secretary of VA must, however, (1) ensure that contractors give priority to former VA employees displaced by contract awardees and (2) provide former VA employees all possible help in obtaining other federal employment or entering job training programs.

In August 1995, the Under Secretary for Health distributed criteria for potentially realigning VA facilities and programs to help field managers identify opportunities for improving efficiency. Several criteria focus on contracting for services when the community offers the same kind of service of equal or better quality at a lower cost. While the criteria present hypothetical examples of situations in which a VA facility would purchase a service from another facility rather than provide it directly, field managers could also interpret the criteria to include situations in which a private contractor would be hired to operate services, such as laundry and food services, within a VA facility.

Public Law 104-262, which became law on October 9, 1996, removed additional barriers to expanded VA contracting. Specifically, it (1) expanded the types of professionals and services for which VA may contract, (2) simplified procedures for complying with federal procurement processes when contracting with commercial firms, and (3) permanently eliminated the restriction on contracting for patient care-related services. Several VISN business plans have identified efforts to contract for patient and nonpatient care services using this expanded authority:
- VISN 7 (Atlanta) plans to purchase laboratory services from more cost-effective non-VA providers.
- VISN 12 (Chicago) expects to save 40 percent on staffing and 25 percent on other costs by contracting out selected administrative, clinical, and support services. Among the activities the network is considering for contracting are grounds maintenance, warehousing, and fire prevention.
- VISN 14 (Grand Island) is reevaluating its in-house provision of dialysis services to determine whether it would be less expensive to contract for the services. It is also weighing the possibility of sharing its other dialysis resources with community providers.

To cope with rapidly changing workloads and help contain staffing and benefits costs, community hospitals are using more part-time and intermittent nursing employees. Although VA also uses part-time and intermittent employees, such use has declined in the past 5 years. In addition, some community hospitals are developing regional staffing pools to share personnel among facilities. VA officials did not know of any personnel sharing among its facilities but believe integration of VA medical centers may encourage this practice.

The use of part-time and intermittent employees provides several advantages as well as disadvantages to both staff and hospitals. First, the use of part-time and intermittent employees can enable hospitals to cost-effectively meet staffing needs due to changes in patient loads, case mix, and vacancies. Intermittent employees generally receive higher wages instead of benefits and have more control in scheduling their work assignments than do part-timers. Second, using part-time and intermittent employees allows hospitals an expanded pool of nurses from which to recruit and the ability to retain nurses who might have left the workforce or sought other employment if their families' situation changed.
Nurses often prefer part-time or intermittent work because it gives them greater flexibility in scheduling their work hours and more time to spend with their families and reduces stress. By requiring intermittent employees to work a minimum number of shifts, weekends, and holidays, hospitals also make it easier for full-time staff to schedule time off. The use of part-time and intermittent employees also has disadvantages, however. Programs that allow considerable movement of such staff among work units may have difficulty keeping intermittent employees abreast of hospital policies and procedures. On the other hand, programs that allow such employees to work in only a limited number of units may have difficulty meeting staffing needs without relying on outside staffing agencies. Finally, intermittent employees are often viewed as lacking a permanent staff's commitment to an organization.

One hospital, Tampa General Hospital, addressed these problems by organizing its intermittent staff, including registered nurses, licensed practical nurses, paramedics, certified surgical technicians, mental health technicians, and emergency medical technicians, into unit-based and divisional pools. The unit-based pool places intermittent employees under the direct supervision of the unit nurse manager. Although assigned to a specific unit, such employees receive a pay differential as well as retirement and Social Security benefits. In contrast, Tampa General's divisional nursing pool is intended for employees who want greater flexibility in scheduling and assignments. Staff in the divisional pool work at least 16 weekend hours every month and one 8-hour shift during the Thanksgiving, Christmas, and New Year holiday season. The nursing pools have allowed the hospital to decrease its use of overtime and staffing agencies, according to hospital officials, and offered other advantages. First, the unit-based pool has allowed the hospital to meet fluctuating demand or cover vacancies with nurses familiar with the unit. Second, when intermittent employees convert to permanent status, orientation costs are typically lower than for newly hired nurses.

Between 1966 and 1986, the percentage of nurses working part time in hospitals ranged from 15 to 20 percent. The percentage of nurses working part time increased to 26 percent in 1988, according to another source. Similarly, in a 1993 study conducted by the Florida Hospital Association, 47 percent of the hospitals surveyed indicated that they used intermittent contract staff only when needed. In addition, 40 percent of the hospitals reported that they had "float pools" to meet staffing needs. Float pools comprise hospital staff who agree to work in different units as patient loads and case mix change. The survey did not include data on use of nurses obtained from nursing agencies. Almost half of the more than 7,000 nurses responding to the Patient Care Survey of the American Journal of Nursing reported that part-time or intermittent registered nurses have been substituted for full-time registered nurses at their facilities; two out of five reported the substitution of unlicensed auxiliary personnel for registered nurses. Nurses in the Pacific region reported significantly higher rates of substitution. Nurses in the Northeast and East North Central regions reported the greatest cutbacks in the use of registered nurses.
Like community hospitals, VA uses part-time and intermittent nurses and other health care professionals to increase its flexibility in meeting changing workloads and patient mix. Unlike community hospitals, however, VA is decreasing its use of part-time and intermittent nurses. Overall, the use of part-time and intermittent nurses in the VA health care system has declined steadily since 1992, when about 13.3 percent of VA nurses worked as part-timers or intermittents. At the end of fiscal year 1995, about 11.2 percent of VA nurses and nurse anesthetists worked as part-timers or intermittents. According to VA's chief consultant in its Nursing Strategic Health Care Group, hospitals that have had to reduce staffing due to budget problems have sometimes eliminated part-time and intermittent nurses to protect full-time, permanent nurses. This, she said, can reduce the hospital's flexibility in responding to changing workloads. VA statistics on part-time and intermittent employees provide systemwide information on physicians, dentists, and nurses but have no data on other types of health care workers. In addition, VA officials did not know of any studies or data on the actual extent to which part-time and intermittent nurses and other staff are working in VA hospitals.

To date, VA's restructuring efforts have not specifically focused on use of part-time and intermittent staff. Neither VA's Vision for Change nor Prescription for Change addresses the use of part-time and intermittent employees. Nor do any of the VISN business plans address the issue. VA officials, however, agreed that use of part-time and intermittent employees can increase flexibility and reduce costs. They also said that use of part-time and intermittent nurses probably varies within the VA system due to local conditions, such as the supply of nurses.

Some community hospitals have developed staffing networks to pool hospital personnel geographically. For example, rural hospitals in Vermont have developed an interhospital staff-sharing system to alleviate staffing shortages. Under the pooling arrangements, some hospitals lend staff more often than they borrow them from the pool, while others borrow more than they lend. All hospitals, however, cited advantages. For example, hospitals borrowing staff from the pool reported that it allowed them to (1) keep a department or unit in a small institution open and (2) avoid having to transfer patients because of short staffing. Similarly, hospitals lending staff through the pool said that it gave them an alternative to sending staff nurses home without pay during low-demand periods. Lending hospitals are responsible for ensuring the competency of pool members. Employee participation is voluntary, but those who participate are (1) paid $3 to $5 above their regular hourly salary, depending on when they work, and (2) reimbursed for travel. Even with the salary differential, hospitals paid less than they would have if they had obtained staff from a nursing agency. Other advantages cited for the pooled resources were better communication among hospitals in the pool, avoiding the need to use more costly and less reliable staffing agencies, and sharing innovative approaches and best practices as pool staff were exposed to other hospitals' care management practices. Hospital administrators plan to expand the pool to include other health care providers in smaller, more geographically compact Vermont communities. For example, hospitals and home health agencies might share staff.
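As a rough illustration of the pool's economics, the sketch below compares covering one shift through the staff-sharing pool with covering it through a staffing agency. The $3-to-$5 differential and travel reimbursement mirror the Vermont arrangement described above, and the $50-an-hour agency rate echoes the figure reported earlier for some inner-city hospitals; the base wage and travel amounts are hypothetical.

```python
# Hypothetical cost of covering one 8-hour shift through a staff-sharing
# pool versus a staffing agency. The differential and travel reimbursement
# mirror the Vermont arrangement; the base wage, travel amount, and agency
# rate are illustrative.

def pool_shift_cost(base_hourly: float, hours: float = 8.0,
                    differential: float = 4.00,
                    travel_reimbursement: float = 20.00) -> float:
    """The borrowing hospital pays the regular wage plus a $3-$5
    differential and reimburses travel."""
    return (base_hourly + differential) * hours + travel_reimbursement

def agency_shift_cost(agency_hourly: float = 50.00, hours: float = 8.0) -> float:
    return agency_hourly * hours

pool = pool_shift_cost(base_hourly=22.00)   # (22 + 4) * 8 + 20 = $228.00
agency = agency_shift_cost()                # 50 * 8 = $400.00
print(f"Pool: ${pool:.2f}; agency: ${agency:.2f}; savings: ${agency - pool:.2f}")
```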
VA central office officials did not know of any VA hospitals that have set up float pools comparable with those in community hospitals but said that such programs might be considered in the future, particularly by hospitals under common management.

To maintain patient care while coping with staff shortages, community hospitals increasingly cross-train both clinical and support personnel. VA's Prescription for Change calls for increased cross-training and multi-skilling of VA personnel; VISN strategic plans, however, generally do not discuss plans to accomplish this. Initially, community hospitals used cross-training to help cope with the shortage of nurses during the 1970s and 1980s. As the nursing shortage eased, the demands for greater efficiency driven by managed care and payment reforms became the impetus for cross-training. Cross-training has therefore focused heavily on reducing the need for, or making more effective use of, nurses in delivering patient care. Hospitals have developed individual programs to meet their needs and labeled both their programs and staff positions differently. Rural hospitals, in particular, have had to develop programs to cope with chronic shortages of medical personnel, and they have reportedly done so successfully.

Clinical personnel are usually cross-trained within their general area of expertise to allow them to expand the scope of their practice. For example, a registered nurse normally working on general surgical cases might be cross-trained to assist with orthopedic surgery. Similarly, licensed practical nurses may be trained to assume certain duties traditionally performed by registered nurses. Cross-training allows hospitals to more efficiently use resources by expanding the number of clinical and nonclinical staff trained to perform a given task. For example, if nursing assistants are trained to perform nonpatient care duties, such as changing bed linens, they can substitute for housekeeping staff.

The 1996 Patient Care Survey of the American Journal of Nursing, a national survey including responses from 7,560 nurses, reported that nurses are caring for more patients, have been cross-trained to assume more nursing responsibilities, and have substantially less time to provide all aspects of nursing care. In its December 1995 report, Critical Challenges: Revitalizing the Health Professions for the Twenty-First Century, the Pew Health Professions Commission recommended that team training and cross-professional education continue and expand.

VA similarly supports cross-training and multi-skilling, and, according to a VA official, VA facilities are using physician extenders and other allied health professionals. For example, the Acting Director of Surgical Services told us that VA started cross-training some technicians in intensive care, respiratory therapy, and laboratory services. Neither he nor other VA officials, however, could provide data on the extensiveness of cross-training in the VA system.

In VA's Prescription for Change, the Under Secretary for Health described several actions to expand the use of cross-training and multi-skilling. First, under its goal of improving customer service, VA plans to establish new positions for multi-skilled caregivers as part of efforts to empower staff to plan and execute their work to best respond to patient needs. In addition, to help make VA an employer of choice, VA established a work group to examine cross-training, employee development, and other workforce issues.
Finally, in March 1995, VA revised its directives on the scope of practice for nurse practitioners, physician assistants, and clinical pharmacists to better utilize such personnel. The revised guidance also established prescribing guidelines for these professions. VA's Office of Academic Affairs is also supporting cross-training through its Primary Care Education program and "firm" system. VA officials told us that the programs emphasize team building among multidisciplinary staff rather than cross-training of staff to perform more than one job.

VISN strategic plans generally support the need to enhance training of hospital personnel but focus more on retraining personnel to work in outpatient settings and to provide primary care. This focus is appropriate in the short term, given the significant shift in VA care from inpatient to outpatient settings.

Another approach community hospitals sometimes use to reduce personnel costs, one that closely relates to cross-training, is expanding the roles and responsibilities of nursing assistants and other ancillary personnel. Likewise, VA supports expanded roles for nursing assistants and other ancillary personnel, but the extent to which VISNs are increasing their use is unclear.

If money were no object and the supply of nurses endless, hospitals would undoubtedly prefer to use only registered nurses to provide direct patient care. But, with the shortage of registered nurses in the 1980s and increasing pressures to contain costs, community hospitals increasingly sought to develop lower cost alternatives. One such alternative is the use of specially trained nursing assistants, often referred to as nurse extenders, to assume many tasks normally performed by registered nurses. This reduces the need for higher-paid nurses and allows registered nurses to use their advanced education and experience to enhance all patient care activities. Registered nurses remain pivotal in coordinating care in hospitals, sometimes as case managers. Nursing assistants' roles, however, have been changing. In some community hospitals, nursing assistants, under the direction of a registered nurse, are assuming more responsibility for direct care.

Under nurse extender programs, nursing assistants or other ancillary personnel are generally trained to replace or assist registered nurses in performing relatively simple bedside care such as changing dressings and taking vital signs. In addition, they sometimes help nurses in providing total bedside care. Still others are trained to help telemonitor patients, lift patients, administer electrocardiograms, or provide physical therapy. Registered nurses assume additional management and supervisory responsibilities to monitor the nurse extenders. Meanwhile, nurse extenders relieve registered nurses of many routine patient care-related duties. Creating nurse extender positions is sometimes accompanied by changing the roles of other support personnel such as those performing dietary, housekeeping, and transportation services.

Following are examples of three hospitals' efforts to expand the roles of nursing assistants and other ancillary personnel:

A Southern Maryland hospital developed a new patient care delivery model to respond to a nursing shortage. The hospital, forced to close 10 percent of its beds because of a staffing shortage, could reopen the beds only through the use of agency nurses, a temporary and costly option.
To reduce its need for registered nurses, the hospital created two new patient care positions—nursing technician and patient care assistant—by expanding the duties of nursing assistants and housekeepers. It expanded the former nursing assistant job description to include more technical duties previously performed by nurses. It reassigned unskilled tasks to personnel in other departments. The hospital pairs nursing technicians with the same registered nurses to establish strong working relationships. The hospital expanded the housekeepers' role to include delivering water, mail, and linen directly to patients; accompanying discharged patients to the front door; delivering specimens and requisitions to other departments; helping nurses with patient turning and positioning; applying side rails and assembling traction for unoccupied beds; and cleaning equipment. Before assuming these expanded duties, the former housekeepers were trained in infection control procedures and body mechanics. Many of the tasks the patient care assistants assumed had previously been done by nursing assistants. Unlike the former housekeepers, who reported to the general services department, patient care assistants report directly to the care unit. The hospital also expanded the roles of other nonpatient care staff. For example, dietary aides distribute and collect patient meal trays, a task previously performed by nursing assistants. The hospital reported that it reduced by 12 percent the number of registered nurses needed by shifting non-nursing tasks to nursing technicians and patient care assistants. The hospital also reported increased employee satisfaction among nursing technicians and patient care assistants resulting from their interaction with patients and nurses, improved documentation and care planning, better continuity of care from shift to shift, more time for patient teaching, and a cleaner unit.

Boston's University Hospital developed a patient care technician position. Patient care technicians, who must have 4 years of education beyond high school, complete a formal 8-week training program followed by a 3-month probationary period. As in the Southern Maryland hospital, the patient care technician worked closely with a registered nurse. An official from the Boston University Medical Center, however, told us that the hospital discontinued the program because it was not cost-effective. She said that the positions had high turnover rates because the program was limited to individuals with college degrees in fields other than nursing, and such individuals often returned to their original fields or took other jobs. The program also suffered from inadequate training of nurses in delegating duties to the technicians.

Braintree Hospital (in Massachusetts) developed a rehabilitation technician position. In addition to the duties normally performed by a nursing assistant, the rehabilitation technician (1) prepares narrative documentation; (2) provides special eye and skin care, bowel care, simple treatments and dressings, and tube feeding; and (3) applies hot and cold compresses.

Just as nurse extenders are reducing community hospitals' demand for registered nurses, nurse practitioners, physician assistants, and nurse midwives often substitute for physicians. Several factors have influenced this trend, including the need to lower health care costs and improve access to care for the poor and residents of rural areas.
In 1990, Medicare and Medicaid began reimbursing certain nonphysician health professionals for the care they deliver, allowing them to expand their roles and perform functions previously performed by physicians.

Many nurses frown on the use of nurse extenders and other unlicensed assistive personnel. For example, the Patient Care Survey of the American Journal of Nursing revealed that only about 13 percent of the nurses surveyed believed the use of such personnel improved patient care where they worked. The responses are somewhat misleading, however, because only about 42 percent of the respondents reported the hiring of auxiliary personnel to provide direct patient care previously provided by registered nurses.

As in the private sector, VA is expanding the scope of work of certain paraprofessionals to enable them to substitute for physicians and pharmacists. Neither central office nor VISN strategic plans, however, focus on expanded use of nurse extenders and other personnel to substitute for registered nurses. In March 1995, VA issued revised policy directives expanding the scope of practice for physician assistants and nurse practitioners. It similarly revised policy directives covering clinical pharmacists in May 1996. VA revised prescribing guidelines to allow certain advanced practice nurses to prescribe medications without a physician's review. In VA's Prescription for Change, the Under Secretary for Health called for better utilization of nurse practitioners, physician assistants, and clinical pharmacists. Subsequently, a VA work group was charged with identifying barriers to increased use of nurse practitioners, clinical pharmacy and nurse specialists, and physician assistants. The work group submitted its report to the Under Secretary for Health in August 1997. The report identified informal barriers to greater use of such personnel. According to a VA official, the primary barrier is VA's culture, which has been physician driven and therefore closed to expanded roles for allied health professionals. Neither the Prescription nor the VISN strategic plans identify efforts to expand the use of nurse extenders or other auxiliary personnel to substitute for registered nurses. Several facilities have identified efforts to create such positions, however, as they develop patient-centered care approaches.

Many community hospitals are using the above-mentioned and other novel practices to fundamentally reengineer the provision of hospital care. Generally referred to as "patient-centered" care (sometimes "patient-focused" care), such reengineering typically involves creating care teams that include registered nurses along with specially trained nurse extenders and ancillary personnel cross-trained to offer maximum flexibility and interchangeability in providing patient and nonpatient care services. Many VA hospitals are similarly developing patient-centered care programs for both inpatient and outpatient care. Although no single definition of patient-centered care exists, such programs often involve changing how care is managed using such tools as clinical guidelines (see ch. 11); case management; strengthened discharge planning; and shared decision-making among physicians, nurses, and allied professionals. Patient-centered care also focuses on customer satisfaction by increasing involvement of patients and their families in treatment decisions and reducing the number of a patient's caregivers during a hospital stay.
Finally, patient-centered care often involves decentralizing ancillary services, moving many services, such as X rays and pharmacy, to the wards.

Patient-centered care involves developing integrated care teams. Many hospitals have reorganized their nursing and other patient and nonpatient care personnel into care teams. Under some programs, the team includes not only nursing staff but also pharmacists, respiratory therapists, and other caregivers with functional expertise and training. Team members' work responsibilities typically overlap so that staff can better respond to both patients and management. By allowing team members to share responsibilities, hospitals can eliminate the inefficiencies associated with rigidly defined job responsibilities. Including the task of cleaning and preparing rooms in the work responsibilities of all team members, for example, avoids waiting for a housekeeping staff member to prepare a room—a common cause of delays in admitting patients. Although teams are a central feature of patient-centered care, their makeup and structure vary. For example, one approach relies mainly on expanded caregiver roles to improve efficiency; other approaches feature organizational changes in which staff from other units, such as the pharmacy, are supervised by the care team leader, typically a registered nurse.

Another feature of patient-centered care involves reducing the number of staff interacting with patients. Patient-centered care generally reduces the number of caregivers interacting with a given patient during a 3-day stay from as many as 55 to fewer than 15.

A third feature of patient-centered care involves redesigning wards to bring ancillary services closer to patients. Hospitals group patients with similar care needs together on a single ward rather than dispersing them among several wards. By grouping like patients together, hospitals can move to that ward the ancillary services for about 90 percent of the procedures these patients require. Hospitals can use space previously used for supplies and the central nursing station for high-volume ancillary services such as pharmacy, laboratory, radiology, and physical therapy. With supplies, medical records, and caregivers closer to patients, hospitals may also move the traditional nursing station closer to patients. In addition, hospitals may locate work areas for preparing patient charts and other functions at smaller units throughout the ward. Placing ancillary services, such as X ray, laboratory, pharmacy, and rehabilitation, on the patient floor often greatly reduces travel time from patients' rooms to the service area. In addition, X ray technicians, medical technologists, pharmacists, and therapists can become more a part of the care team. Hospitals also report that this feature reduces the time needed to obtain test results. For example, one hospital reported that it reduced the time required to obtain X ray results from almost 2-1/2 hours to just 28 minutes.

Although we found little data on the extent to which community hospitals have implemented patient-centered care, nearly half of the hospital CEOs responding to a 1992 survey indicated that they were either planning to implement or already implementing patient-centered care. One health analyst predicts that within 10 years, most hospitals will have patient-centered care programs.

In June 1995, the Veterans Affairs Nursing Board of Directors established a task force to study patient-centered care.
The task force evaluated over 40 patient-centered care delivery systems in both VA and the private sector. In April 1997, the task force issued a resource guide, VAlue: Patient Centered Health Care, which (1) reviews the models currently in use to provide "templates for transforming traditional illness-based organizations into transdisciplinary, cost-effective, health-focused systems" and (2) provides a self-assessment tool to allow facilities to identify their reorganization status. The task force analyzed 20 community hospitals and 13 VA medical centers adopting patient-centered care models in their outpatient or hospital care programs. The following examples illustrate VA's use of patient-centered care in hospital settings:

The Iowa City VA medical center is developing a patient-centered care approach that organizes staff and services around patient needs. The medical center is creating four care teams: critical/special care, psychiatry, medical/neurology, and surgical. Each team includes a wide range of direct care providers such as registered nurses, nursing assistants, housekeepers, dieticians, physical and respiratory therapists, and social workers. The program is intended to (1) increase staff and patient satisfaction, (2) redirect scarce resources to patient care activities, (3) improve patient care processes, (4) reengineer medical center systems, and (5) redesign jobs and work processes.

The Providence VA medical center has established an integrated inpatient/outpatient firm system. As the medical center's inpatient workload decreased and its outpatient workload increased, nursing staff were shifted from inpatient to outpatient care. In addition to registered nurses, the firm includes physician assistants, nurse practitioners, licensed practical nurses, clerks, and patient care assistants. The newly created patient care assistant position combines skills from nursing, medicine, and medical administration. Although the firm is based in outpatient care, physicians, nurse practitioners, and social workers make daily rounds of firm system patients in the hospital. This provides continuity of care and helps staff plan for discharge and postdischarge follow-up. VA's analysis of the program found that (1) access to care greatly improved, (2) waiting times decreased, (3) patient and staff satisfaction improved, and (4) patient education improved.

The San Diego VA medical center restructured its nursing service to create self-directed teams to decentralize management and empower staff. The program decentralized clinical specialists to the wards and reduced the number of assistant chiefs of nursing service. The program restructured the role of the head nurse into a new position—clinical services director—and developed a new staff nurse facilitator role. VA's analysis of the program found that the restructuring energized the nursing staff and promoted creativity.

The Louisville VA medical center is developing a patient-centered care pilot project based on a program at the University of Arizona. The medical center has developed new positions for multi-skilled administrative and clinical workers. The administrative position, the patient support associate, includes duties from emergency medical services, escort services, and food and nutrition services.
The clinical multi-skilled position, the patient care associate, adds duties relating to respiratory therapy, phlebotomy, rehabilitation medicine, and electrocardiograms to the existing duties of nursing assistants and licensed practical nurses. Staffing and budget considerations have delayed the pilot’s implementation.

Little quantitative data exist on the benefits of patient-centered care. Hospitals that have implemented patient-centered care, however, have reported improved physician satisfaction. Many hospital executives also see benefits to patient-centered care. Hospital executives responding to a survey conducted jointly by Hospitals and ServiceMaster cited the following reasons, among others, for establishing or developing patient-centered care programs:

- They are the best way to provide patient care (88 percent).
- They will lower expenses (55 percent).
- They grew out of the hospitals’ total quality management or continuous quality improvement programs (43 percent).
- They were part of their survival strategy (37 percent).
- They will improve their hospital’s reputation (36 percent).
- They will improve their hospital’s market share (33 percent).
- They will help attract and retain physicians (29 percent) and allied health professionals (30 percent).

Not all hospital executives responding to the survey, however, viewed patient-centered care as an improvement. Over half of the respondents indicated that they do not plan to adopt patient-centered care programs because of uncertainty about their benefits. Similarly, some VA officials expressed concern that some patient-centered care may be a veiled attempt to cut costs by reducing nursing staff.

Because staffing accounts for such a large percentage of hospital costs, many challenges remain to be addressed as VA considers transforming its hospital staffing. One major challenge involves VA’s central office convincing VISNs and individual hospitals to use their new contracting authority to seek less costly ways to provide services such as laundry, dietetics, and housekeeping. Although the Under Secretary for Health’s criteria for potential realignment encourage contracting for services when they are cheaper and of equal or better quality, VISN strategic plans generally do not address such contracting. Until VA completes improvements in information and financial management systems, VA hospitals may not have the type of reliable cost and utilization data they need to make informed decisions on contracting for services rather than providing them directly or obtaining them from another VA facility.

Another factor relating to such decisions is their effect on VA employees and the community. For example, contracting to obtain dietary services from a local provider might save jobs in the community and provide employment opportunities for current employees without relocating them. On the other hand, providing the services through one consolidated VA location might save jobs within the VA system and improve efficiency at the gaining VA facility through economies of scale. Such action would, however, have a greater adverse effect on the community standing to lose the jobs.

In addition, VISNs and individual VA hospital directors will have to make difficult choices about using part-time and intermittent employees. For example, in an era of downsizing, to what extent should VA protect full-time permanent employees by eliminating positions for part-time and intermittent employees even if doing so decreases staffing flexibility?
Similarly, can VA devise alternatives to part-time and intermittent employees, or different ways of using them, that achieve comparable efficiency improvements without the associated disadvantages? For example, VA facilities might be able to save resources by pooling staff with each other or with nearby community hospitals.

Finally, community hospitals typically pay differentials to part-time and intermittent employees, but such differentials are not available to VA employees. Offering pay differentials might encourage some full-time staff to shift to part-time or intermittent status. In addition, it might make it easier for VA to compete with community hospitals for available staff. It is not clear, however, to what extent VA is having difficulty filling part-time and intermittent positions under its current pay system. If such staff can be easily recruited under the current system, adding pay differentials could needlessly increase operating costs.

VA’s Prescription for Change addresses the need for increased cross-training, development of VA staff’s skills, and physician extender programs. Similarly, VA issued a resource guide to patient-centered care. Decisions on starting or expanding the use of such programs are difficult, however, because the private sector does not uniformly support the concepts. For example, some have expressed concern that using nurse extenders and patient-centered care sacrifices quality of care to reduce costs. VISNs and hospital directors thus face difficult challenges in planning for the use of such personnel and programs to ensure improvement of patient care.

Materials management refers to the systems, functions, and tasks involved in obtaining goods, such as pharmaceuticals, medical equipment, and other supplies, and moving them to where they will be used. It involves not only hospitals but also manufacturers and distributors. Materials management accounts for 25 to 45 percent of a hospital’s operating budget. Effective materials management (1) allows nursing staff to spend more time with patients and (2) reduces the staff, inventory, space, and other resources needed to ensure that supplies are available when needed.

Community hospitals are improving materials management, reducing operating costs in several ways. For example, they may

- join purchasing groups and alliances to take advantage of volume discounts;
- use just-in-time and stockless delivery to manage inventory costs;
- use the hospital formulary to reduce pharmacy costs;
- change the methods used to procure high-technology equipment, such as purchasing remanufactured equipment, leasing rather than purchasing equipment, and centralizing procurement; and
- more effectively use high-technology equipment through sharing arrangements and joint purchases.

The VA system is a leader in materials management and, in some cases, such as the use of purchasing alliances, VA actions preceded widespread private-sector efforts by many years. Changes in materials management, however, create policy issues and management challenges. For example, the Congress faces decisions about the extent to which nonfederal health care facilities should be allowed to use federal supply schedules (FSS). Similarly, VA faces challenges in encouraging its health care facilities to take full advantage of the changes in materials management, such as just-in-time delivery, instituted by its National Acquisition Center (NAC) and in realizing financial benefits from such changes.
Joining purchasing groups and alliances is one way community hospitals strengthen materials management. By representing multiple hospitals in negotiations with manufacturers, purchasing groups can obtain volume discounts on pharmaceuticals, medical equipment, and other supplies. VA’s joint procurement efforts predate private-sector efforts by about 25 years.

During the late 1970s, community hospitals formed purchasing groups to buy equipment and supplies at discounted prices. Initially, these groups were formed mainly at the local, state, or regional level. Subsequently, some of the groups joined together to form large regional or national organizations, known as hospital purchasing alliances. Alliances take advantage of their relatively large membership to negotiate larger discounts from manufacturers and suppliers. Although some alliances use diverse suppliers, others use sole-source procurers (prime vendors) to secure volume discounts. Some alliances also provide their members other types of assistance and expertise to help them obtain more managed care contracts. Supplies available through purchasing groups and alliances include furniture, medical and surgical supplies, laboratory supplies, nonmedical equipment, X ray film, pharmaceuticals, and office and medical equipment.

A 1995 survey by Modern Healthcare of purchasing groups and alliances identified over 12,000 hospital memberships as of September 30, 1994, a 13-percent increase over prior-year memberships. However, most hospitals belonged to two or more purchasing groups and alliances and were therefore counted more than once. According to Modern Healthcare, in 1994 each of the 10 largest purchasing groups/alliances represented more than $1 billion in annual purchases of supplies and equipment for their members. The two largest purchasing alliances responding were American Health Care Systems/Premier Health Alliance, with $6.2 billion in contract purchases in 1994, and Voluntary Hospitals of America, with contract purchases of $5.6 billion in 1994.

In 1995, two of the largest alliances had contract compliance requirements that specified the percentage of eligible goods that members must purchase under contract. This enabled the alliances to negotiate significant discounts from vendors. American Health Care Systems required its member hospitals to buy 90 percent of eligible goods under its corporate contracts. Similarly, Voluntary Hospitals of America established a committed buying program intended to save members 12 to 14 percent on 13 product categories; participants had to achieve 95-percent compliance on contracts with seven vendors and, in return, received quarterly dividends from an incentive pool according to their purchasing volume. UHC, Inc., a purchasing alliance serving 68 academic medical centers, estimates that it saves about $1 million a year for each alliance member. Similarly, American Health Care Systems/Premier Health Alliance, which serves 40 multihospital systems with 820 hospitals in 46 states, estimates that it negotiates savings averaging 20 percent.

VA operates one of the largest purchasing cooperatives in the United States, NAC, which has multiyear contracts valued at over $10 billion. Established in 1951, NAC supports VA’s health care delivery systems and those of other government agencies by providing an acquisition program for health care products and, since the late 1970s, managing certain FSSs.
The FSSs are based on a multiple-award contracting system in which prices are determined through negotiations with each offeror. A variety of products, including pharmaceuticals and medical and other supplies and equipment, are available from the schedules. The FSS for pharmaceuticals catalogs almost 23,000 pharmaceutical products and the prices available to federal agencies and institutions and to several other purchasers, such as the District of Columbia, U.S. territorial governments, and many Indian tribal governments. VA, which received responsibility for administering the pharmaceutical schedule from the General Services Administration, negotiates prices with drug manufacturers. VA is also the largest purchaser of products from the schedule; in fiscal year 1996, it purchased about $922 million worth of products, or about 71 percent of the government’s purchases from the pharmaceutical FSS. Similarly, VA has received responsibility for administering the FSS for medical products and certain nonperishable subsistence items such as dietary supplements. Sales under the FSS medical products programs managed by NAC exceeded $529 million in fiscal year 1996. Although VA manages and is the largest purchaser of products from the FSS for medical products, other government agencies accounted for approximately $208 million of the $529 million in sales.

NAC has three divisions:

- The Pharmaceutical Products Division solicits, awards, and administers national contracts for pharmaceutical products and medical gases and three FSSs.
- The Medical Care Products Division administers FSSs of such diverse products as medical supplies and equipment; dental equipment and supplies; wheelchairs; X ray equipment and supplies; and certain food items, including cereals, cookies, and crackers.
- The Medical Equipment Division administers both FSS and direct delivery contracts for highly technical equipment, such as computerized axial tomographic scanners, magnetic resonance imagers (MRI), positron emission tomography (PET) scanners, and other systems used in federal medical facilities.

In addition to the FSS, NAC uses national contract awards to negotiate lower prices for certain high-volume products. FSS is a multiple-award type of contract; national contracts, however, are competitively bid, single-award contracts for 1 year, typically with four 1-year options. According to NAC, the leveraged national buying power results in better prices than can be obtained under the FSS. The national contracts are mandatory for use by VA facilities.

In addition, NAC uses blanket purchase agreements (BPA) and incentive agreements to encourage effective procurement. BPAs are agreements with authorized suppliers of pharmaceutical products. They essentially are charge accounts that provide medical centers a simple way of obtaining supplies and services for which demand is repetitive. Incentive agreements range from volume rebates and free goods based on quantities purchased to special incentive programs developed for Veterans Integrated Service Networks (VISN). The use of BPAs, NAC reports, has enabled both VA and DOD to save significant amounts of money. It reported that one contractor’s BPA saved VA $4 million and DOD $5.5 million in 1 year. NAC noted that VISN 8 (Bay Pines) avoided $500,000 in expenditures by using a BPA with one contractor. NAC also reported that a second contractor’s BPA saved VA and DOD over 35 percent.
NAC’s Medical Equipment Division also administers an FSS under which contracts with clinical laboratories are negotiated on a cost per test (CPT) basis. Under this newly established program, contractors must provide a price for each test they can perform. The price per test covers equipment use; all consumables, reagents, standards, controls, and supplies; all necessary service and maintenance; and training for government personnel. Procurement through CPT contracting allows hospitals to reduce capital expenditures while maintaining access to state-of-the-art equipment, as the break-even sketch below illustrates.

In addition to the economies of scale available through NAC, several VISN strategic plans identify further efforts to consolidate purchasing:

- VISN 19 (Denver) plans to establish a Rocky Mountain Network Acquisition Center to consolidate contracting activities and determine possible savings through larger scale purchasing arrangements and enhanced contracting expertise. The VISN strategic plan indicates that the network acquisition center will do essentially the same things NAC does but at a regional level. The VISN plans to use NAC for items that can be obtained at a lower price through national procurement.
- VISN 17 (Dallas) plans to consolidate network procurement of open market items.
- VISN 5 (Baltimore) plans to establish a section of its Acquisition and Material Management Service to contract at the network level for leases, community nursing home services, halfway houses, preventive maintenance services, and supply contracts that exceed $25,000.
- A Contract Service Center, located at the Milwaukee Medical Center in VISN 12 (Chicago), has been providing centralized consolidated purchasing to the network area since 1992. The center now handles contracting of real property leases, equipment leases, architect/engineer services, sharing agreements, medical equipment maintenance, transportation, blood and blood products, home oxygen and durable medical equipment, nursing home and extended care, elevator maintenance and inspection, and fire alarm maintenance and inspection. The VISN reports that the Center generates yearly savings of over $1 million through a variety of methods, including an active BPA and economies-of-scale quantity discounts. The Center received one of the Vice President’s National Performance Review Hammer Awards in 1995. The network plans to expand the scope of goods and services available through the Center.
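As noted above, CPT contracting substitutes an all-inclusive per-test price for a capital purchase. The following sketch is purely illustrative; the report gives no actual equipment or per-test prices, so every figure is an assumption. It shows the basic break-even logic a facility would work through: per-test pricing tends to favor lower volumes, while ownership amortizes better at high volume.

```python
# Hypothetical sketch comparing outright purchase of a laboratory
# analyzer with a cost-per-test (CPT) contract of the kind NAC
# negotiates. All figures are illustrative assumptions, not VA data.

def purchase_cost(tests_per_year: int, years: int) -> float:
    """Annualized cost of owning: capital spread over useful life,
    plus an annual service contract and per-test consumables."""
    capital = 250_000            # assumed purchase price
    service = 20_000             # assumed annual maintenance contract
    consumables_per_test = 4.00  # assumed reagents/supplies per test
    return capital / years + service + consumables_per_test * tests_per_year

def cpt_cost(tests_per_year: int, price_per_test: float = 9.50) -> float:
    """Under CPT, the per-test price covers equipment use, consumables,
    service, and training, so there is no separate capital outlay."""
    return price_per_test * tests_per_year

for volume in (5_000, 10_000, 20_000):
    buy = purchase_cost(volume, years=7)
    cpt = cpt_cost(volume)
    print(f"{volume:>6} tests/yr: purchase ${buy:>9,.0f}  CPT ${cpt:>9,.0f}")

# With these assumed prices, CPT is cheaper at 5,000 tests a year,
# roughly a wash at 10,000, and more expensive at 20,000; the
# break-even volume is what a facility would estimate before choosing.
```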
Community hospitals have been shifting to just-in-time and stockless inventory systems. VA similarly closed its supply depots in 1994 and now offers both VA and other government health care facilities a choice of conventional, just-in-time, or stockless delivery.

The just-in-time delivery technique (developed in Japan) involves vendors shipping supplies directly to customers on an as-needed basis, eliminating the need for large inventories. The supplier/vendor, rather than the hospital, maintains the bulk of the inventory. Hospitals implementing just-in-time delivery systems typically buy from a limited number of suppliers, share information about their operations with their suppliers, and eliminate certain hospital-based supply and inventory functions that the supplier now performs. Just-in-time delivery can reduce costs, increase productivity, improve utilization of equipment, and reduce the need for certain workers such as material handlers. Other hospitals have taken just-in-time delivery one step further by using stockless inventory, in which an outside vendor manages much of an organization’s supplies. Stockless inventory allows hospitals to eliminate storerooms and reduce staffing, generating significant savings. It is not clear, however, whether these savings offset the service fees paid to the suppliers. Although stockless inventory is gaining popularity, it is far from being accepted as the industry standard. A 1993 study found that just-in-time and stockless material management systems can increase hospital efficiency. For example, one small specialty hospital reported that it reduced its annual inventory value from $2.3 million to an estimated $1.2 million over a 3-year period by using just-in-time delivery.

VA, like the private sector, has been shifting to just-in-time and stockless delivery systems. Delivery options available through NAC’s Pharmaceutical Prime Vendor program include conventional, stockless, and just-in-time delivery. Historically, VA benefited from the deep discounts it obtained from manufacturers through volume procurement. Manufacturers generally delivered the products to VA’s three supply depots. The supply depots, in turn, distributed the products to warehouses operated by individual VA medical centers. This distribution system was costly—about $138 million in fiscal year 1991—and resulted in storing relatively large inventories at both the supply depots and the medical centers.

The Veterans Health Care Act of 1992 established ceiling prices for covered drugs, eliminating the pricing advantage of many of the products distributed through the depot system. Under the act, drug manufacturers must make their brand-name drugs available through the FSS to receive reimbursement for drugs covered by Medicaid. The act also requires drug manufacturers to sell drugs covered by the act to VA, DOD, the Public Health Service, and the Coast Guard at no more than 76 percent of the nonfederal average manufacturer price, a level referred to as the federal ceiling price. The FSS price may be higher or lower than the ceiling, but if it is higher than the ceiling, the protected purchasers, including VA facilities, pay no more than the ceiling price (illustrated in the sketch below).

Meanwhile, VA completed a pilot test of a just-in-time commercial delivery system for FSS pharmaceuticals through prime vendor arrangements. Under the prime vendor arrangements, medical centers, using centralized contracts, order products from the prime vendor, with delivery made directly to the medical center, bypassing the VA distribution network. Subsequently, a VA task force established in January 1993 recommended that VA phase out its depot system and move to a commercial distribution system. With the support of the Vice President’s National Performance Review, the supply depots were closed at the end of fiscal year 1994, and contracts for just-in-time delivery of drugs were instituted. Both national contract and FSS items are now distributed by the Pharmaceutical Prime Vendor program. This fee-based distribution contract allows readily available access to FSS and national contract items. In addition to conventional delivery (72 hours), the program offers both just-in-time (24 hours) and stockless (8 hours) delivery options. Just-in-time contracts for medical supplies and subsistence items were completed by 1996, affording medical facilities the same delivery options for medical supplies and equipment as for pharmaceuticals. VA expects closing the supply depots and moving to just-in-time delivery to save $168 million over 6 years.
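The ceiling-price rule just described reduces to a simple comparison: a protected purchaser pays the lesser of the FSS price and 76 percent of the nonfederal average manufacturer price. A minimal sketch follows; the dollar figures are invented for illustration.

```python
# Minimal sketch of the pricing rule for covered drugs under the
# Veterans Health Care Act of 1992: protected purchasers (VA, DOD,
# the Public Health Service, and the Coast Guard) pay the lesser of
# the FSS price and the federal ceiling price (76 percent of the
# nonfederal average manufacturer price). Prices below are invented.

def federal_ceiling_price(non_famp: float) -> float:
    """Ceiling = 76% of the nonfederal average manufacturer price."""
    return 0.76 * non_famp

def protected_purchaser_price(fss_price: float, non_famp: float) -> float:
    """Protected purchasers never pay more than the ceiling, but keep
    the FSS price when it is already lower."""
    return min(fss_price, federal_ceiling_price(non_famp))

# Example: a drug with a $100 nonfederal average manufacturer price.
print(protected_purchaser_price(fss_price=80.0, non_famp=100.0))  # 76.0
print(protected_purchaser_price(fss_price=70.0, non_famp=100.0))  # 70.0
```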
The pharmacy is estimated to account for 4 to 8 percent of a hospital’s total expenses, and higher demand for fewer drugs improves hospitals’ ability to secure discounts from manufacturers. Both VA and community hospitals are limiting the numbers and types of pharmaceuticals in their formularies to reduce costs. The effect of such actions on costs, however, may be limited because increases in other charges may offset savings in pharmacy charges.

Although hospital formularies have existed for over 150 years, early formularies simply listed all of the drugs carried by the pharmacy. Over time, formularies became a mechanism to control costs by limiting the number and types of drugs routinely stocked in the hospital pharmacy. By procuring larger quantities of a smaller number of pharmaceuticals, hospitals can negotiate volume discounts from manufacturers. Keeping fewer infrequently used drugs in the hospital’s inventory also reduces costs. Finally, further savings can accrue if a formulary convinces physicians to prescribe less expensive, but therapeutically equivalent, drugs. Some practicing physicians complain, however, that formularies infringe on their ability to select the drugs they feel are most appropriate for their patients.

VA, like the private sector, is establishing formularies to reduce costs. Historically, each VA facility has established its own formulary. VA’s Prescription for Change provided for the establishment of VISN formularies, with a national formulary to follow. VA noted that establishing a national formulary should increase standardization, decrease inventory costs, improve efficiency, and lower pharmaceutical costs through enhanced competition. VA officials told us that the VISN formularies were established as of April 30, 1996, and the initial version of the national formulary was established by June 1, 1997. According to another VA official, 22 national pharmaceutical contracts will save VA over $150 million annually, and standardized contracts for intravenous solutions have saved VA over $100 million. The official also told us that with the increased focus on standardization, VA will award more national contracts. He said that because some VISN contracting will be done simultaneously with national contracting, good communication will be necessary to avoid duplicated effort and dilution of VA’s buying power. VA has asked medical centers to include “escape” language in their contracts and agreements stating that a national contract will take precedence over local contracts.

NAC established a Value Incentive program to save money through the use of standardized commercial products. For example, its Medical Care Products Division recently awarded national contracts or blanket purchase agreements for products such as wheelchairs, needles and syringes, urinary drainage products, and anti-embolism stockings. These contracts designate VA-preferred sources and should be used before FSS contracts. VISN strategic plans generally do not discuss standardization beyond establishing pharmaceutical formularies. The VISN 8 (Bay Pines) plan, however, indicates that the network is considering establishing a formulary for prosthetics. Similarly, VISN 20 (Portland) plans to decrease unit costs of medical/surgical supplies through more standardization.

Most studies of formularies, which focus on a narrow range of drugs, a single hospital, and effects on pharmacy costs, generally confirm that a limited drug inventory reduces pharmacy costs.
A 1993 study, however, reported that while such limits reduced pharmacy charges, increases in other charges tended to offset any savings. The effectiveness of hospital formularies, according to this study, depends on several other factors, such as the extent of efforts to educate physicians about appropriate drug use, the ease with which physicians can obtain nonformulary drugs for their patients, and the overall emphasis the hospital places on cost containment. The study also raised concerns that limiting the number of drugs in a hospital formulary could compromise quality because patients may react differently to the same drug. In other words, a drug that effectively treats a condition in one patient may not so effectively treat the same condition in another patient. According to the study, even small differences in a drug’s effectiveness in a therapeutic category could be clinically important, both to achieve good outcomes and to avoid adverse reactions. A drug could be less cost-effective on average but provide a much more cost-effective therapy in specific cases.

High-technology equipment generally accounts for the largest share of hospitals’ capital expenditures, totaling about 7 percent of hospital spending in 1989. Although hospitals predominantly buy high-technology equipment using internal funds or gifts, many community hospitals are limiting their capital expenditures by (1) renting or leasing rather than buying such equipment when this is cost-effective and (2) buying remanufactured equipment. VA supports both approaches.

Before the introduction of prospective payment and the growth of managed care, hospitals generally did not compete on the basis of costs or charges. As a result, they passed the costs of the latest technology on to their patients or, more often, to their insurers. Essentially, hospitals could use newly acquired technologies to attract both physicians and patients. The average U.S. hospital spent nearly $2.8 million on medical equipment in fiscal year 1990, according to a survey of hospital chief executive officers. Hospitals tend to base procurement decisions on whether such new equipment will generate profits. For example, because of concerns that the number of lithotripters exceeds demand, hospital executives do not generally view such equipment as profitable.

Executives responding to a 1990 Hospitals survey identified leasing as one way to acquire most types of high-technology equipment. Among the equipment the executives identified as being leased were ultrasound (15 percent), automated laboratory (34 percent), radiography and fluoroscopy rooms (19 percent), cardiac catheterization laboratories (18 percent), and MRIs (22 percent). One significant change in rental/leasing arrangements is the adoption, for high-technology services, of the type of charge structure long used for photo and other copiers. Under these arrangements, hospitals pay a basic rental fee plus a charge for each test conducted on the equipment. Hospitals’ costs are thus essentially based on the extent to which they use the equipment: if their workloads decline, so do their expenditures for the rented equipment. Under straight rental/leasing arrangements, by contrast, hospitals pay the same amount regardless of workload fluctuations.

Another option for reducing the cost of high-technology equipment is purchasing refurbished equipment. Sales of refurbished imaging equipment were expected to reach $300 to $500 million in 1997, more than double 1992 sales.
Refurbished equipment costs from 25 to 65 percent less than new equipment, depending on its age and the work done. Hospitals, however, generally prefer new imaging equipment because the latest technology can produce better images, be more comfortable for patients, and require fewer staff to operate. Refurbished equipment is an option, however, when the latest technology is not clinically necessary, the technology is not changing rapidly, or the equipment can be rebuilt to take advantage of technological advances. For example, technology in X ray/fluoroscopy rooms is not rapidly advancing, and equipment can often be rebuilt to operate like new equipment. Refurbishing and adding digital technology to an 8-year-old X ray machine can bring it up to current standards.

Hospitals, however, often distrust refurbished equipment. The term “refurbished” might mean that the equipment underwent a complete retooling or that only cosmetic changes were made, a so-called “spray-and-pray” job. An estimated 500 to 600 firms, including equipment manufacturers such as General Electric and Picker International, refurbish equipment, but only about 24 firms perform more complex remanufacturing. The Food and Drug Administration (FDA) published regulations in June 1997 exempting the refurbishing industry from the level of review used for equipment manufacturers because refurbishers restore equipment to the original manufacturers’ specifications. Refurbishers are, however, subject to good manufacturing practice regulations. In addition, according to an FDA official, in December 1997, FDA published a Federal Register notice of its intention to review and, as necessary, revise or amend its compliance policy guides and regulatory requirements for the remarketing of used medical devices and for those who refurbish, recondition, rebuild, service, or remarket such devices. Written comments on the notice were due by March 23, 1998. In the meantime, individual hospitals and alliances must decide for themselves which refurbishers are reputable. For example, Columbia/HCA, in 1995, designated one company a preferred supplier of refurbished imaging equipment. A hospital alliance, however, reported that its member hospitals showed little interest in purchasing refurbished equipment without a good warranty and indemnification.

Like community hospitals, VA is seeking to share rather than purchase high-technology equipment or to purchase refurbished equipment. In addition, VA is emphasizing central procurement of high-technology equipment to obtain better prices. VA’s Prescription for Change calls for developing and implementing a major medical equipment acquisition methodology. It notes that a proposed methodology has to balance the need for facilities and networks to make local decisions with the need for VA’s central office to ensure that federal procurement laws and regulations are followed. Subsequently, VA developed a decentralized equipment assessment and planning program (DEAPP), a needs-driven plan similar to equipment planning programs used by the private sector. According to VA, DEAPP builds on the strength of existing medical center equipment committees and describes a consistent approach to identifying equipment needs. The methodology establishes a point-scoring system to assess needs on the basis of three categories—function, reliability/regulatory compliance, and economy.
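The report names DEAPP’s three scoring categories but not its point values or weights, so the sketch below is hypothetical throughout: the weights, scales, and example requests are invented solely to illustrate how a point-scoring system of this kind could rank competing equipment needs.

```python
# Illustrative sketch only: DEAPP scores equipment needs across three
# categories (function, reliability/regulatory compliance, economy),
# but the actual point values and weights are not given in the report.
# Everything numeric here is an assumption.

from dataclasses import dataclass

@dataclass
class EquipmentRequest:
    name: str
    function: int      # 0-10: clinical need / mission impact (assumed scale)
    reliability: int   # 0-10: failure history, regulatory compliance
    economy: int       # 0-10: operating-cost or revenue implications

WEIGHTS = {"function": 0.5, "reliability": 0.3, "economy": 0.2}  # assumed

def score(req: EquipmentRequest) -> float:
    """Weighted sum of the three category scores."""
    return (WEIGHTS["function"] * req.function
            + WEIGHTS["reliability"] * req.reliability
            + WEIGHTS["economy"] * req.economy)

requests = [
    EquipmentRequest("Replace aging CT scanner", function=9, reliability=8, economy=6),
    EquipmentRequest("Second ultrasound unit", function=5, reliability=4, economy=7),
]
for req in sorted(requests, key=score, reverse=True):
    print(f"{score(req):4.1f}  {req.name}")
```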
The Veterans Health Administration’s (VHA) criteria for potential realignment, noted in VA’s Prescription for Change, also provide guidance on how VISNs and medical centers should determine when to purchase high-tech equipment and services or obtain such services from other VA facilities or community providers. For example, the criteria suggest that VISNs consider both capital and operating costs for new high-tech or automated equipment in cost-effectiveness analyses. Our prior work found that the Albuquerque VA medical center underestimated the cost of providing lithotripsy services because it overestimated workload and set excessively long equipment depreciation periods.

NAC’s Medical Equipment Division solicits, awards, and administers FSS and direct delivery contracts for highly technical equipment and systems used in VA and other government medical facilities. The Direct Delivery program allows medical facilities to order high-tech equipment directly from the manufacturer at prices negotiated by NAC. Among the equipment available through the Direct Delivery program are computerized tomographic (CT) and MRI scanners, nuclear medicine systems, and X ray systems. In addition to procuring new equipment, NAC negotiates cost-per-use contracts to provide facilities an alternative to buying high-technology equipment when demand may not justify the purchase. Under such contracts, medical facilities pay only for the services they use. For example, they might pay for each periodic use of an MRI rather than purchase the equipment. Another option available through NAC is the purchase of refurbished equipment. NAC has awarded 12 contracts for the purchase of refurbished equipment.

Our review of VISN strategic plans identified several additional initiatives to improve the procurement of high-tech equipment and services:

- VISN 8 (Bay Pines) plans to coordinate its needs assessments for high-tech equipment with neighboring networks. The network also developed a methodology for rating and ranking medical facilities’ requests for high-tech equipment.
- VISN 12 (Chicago) reports that by approaching vendors as a network customer, it saved a substantial amount of money when recently buying CT scanners.
- VISN 18 (Phoenix) is evaluating the feasibility of purchasing remanufactured equipment, where appropriate, instead of new items.
- VISN 20 (Portland) has a shared equipment purchasing program under which each facility pays 20 percent of its allocated equipment budget for each item funded under the program. The planned equipment purchases under this program in fiscal year 1997 include three CT scanners and a cardiac catheterization imaging system.
- VISN 7 (Atlanta) plans to consolidate the procurement of standard radiology and fluoroscopy suites, saving money on the purchase price, on expendable supplies, and on service contracts.

Another method hospitals use to reduce capital expenditures is sharing high-technology equipment. To allow federal agencies’ resources to be used to maximum capacity and avoid unnecessary duplication and overlap of activities, federal agencies have been authorized for over 60 years to obtain goods or services through other federal agencies. Over the past 15 to 20 years, we have identified barriers to sharing, and VA and the Congress have addressed them. As these barriers have been addressed, VA sharing both with DOD and the private sector has increased. More recently, VA has placed greater emphasis on sharing services and equipment among VA facilities.
Health resources sharing, which involves the buying, selling, or trading of health care services, benefits both parties in the agreement and helps contain health care costs by better utilizing medical resources. For example, a hospital that buys an infrequently used diagnostic test from another hospital often pays less than it would to buy the needed equipment and provide the service directly. Similarly, a hospital that uses an expensive piece of equipment only 4 hours a day but has staff to operate it for 8 hours a day may generate additional revenues by selling its excess capacity to other providers. The following are examples of efforts to share high-technology equipment and services:

- Two hospitals in Missoula, Montana, agreed to share an MRI when neither hospital had sufficient demand to solely support the equipment. A microwave link relays test results between the two hospitals. In addition, the two hospitals established a mobile lithotripsy network to serve hospitals in western Montana.
- A PET scanner at the University of Texas Health Science Center in San Antonio was jointly funded by the University of Texas, VA, and DOD. The PET facility, the first in the DOD system, will become a national referral center for DOD patients and a regional referral center for VA patients. The PET equipment alone cost $5.3 million; the construction of a building to house the equipment cost millions more. Under an access agreement, the University of Texas will have 50 percent of the facility’s workload, with VA and DOD getting 25 percent each. The PET facility will be used for both research and patient care.
- The San Antonio VA medical center jointly purchased an MRI with the neighboring medical center and a linear accelerator with Southwest Texas Methodist Hospital.
- Ten Rhode Island hospitals formed a network to share the costs and services of four MRIs. The network bought four MRIs for the price of three, paying about $10 million for them, including the construction of one fixed site and pads for three mobile units. The network uses a centralized scheduling system, which also saves money. Because hospitals pay a fixed daily rate for MRI use regardless of volume, they have an incentive to image as many patients as possible during their allocated periods.
- Two hospital systems in the Sacramento area, which together operated six acute care hospitals and a psychiatric facility, joined forces to establish a $5.7 million PET scanner facility. A management firm under contract to the two systems will oversee the facility’s daily operations. Officials estimated eventual demand in the Sacramento area at about eight to nine PET scans a day, with initial demand at only four to six. Neither system, each of which operated over 800 acute care beds, had sufficient demand to justify purchase of a PET scanner.

A 1992 survey of hospital chief executive officers found that 38 percent reportedly had collaborated with other area health care providers to share technology. Forty-six percent said that they had collaborated on service development to avoid duplicating services. The following are examples of collaboration:
- Three hospitals and a home health agency in Roanoke, Virginia, created a shared, off-site intravenous admixture center to prepare intravenous solutions. Creating the admixture center was reported to have saved about $230,000 in personnel costs over a 2-year period (October 1992 to September 1994). In addition, about $207,000 was reportedly saved over the 2-year period for nonbillable supplies (for example, syringes, needles, and diluents). Other reported benefits included expanded availability of intravenous admixture services in several service areas, elimination of duplicated services, avoidance of the salary and benefits costs associated with hiring new personnel, improved quality control, and acquisition of state-of-the-art equipment.
- Three Boston hospitals combined their cancer programs to avoid duplication. The Dana-Farber Cancer Institute combined its adult patient care and research operations with those at Massachusetts General and Brigham and Women’s Hospital. Dana-Farber transferred its inpatient beds to Brigham and Women’s.

To use federal agencies’ excess resources to maximum capacity and avoid overlapping of activities, VA has, at our urging, long been authorized to share excess health care services with DOD. In addition, VA has, since 1966, been authorized to share specialized medical resources with nonfederal hospitals, clinics, and medical schools. Such sharing is permitted only if it does not adversely affect health care services to veterans. As an incentive to share excess health care resources, VA facilities providing services through sharing agreements may recover and retain the cost of the services from DOD or private-sector facilities. In fiscal year 1996, VA sold about $20.0 million in specialized medical resources to private-sector hospitals and about $29.3 million in health care services to the military health care system. During the same year, VA purchased about $23.6 million in health care services from DOD and about $60.0 million from private-sector hospitals. Services sold and purchased through sharing agreements included organ transplants, open-heart surgery, and specialized laboratory and radiology procedures.

In 1992, enactment of Public Law 102-405 gave VA specific authority to jointly acquire advanced technology. Specifically, it allows VA and a sharing partner to jointly hold title to medical equipment. In fiscal year 1995, VA spent about $900 million on the shared acquisition program. With the creation of VISNs, VA transferred responsibility for funding joint acquisitions to the networks. The Veterans’ Health Care Eligibility Reform Act of 1996 expanded both the types of providers VA may contract with and the types of services VA may contract for. In addition, it simplified the procedures for complying with federal procurement processes when contracting with commercial providers. (Ch. 1 more fully discusses these provisions.)

VA’s Prescription for Change calls for VISNs to increase sharing with both government and nongovernment health care providers. Our review of VISN strategic plans identified many efforts to expand sharing among VA facilities, between VA and other government facilities, between VA and TRICARE, and between VA and community providers:

- VISN 13’s (Minneapolis) strategic plan indicates that generating alternative revenues through sharing agreements with DOD, the Indian Health Service, and the Bureau of Prisons and serving as a TRICARE provider are key survival strategies.
- VISN 17 (Dallas) proposes to diversify its funding base by sharing with the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS), TRICARE, DOD, other federal agencies, and the private sector. In addition, it proposes a pilot project to provide services to Medicare and Medicaid recipients.
- VISN 3 (Bronx) wants to increase the income generated through sharing agreements by $500,000 per year, primarily through agreements with DOD and its medical school affiliates.

Many of the sharing efforts among VA facilities focused on developing telemedicine capability. The following examples illustrate VISN efforts to expand sharing among VA facilities:

- VISN 18 (Phoenix), in conjunction with DOD and the Texas Tech University Health Center, has purchased equipment to provide telemedicine capability at three network facilities.
- VISN 12 (Chicago) is developing a telemedicine strategic plan. The VISN’s telepathology initiative between the Milwaukee and Iron Mountain medical centers received the Vice President’s National Performance Review Hammer Award.
- VISN 8 (Bay Pines) plans to study the sharing of gamma camera capability and other imaging equipment networkwide.

VA has also expanded sharing efforts with private-sector providers. Following are some of these efforts:

- VISN 11 (Ann Arbor) proposes a pilot program under which VA facilities would provide specialty services, such as clinical laboratory services, to community hospitals in exchange for primary care services.
- VISN 9 (Nashville) anticipates establishing a network of mental health primary care providers through contracting.
- VISN 18 (Phoenix) has a sharing initiative for the Phoenix medical center to purchase a new MRI in conjunction with a local hospital.
- The Augusta medical center in VISN 7 (Atlanta) contracted with a 16-bed community residential care facility to provide care to veterans with spinal cord injuries. The residential care facility is used to provide temporary housing for spinal cord-injured veterans coming to the medical center for outpatient annual evaluations and may, in the future, be used as a permanent home for veterans who might otherwise enter nursing homes.

VISN plans also mention sharing agreements with the military health care system, including the following planned actions:

- VISN 7 (Atlanta) plans to implement a TRICARE contract that can be replicated VISN-wide.
- VISN 5 (Baltimore) has a sharing agreement with Walter Reed Army Medical Center to obtain obstetric/gynecological and urology services and with Bethesda Naval Hospital to obtain neurosurgery services.
- VISN 19’s (Denver) Cheyenne medical center has a sharing agreement with F.E. Warren Air Force Base that includes inpatient, outpatient, and special medical services.
- VISN 18 (Phoenix) shares extensively with DOD, including a joint venture at Albuquerque, where VA and Kirtland Air Force Base share inpatient and outpatient services at collocated facilities. In addition, VA’s El Paso health care center has a joint venture with the William Beaumont Army Medical Center, and VA’s Tucson medical center and DOD jointly established community-based outpatient clinics (CBOC) in Yuma and Sierra Vista.

Some VISN plans also detailed efforts to share with other federal, state, and local government facilities:

- VISN 19’s (Denver) Fort Harrison medical center has a sharing agreement with the Indian Health Service’s community hospital in Browning, Montana.
- VISN 18’s (Phoenix) Amarillo medical center is collaborating with the Pantex plant to establish an outpatient surgery unit to serve as a decontamination unit in a nuclear disaster.
- VISN 9 (Nashville) plans to contract with the Tennessee and Kentucky health departments to establish CBOCs in rural, underserved areas. It also plans to contract with its medical school affiliates (Vanderbilt, East Tennessee State, and Kentucky) to establish CBOCs in rural areas.
- VISN 6 (Durham) is developing an enhanced-use lease of the nursing home at the Salisbury medical center to permit the state of North Carolina to operate the nursing home as a state veterans’ home. Under the proposal, the state would place $5.2 million in a trust to be used by VA to benefit veterans in North Carolina. The VISN plan indicates that one use of the trust funds would be to establish additional CBOCs.

The changes in materials management in both the private sector and VA create a number of challenges and policy issues for the administration and the Congress. The administration faces challenges to ensure that (1) VA facilities use NAC and other purchasing groups to the extent practicable; (2) VA achieves the benefits anticipated from closing the supply depots and implementing just-in-time and stockless delivery systems; (3) VA appropriately balances cost containment and physician preferences in implementing its formularies; (4) VA facilities use cost-effective strategies to procure high-technology equipment; and (5) VA facilities both buy high-technology services from and sell such services to other health care providers, including community hospitals and other government agencies, whenever cost-effective. An important policy issue relates to the extent to which nonfederal facilities should be allowed to use FSSs.

Although NAC offers significant savings compared with local procurement, VA faces a challenge in ensuring that its hospitals obtain pharmaceuticals and medical supplies through NAC rather than through local procurement. Similarly, VA faces challenges in deciding when to establish regional acquisition centers, when to allow medical centers to conduct their own acquisition, and when medical centers should rely on NAC. For example, procurements by the regional acquisition centers should complement rather than duplicate those by NAC. Finally, VA faces challenges in ensuring that the prices it pays, whether through NAC, regional acquisition centers, or local procurement, are comparable with or better than prices available through private-sector purchasing groups and alliances.

One important policy issue facing the Congress and the administration is the extent to which nonfederal hospitals and health care facilities should be allowed to use FSSs. The Federal Acquisition Streamlining Act of 1994 (P.L. 103-355, sec. 1555) authorized creation of a cooperative purchasing program that would allow state, local, and Indian tribal governments and the Commonwealth of Puerto Rico to purchase pharmaceuticals and other goods and services from FSSs. Neither the nonfederal agencies nor the manufacturers would be required to participate; manufacturers, for example, could decline to make their products available to nonfederal entities. VA raised concerns that drug manufacturers would seek to increase schedule prices if a larger group of purchasers received access to those prices. As a result, the General Services Administration, which has overall responsibility for the FSSs, proposed that the pharmaceutical schedule be excluded from the cooperative purchasing program because including it could have the unintended effect of increasing federal agencies’ drug costs. The views of pharmaceutical manufacturers’ and public hospitals’ representatives differ on whether the FSS should be open to nonfederal providers.
Representatives of several drug manufacturers explained that their companies have been willing to give federal purchasers such low prices because they consider the FSS to be a special, limited category of pricing that affects no more than 2 to 3 percent of total dollars in domestic pharmaceutical sales. Some manufacturers, however, have expressed an unwillingness to offer the same low prices to an expanded group of government purchasers. They have also expressed an unwillingness to treat similarly different types of purchasers that they are used to treating as separate markets. The Public Hospital Pharmacy Coalition, on the other hand, favors opening the schedule to public hospitals. A Coalition analysis of the differences between FSS prices and the prices nine public hospitals paid for drugs showed that FSS prices were, on average, about 17 percent lower than the hospitals’ purchase prices for the 100 drugs on which the hospitals spent the most during fiscal year 1997. The Coalition contends that any adverse effects on FSS or other drug prices would be negligible and that state and local purchasers would gain access to many FSS prices lower than the drug prices they currently pay.

VA has been able to obtain significant discounts from drug manufacturers by seeking the most favored customer price; many FSS prices are more than 50 percent below nonfederal average manufacturer prices. The Congress, through the National Defense Authorization Act of 1996 (P.L. 104-106, sec. 4309), delayed opening the schedules pending our assessment of the possible impact. We reported in June 1997 that opening the pharmaceutical schedule to state and local purchasers could change the dynamics of negotiating FSS prices for both VA and drug manufacturers and that the effect on schedule prices ultimately depends on the outcome of negotiations between VA and the manufacturers. It is not possible to predict how schedule drug prices would change or what the ultimate effect on federal, state, and local purchasers would be. However, several factors could cause schedule prices to rise. In emergency supplemental appropriation legislation (P.L. 105-18), the Congress further delayed implementation of the cooperative purchasing program until adjournment of the first session of the 105th Congress.

The overall effectiveness of NAC’s efforts to implement just-in-time and stockless delivery depends largely on individual VA medical facilities. To assess the effectiveness of these efforts, however, VA would need information on the extent to which

- VA facilities are using just-in-time and stockless delivery systems,
- VA facilities have reduced inventories and personnel as they implement just-in-time and stockless inventory systems,
- VA has achieved the expected savings from closing its supply depots,
- just-in-time and stockless delivery has reduced local procurements, and
- facilities are using higher cost stockless or just-in-time delivery for items that could be procured through conventional 72-hour delivery.

VA faces several challenges in establishing VISN and national formularies. First, as previously discussed, some believe that formularies that limit the number and types of drugs a hospital stocks may reduce pharmacy costs but increase overall health care costs.
Because VA’s national and VISN formularies were recently established, no data are available yet to determine the extent to which they reduce the number of drugs hospitals stock or their effect on drug costs and overall health care costs. The effect of VA’s formularies on health care costs depends on many factors, such as the amount of flexibility they, and individual hospital directors, give physicians in prescribing drugs not on the formulary. If a physician can easily prescribe a nonformulary drug and obtain it within 8 hours through stockless delivery or local procurement, then limiting the number of drugs on the formularies may yield only limited savings. On the other hand, placing too many restrictions on physicians’ ability to prescribe nonformulary drugs might deny them the ability to tailor treatments to individual circumstances. Another uncertainty about the effect of VA’s formularies on costs is the extent to which the formularies will succeed in changing physicians’ prescribing habits. Finally, the formularies’ effectiveness in reducing procurement costs depends on how restrictive the formularies are.

Hospital directors face difficult challenges in choosing the most cost-effective strategies for procuring high-technology equipment. Procuring refurbished equipment offers significant cost savings, but little is known about the experiences—either positive or negative—with such equipment and individual refurbishers. Hospital directors often hesitate to buy such equipment because of concerns about its reliability. NAC has tried to address such concerns through its program to certify remanufacturers. Still, FDA’s limited oversight of refurbishers might hinder efforts to expand use of refurbished equipment. Another alternative to buying new equipment is transferring equipment within the VA system. With the planned integration and consolidation of VA hospitals, VA may have excess high-technology equipment. Hospital directors, however, may have the same concerns about the reliability of used equipment that they have about refurbished equipment.

Although the Under Secretary’s Criteria for Potential Realignment of VHA Facilities and Programs calls for VISNs to purchase services from community providers when such services are equal in quality and lower in price, VISN plans indicate sharing agreements only between VA and the military health care system. Without assessments of underused capacity in the surrounding community, VA hospitals may purchase high-technology equipment that adds to excess capacity. Similarly, where VA already has underused high-technology equipment, selling the excess capacity to both government and private-sector providers could generate additional revenues and help other health care facilities avoid procuring high-cost equipment that would probably increase excess capacity. For example, additional opportunities may exist for VA facilities to sell services to the Indian Health Service and the Bureau of Prisons. Similarly, VA might be able to provide high-technology services to support community health centers in exchange for primary care services for veterans. Another approach being pursued by some VA hospitals is jointly procuring high-technology equipment with teaching affiliates, DOD hospitals, or community hospitals. Finally, VA has increased its sharing with both nongovernment and DOD health care providers.
The following are among the challenges VA faces in implementing such agreements:

- VA must ensure that payments cover its cost of providing the services. This is particularly important if VA is maintaining capacity expressly to sell to CHAMPUS or TRICARE, in which case any deficit detracts from funds available for serving veterans.
- VA must ensure that sharing agreements do not detract from services available to veterans.

As excess capacity grows, community hospitals are seeking ways to retain current users and attract new ones. Among the ways they are marketing their services and building market share are redesigning the hospital environment to be more homelike, conducting market research and patient satisfaction surveys, advertising their services, contracting with managed care plans and preferred provider organizations (PPO), and establishing service delivery arrangements with physicians to increase referrals.

In general, VA has not marketed its hospital services as actively as the private sector. Its facilities generally lack the privacy and other amenities typical of community hospitals. In addition, VA does not pay for advertising to attract new users or enter into risk-sharing agreements with either managed care plans or physicians to build workload. VA is, however, beginning to change the way it markets its health care services; it is increasing its use of market research and patient satisfaction surveys and expanding efforts to sell its excess resources to DOD and others using its recently expanded contracting authority.

If VA decides to try to preserve certain VA hospitals by competing with private-sector hospitals, then it will probably have to expand its marketing efforts. Among the decisions that VA would face is whether to revise its policy against using paid advertising and—if it decides to advertise—whether to use comparative or negative advertising. Similarly, VA would also have to decide on the extent to which it should (1) market its services to nonveterans, (2) enter risk contracts with managed care plans and individual physicians, (3) invest resources in improving privacy and amenities in VA hospitals, and (4) grant admitting rights to non-VA physicians with practices near a VA hospital.

Community hospitals are increasingly marketing their services directly to patients. An important part of such marketing efforts is redesigning hospitals to provide a more homelike environment. Although VA has made some progress in improving the privacy and amenities offered by its hospitals, most VA hospitals cannot compete with community hospitals in these areas.

People often view the comfort and appearance of hospital rooms as a reflection of a hospital’s attitude and concern toward patients. Designing the physical environment is important because patients and their families tend to judge a hospital by their first impression. For example, hospitals that appear old-fashioned and run-down are not likely to instill confidence in the medical treatment. Unattractive facilities have also been reported to adversely affect patients’ psychological well-being. Patients already depressed about their health tend to become more depressed in a drab environment, slowing their recovery. Just as a drab hospital can adversely affect patients’ perceptions of the quality of care they receive and therefore their psychological well-being, a hospital designed to provide a bright, homelike feeling can instill confidence in the hospital and the quality of care it provides.
Among the approaches community hospitals have used to make their facilities more appealing to patients and visitors are color, artwork, plants, attractive and comfortable furnishings, and textured walls. Following are examples of such approaches:

Methodist Hospital in Omaha, Nebraska, renovated its hospital wards to create a more homelike environment. It created mini-nursing stations between every two to four patient rooms to locate nurses closer to the patients. It changed most of its semiprivate rooms to private rooms and added handicapped-accessible bathrooms. It added chairs that fold down into beds to patient rooms to accommodate family members. In addition, it established family lounges, nourishment stations with beverages and microwaves, and a deli-style cafeteria to accommodate visitors. The hospital remodeled patient rooms to include clocks, plant shelves, and erasable “white” boards for leaving messages. To create a homelike environment, designers used light wood with drapes and wall coverings in soothing colors.

The Samuels Planetree Model Hospital Unit, located in a 945-bed, not-for-profit tertiary care teaching hospital in New York City, remodeled patient rooms to include (1) patterned curtains, (2) soothing wall and hallway colors, (3) furniture that was both attractive and comfortable, (4) a special living room setting where patients and visitors could spend time together, and (5) a sleeper couch in the patient room where family members or friends could comfortably stay overnight. In addition to a television, the rooms include magazines and a videocassette recorder. The hospital also added a kitchen for use by patients’ family and friends.

Baptist Hospital in Miami, Florida, redesigned its emergency room to create the ambiance of a hotel lobby. Natural light was filtered, artificial lights were focused on architectural details, and high-tech machines were hidden behind panels or camouflaged by soft fabrics. Because the hospital converts about 25 percent of its emergency room visits into admissions, it believes the calming and attractive design of its emergency room contributed to an increase in hospital admissions.

Some community hospitals have focused on changes to attract certain types of patients, such as the affluent. For example, Christ Hospital in Oak Lawn, Illinois, decorated rooms with 18th century furniture and began offering specially prepared meals served on china to attract affluent patients. Similarly, Century City Hospital in Los Angeles designed rooms with rich wood patterns, faux marble, and plaster moldings. The hospital’s luxury accommodations also include imported china, silver flatware, and antique artwork.

Just as giving hospitals a more homelike appearance can influence patients’ overall perceptions, accommodating patients’ disabilities can increase patient satisfaction. For example, by lowering closet rods, hospitals can allow patients in wheelchairs to be more independent. Similarly, chairs designed to allow patients to rise without help can increase patients’ independence and reduce demands on nurses.

VA hospitals have a distinct competitive disadvantage compared with community hospitals regarding privacy and hospital amenities. VA hospitals are often outdated and lack amenities comparable with private-sector hospitals. Most VA hospitals are more than 30 years old, some more than 50 years old. Although VA has some hospitals that are relatively new or have been updated, many still have four- and six-bed rooms and communal toilets and showers.
In addition, many VA hospitals lack basic amenities, such as in-room televisions, that community hospitals have. Beyond amenities, older VA facilities face additional structural problems. For example, they often have inadequate space in clinics and nurses’ stations, poorly designed intensive care units, and inadequate ventilation systems. VA has, however, made progress in improving both privacy and amenities. For example, in response to recommendations from us and the Vice President’s National Performance Review, VA has installed bedside telephones in its hospitals.

The lack of privacy in VA hospitals can create particular problems for women veterans. In 1982 and again in 1992 and 1994, we reported on VA facilities’ problems in accommodating women veterans. At the time of our 1982 report, women could not be accommodated in 10 of the 16 domiciliaries and in some inpatient psychiatric programs. By 1992, VA had made significant progress in improving the availability of services for women veterans; by that time, for example, VA could accommodate women in all VA domiciliaries. Still, VA had problems in meeting women’s privacy needs. For example, men and women still shared communal showers at many facilities. At our urging, VA surveyed all of its facilities to identify needed construction projects to ensure women adequate privacy. Medical centers identified almost $1.5 billion worth of projects. By October 1993, 131 of the 336 planned projects had been completed or funded at an estimated cost of over $672 million. VA expected most of the remaining projects to be funded by the year 2000.

In a separate survey conducted in late 1993, VA facilities identified over $3.3 billion in construction and renovation projects they viewed as necessary to allow VA to effectively compete with the private sector. The Veterans Health Administration’s (VHA) Strategic Planning and Policy Office compiled a prioritized inventory of the requested projects, which ranged from improving patient amenities to new bed towers and numbered more than 1,400. Even this estimate, however, did not accurately portray the capital investment that would be needed to make VA competitive with community hospitals in the area of amenities, because the amount VA planned to spend on construction projects was capped at $3.3 billion.

VA has not proceeded, however, with most of the projects. Because of the uncertainty about the future missions of and demand for care in many VA hospitals, VA, at our urging, has limited its major construction projects primarily to expanding outpatient capacity rather than building or renovating hospital capacity. For example, in its fiscal year 1998 budget submission, VA sought $79.5 million for major construction and renovation of medical facilities, of which $35 million is for seismic corrections at the Memphis, Tennessee, medical center.

VA’s Prescription for Change does not address improving the appearance and amenities of VA hospitals to make them more attractive to potential customers. Nor do Veterans Integrated Service Network (VISN) strategic plans generally address improvements to hospital privacy and amenities. The VISN 6 (Durham) plan, however, discusses renovations to improve privacy particularly for women veterans and identifies planned projects at the Beckley, West Virginia, and Salisbury, North Carolina, medical centers.
It notes that the acute medical and surgical wards at the Salisbury hospital include one-, two-, three-, and four-bed rooms, but fewer than 10 percent of the rooms have toilets. The planned renovation would increase privacy and provide handicapped-accessible bathing and toilet facilities. The other VISN strategic plan that discusses amenities—VISN 18’s (Phoenix)—focuses on services rather than renovations. Specifically, this network plans to establish a guest services program that will bring hotel-like amenities and services to its hospitals.

To effectively market their services, hospitals need information on both current and potential users. For example, they need to know who is using their services and their motivation (convenience, reputation for quality, amenities, services, and the like) for using that particular hospital. Just as important, they need to know who is not using their services and why. They need information on the types of outreach efforts (newspaper, television, or direct mail) that will most effectively attract new users and retain current ones. Among the methods hospitals use to identify potential customers and retain current ones are patient satisfaction surveys and market research. VA, like community hospitals, is increasingly emphasizing customer satisfaction and market research to help keep current users and identify potential new ones.

With decreasing demand and increasing competition, hospitals no longer assume that users will choose the same hospital in the future. Hospitals therefore are increasingly focusing on ways to improve customer service. An important way to identify what patients like and dislike about a hospital experience is the patient satisfaction survey. Both community and VA hospitals use such surveys extensively.

Responses to patient satisfaction surveys tend to focus on interactions—either positive or negative—with hospital staff. The results can thus provide important information on needed changes in staff education and training to improve customer service. Such surveys can also identify other changes in the hospital that might attract users. For example, surveys that reveal frequent complaints about the food service, delays in answering call buttons, or drab decor can be used to target needed changes. Hospitals can conduct satisfaction surveys in several ways. For example, hospitals can call patients or send them a questionnaire after patients are discharged. Patient satisfaction surveys generally show a relationship between patient satisfaction and whether the patient will return to the same hospital.

One approach to improving patient satisfaction, developed by the Cleveland Clinic Foundation, is a Patient Callback Program. The hospital calls patients 3 weeks after discharge to identify and resolve any clinical or service concerns. The hospital found that the program creates perceptions of higher quality care and contributes to more effective clinical care by identifying patients’ concerns. Other reported benefits of the program are identifying and resolving past and current problems and increased patient satisfaction, leading to a greater likelihood of future use by the patients as well as their family and friends. Although the hospital initially limited the program to patients discharged from surgical services, it subsequently expanded the program to include discharges from medical bed sections and outpatient surgery.

Historically, veterans often complained about excessive waiting times for VA care and poor customer service.
For example, 14 focus group discussions we held with veterans nationwide during 1994 elicited frequent complaints about poor customer service, poor staff attitudes, excessive waiting times, and inadequate parking. Similarly, the Vice President’s National Performance Review made a series of recommendations in September 1993 intended to improve customer service throughout VA programs. Subsequently, VA established the National Customer Feedback Center and began initiatives to improve its standard of care and thereby increase patient satisfaction. In addition, VA published customer service standards for its medical facilities in October 1994.

VA’s Prescription for Change identifies several planned actions to assess and improve patient satisfaction. For example, it provides that VHA will annually assess compliance with the customer service standards through patient surveys. In addition, it provides for the development and implementation of corrective action plans for those areas in which customer feedback or other data indicate a need for service improvement. VA’s fiscal year 1998 budget submission identified two performance measures based on the customer service standards. The first performance measure is to increase the percentage of patients reporting their care as very good to excellent by 5 percent annually, starting at 60 percent for both inpatient and outpatient care. VA reported that it met the goal for inpatient care but satisfaction with outpatient care increased by only 1 percent. The second measure tracks VISN improvements regarding nine customer service standards. VA’s goal is for 95 percent of its networks to improve performance on two-thirds of the customer service standards. VA will gauge progress on the basis of results of surveys mailed to veterans nationwide receiving VA care. VA reported that in fiscal year 1996, 86 percent of VISNs showed improvement on two-thirds or more of the customer service standards. (A sketch of the arithmetic behind this measure appears below.)

Demographic information on users helps hospitals target marketing toward nonusers most likely to be influenced by such efforts. In other words, if a hospital has historically drawn users from a particular demographic group, such as the uninsured or elderly, it may want to target those demographic groups in its marketing efforts. In addition to identifying the demographics of the market area’s population, a hospital may want to elicit perceptions about the hospital that might hinder efforts to attract new users, identify added services that might attract users, and evaluate the competition. On the basis of such research results, hospitals develop marketing strategies that target advertising toward certain types of people or add services likely to generate new workload.

One of the actions discussed in VA’s Prescription for Change is the use of focus groups and customer surveys to evaluate services. According to the Prescription, VA conducted focus groups and telephone market surveys in the referral networks of over 75 medical centers during 1994 and 1995. The studies targeted current and former users as well as nonusers to get a better understanding of VA’s current and potential customers, their perceptions about VA, and their individual needs. VISNs appear to be further expanding the use of market research. Almost all VISN strategic plans indicate that market surveys have been planned or completed, often through contracts with public polling firms such as the Gallup Organization.
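To make the second performance measure concrete, the following minimal sketch computes whether each network improved on at least two-thirds of the nine customer service standards and what share of networks did so. The survey scores and network names are entirely hypothetical, and the scoring shown is an illustration of the arithmetic rather than VA's actual methodology.

```python
# Illustrative sketch of VA's second customer-service performance measure:
# a VISN "improves" if its score rises on at least two-thirds of the nine
# customer service standards; VA's goal is for 95 percent of VISNs to do so.
# All scores below are hypothetical (last year, this year) survey results.

from math import ceil

NUM_STANDARDS = 9
THRESHOLD = ceil(2 * NUM_STANDARDS / 3)  # two-thirds of 9 rounds up to 6

visn_scores = {
    "VISN A": [(72, 75), (60, 58), (81, 84), (70, 71), (65, 69),
               (77, 80), (55, 57), (62, 61), (90, 91)],
    "VISN B": [(68, 67), (74, 73), (59, 60), (66, 65), (71, 70),
               (63, 64), (80, 79), (58, 59), (69, 68)],
}

improving = 0
for visn, scores in visn_scores.items():
    improved = sum(1 for last, this in scores if this > last)
    meets = improved >= THRESHOLD
    improving += int(meets)
    print(f"{visn}: improved on {improved} of {NUM_STANDARDS} standards "
          f"-> {'meets' if meets else 'misses'} the two-thirds test")

pct = 100 * improving / len(visn_scores)
print(f"{pct:.0f}% of networks improving (goal: 95%; VA reported 86% in FY 1996)")
```

Under this reading, two-thirds of nine standards rounds up to six, so a network that improves on only five standards misses the test even though it improved on more than half.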
In the past, hospitals did not extensively advertise or otherwise market their services, relying instead on physicians to generate workload. As patients and their families have expanded their role in selecting hospitals, advertising has become an important marketing tool for community hospitals. Although VA directives do not permit the use of paid advertising to market health care services, VISNs and individual facilities may use a variety of other methods, such as newsletters and public service announcements, to inform veterans of their VA benefits.

Historically, patients typically relied on their family physician to determine where they went for hospital care. Those patients choosing their own hospital generally did not have a family physician and tended to use the hospital emergency room as a physician’s office. In other words, their choice of hospitals was more a matter of necessity than preference. In the mid-1980s, an estimated 40 percent of patients (or their family members) chose their own hospital. By the 1990s, however, one report estimated that 90 percent of hospital inpatients were playing an active role in choosing their own hospital, often on the basis of others’ opinions. A logical outgrowth of this trend has been increased hospital advertising directed toward patients and their families.

Advertising generally promotes the hospital’s services without criticizing other hospitals. Hospital advertisements have progressed from providing general information to advertising such distinct product lines as cardiology, psychiatry, and lithotripsy. Hospitals in major urban areas advertise more than hospitals in rural communities, and large hospitals advertise more than small ones.

To be effective, hospital marketing programs must specifically target the correct individuals with appropriate messages to convince them to become hospital customers. Advertising campaigns often target specific groups of potential users gleaned from market research. For example, hospitals may target their marketing toward people between the ages of 50 and 60 because this age group accounts for nearly 60 percent of all health care spending. Others may target the elderly because they represent the fastest growing segment of the population and are the most intensive users of health care services. A third target of marketing efforts is people interested in the wellness movement: some hospitals have developed programs to attract individuals interested in exercise, diet, and preventive health.

Unlike the private sector, VA is restricted in its ability to advertise its health care services to the general public. VA may prepare informational brochures and public service announcements, but it may not advertise in newspapers or on radio or television. VA regulations limit the use of paid advertising to personnel recruitment and certain loan guaranty activities; they specifically prohibit the purchase of advertising time and space to promote VA benefits and services. Although it may not generally use paid advertising, VA has express authority to conduct a Veterans Outreach Services program to ensure that all veterans are “provided timely and appropriate assistance to aid and encourage them in applying for and obtaining” VA benefits and services. According to two VA assistant general counsels, this authority requires VA to distribute full information to eligible beneficiaries on all services for which they might be eligible.
This, according to VA’s Office of General Counsel, permits VA to advertise VA medical services using exhibits, photographic displays, and other visual educational information and descriptive material. In addition, the assistant general counsels concluded that although VA’s authority to conduct outreach does not specifically authorize VA to give information to veterans comparing VA services with those of other providers, VA could determine that to give veterans full information, it might be necessary to give them comparative information. The two assistant general counsels, however, concluded that VA may not, under its legislative authority, conduct negative advertising or, under VA policy, use paid advertising to promote its health care services. The assistant general counsels recommended that the VA policy be revised to explicitly authorize use of paid advertising. As of August 1997, the policy had not been revised.

Neither VA’s Vision for Change nor its Prescription for Change contains specific initiatives about advertising and outreach. Many of the VISN strategic plans, however, discuss outreach efforts, including the following examples:

VISN 3 (Bronx) established a network marketing implementation group and conducts direct mail outreach to service-connected veterans.

VISN 4 (Pittsburgh) plans to mail promotional materials to and telephone targeted groups of nonusers.

VISN 16 (Jackson) indicates that its medical centers are encouraged to use customer-centered advertising, including patient newsletters and promotional videos, health information fairs, and good media relations to reach its marketing goals.

VISN 22 (Long Beach) plans to publish a quarterly newsletter and use public service announcements to inform veterans of their medical benefits.

Another method community hospitals use to maintain or broaden market share is contracts and risk-sharing arrangements with managed care plans. Until recently, VA had no authority to either routinely treat nonveterans or contract with managed care plans. As a result, few VISN strategic plans identify efforts to contract with managed care plans other than DOD’s TRICARE managed care plan.

Historically, community hospitals were fairly well insulated from risk. During the 1960s, both public and private insurance generally paid hospitals’ billed charges or actual costs. Although hospitals had a financial risk, they could raise prices to compensate. Hospitals assumed greater risk in the 1970s as insurers increasingly set limits on allowable charges or costs and developed utilization management tools to reduce unnecessary hospital use. It was not until Medicare developed a prospective payment system in 1983, however, that most hospitals had to assume direct risk for the cost of care provided to individual patients. That change, however, did not force hospitals to directly compete with each other for market share. The growth of managed care plans has increasingly put hospitals in direct competition with each other for dwindling inpatient workload. With about 40 percent of hospital beds empty on any given day, managed care plans have strong bargaining power with hospitals. If a hospital charges too much, an HMO will merely contract with another hospital. Managed care plans typically pay hospitals on a per case or per diem basis to encourage efficient delivery of services and discourage the provision of unnecessary services. In return, the HMO typically guarantees a certain workload.
Since the mid-1980s, the number of hospital contracts with HMOs has increased significantly. In 1985, only about one-third of community hospitals were providing care to HMO members. By 1990, the percentage of community hospitals contracting with HMOs or PPOs had increased to 63 percent. By 1994, three-fourths of community hospitals reported having such contracts.

Unlike community hospitals, VA hospitals generally do not have formal relationships with HMOs or other managed care plans to serve either veterans or nonveterans. To become a preferred provider under some plans, VA would be required to accept discounted payments. Historically, VA has not been allowed to negotiate discounted payments. Before enactment of the Balanced Budget Act of 1997, VA was required to recover its full cost of providing care; it was not authorized to negotiate on the basis of price. The Balanced Budget Act shifted VA’s basis for recovering costs from that of a reasonable cost to a reasonable charge, giving VA greater flexibility to negotiate on the basis of price. VA already had such flexibility when seeking to participate as a provider of care to nonveterans. VA may use its recently expanded contracting authority, which allows it to negotiate payments in the best interest of the government, to sell services to managed care plans.

HMOs and PPOs have little interest in VA’s providing services to their veteran policyholders. Because HMOs and PPOs typically pay only for care provided by hospitals that have negotiated provider agreements, they have no obligation to pay VA for care provided to their veteran policyholders as long as they do not accept VA facilities as participating providers. In other words, to the extent that managed care plans’ veteran policyholders obtain care from nonparticipating VA facilities, the plans’ profits will be higher. VA currently contracts with only one HMO—Dakota Care—in South Dakota but has been trying to negotiate with at least two other HMOs to become a participating provider. VA officials attribute their ability to obtain a provider agreement in South Dakota to the state’s rural nature and the limited number of providers.

VA has had somewhat more success negotiating provider agreements with point-of-service (POS) plans under its medical care cost recovery authority. Unlike HMOs and PPOs, which may be able to avoid all payments to VA (other than for emergency care) by excluding VA as a participating provider, POS plans have less to gain by not accepting VA as a participating provider. This is because a POS plan is obligated to pay providers for nonemergent care, including providers without a provider agreement. Since February 1995, VA’s General Counsel has reviewed and approved at least 32 provider agreements between VA facilities and POS plans; VA does not have readily available information on the total number of such contracts.

In the past, VA was not allowed to sell hospital services to managed care plans. It could sell any health care service to DOD and other federal agencies and specialized medical resources to hospitals, clinics, and medical schools. VA’s 1996 Prescription for Change recognized, however, the need to market specialized VA clinical services to other government health care providers and the private sector. It also noted that legislation was pending that would expand VA’s resource-sharing authority to allow VA to offer any health care resource to any public or private entity.
Because of VA’s limited sharing authority, its Prescription focused primarily on increasing sharing with DOD and other government health care programs. For example, VA plans to implement contracts with regional TRICARE contractors and providers as DOD expands TRICARE nationwide. VA’s Prescription notes that a standard provider agreement has been negotiated with Foundation Health Corporation for medical and surgical care. Contracts with TRICARE are mentioned in the strategic plans of VISNs 5 (Baltimore), 14 (Grand Island), and 16 (Jackson).

With the enactment of Public Law 104-262 later in 1996, VA received authority to sell hospital and other health care services to managed care plans and others. Because the legislation was passed after VA’s Prescription for Change was issued and during the development of the VISN strategic plans, these plans do not address expanding contracting with managed care plans.

Community hospitals are also seeking to maintain or broaden their market share by purchasing physician practices and securing a patient base through various risk-sharing arrangements with physicians. VA does not have similar risk-sharing arrangements with private practice physicians but is establishing community-based outpatient clinics (CBOC) to encourage more referrals to VA hospitals.

Physicians and hospitals see benefits from closer cooperation in an environment of higher financial risk. Hospitals see stronger linkages with primary care physicians as an important source of hospital admissions, particularly under managed care plans. They also see such linkages as allowing them to shift some financial risk to physicians. Individual and small group (physician) practices benefit because such arrangements allow them access to sophisticated information systems, medical technology, and personnel familiar with managed care contracting, marketing, and management without investing significant capital.

Many community hospitals seek to increase their market share by obtaining control of physicians, either by buying physician practices or by providing them substantial subsidies. One study noted that the percentage of physicians practicing as employees rose from 24.2 percent in 1983 to 42.3 percent in 1994. During that period, the percentage of self-employed physicians in group practices fell from 35.3 percent to 28.4 percent. The study notes that most such change occurred during the last 6 years of the 12-year period and was most prominent among young physicians. Increased earnings of employee physicians compared with those of self-employed physicians account for the shift.

The Prospective Payment Assessment Commission reported in 1996 that hospital-physician arrangements improve hospitals’ ability to secure managed care contracts, expanding market share and improving financial performance. The Commission noted that such arrangements subject both hospitals and physicians to increased financial risk but also create opportunities for greater profits.

Concerns have been raised about such hospital-physician arrangements. For example, some are concerned that these arrangements may violate antitrust laws. In addition, some believe that an inherent conflict exists in hospital-physician arrangements because the two principals have different strategic needs. Hospitals and physicians often have opposing views on such issues as working environment, decision-making goals, and working and management style.
Others have questioned whether the hospitals and other health care organizations acquiring physician practices are realizing a positive return on investment. One author notes that acquisitions often create excess capacity, raise costs, and reduce an organization’s ability to attract managed care contracts. Finally, concerns have been expressed about the methods used to value the physician practice and potential violations of the Medicare anti-kickback statute when physician practices continue to be affiliated with the buyers of those practices.

Unlike community hospitals, which rely primarily on private-practice physicians to generate hospital admissions, VA hospital admissions come mainly from within the VA system. Only salaried VA physicians may admit and treat patients at VA hospitals. Between 1994 and February 1998, however, VA opened 198 CBOCs, which have brought new users into the system. A CBOC is either a VA-operated clinic or a VA-funded or reimbursed private clinic, group practice, or single practitioner that is geographically distinct or separate from the parent facility. CBOCs provide only primary care and are expected to refer veterans to VA hospitals for inpatient and more specialized care. Unlike the hospital-physician arrangements emerging in the private sector, however, CBOC physicians have no financial incentive to refer patients to VA hospitals.

VA has established a goal of increasing the number of VA users by 20 percent over the next 5 years to use its excess capacity. VA will need to address many issues, however, concerning the likely effect of this strategy on the use of its excess hospital capacity. Although VA appears capable of attracting new users through its plans to establish additional CBOCs, this approach is not likely to generate much new demand for VA inpatient hospital care. This is because new users are most likely to choose their local hospital rather than a distant VA facility, and veterans’ use of VA hospital care decreases significantly at distances of over 5 miles from the hospital. In addition, to the extent that physicians at CBOCs have admitting privileges at nearby community hospitals, they will have little financial incentive to refer patients to a distant VA hospital. One option for increasing referrals from CBOC physicians would be to use physician incentive arrangements like those used by community hospitals.

If VA decides to try to preserve certain VA hospitals by competing with private-sector hospitals, then VA might want to target its marketing efforts toward veterans and nonveterans living near its hospitals. One approach might be to grant admitting privileges to private practice physicians. This might increase referrals of veterans who routinely obtain needed health care services from private practice physicians. Such physician referrals are an important source of admissions to community hospitals. VA’s 1992 National Survey of Veterans found that most of the veterans surveyed (74 percent) indicated that they did not use VA hospitals because their private practice physicians would most likely send them to a specific hospital.

Another approach for increasing hospital users would be for VA hospitals to become preferred providers under managed care plans. This might generate new hospital demand from both veterans and nonveterans who normally use other hospitals. The success of such efforts, however, would depend on many factors.
The perceptions, if not the reality, that VA facilities are outdated, lack the patient amenities of private-sector hospitals, or provide inadequate care and customer service will probably affect the decisions of both veterans and nonveterans to use VA hospitals. Because most patients have a choice of whether to go to a VA or community hospital, considerable uncertainty surrounds VA’s ability to attract more hospital users. In addition, managed care plans may be unwilling to contract with VA for hospital care because VA hospitals lack privacy and amenities comparable with what their members are accustomed to.

Spending money to improve privacy and amenities in VA hospitals to attract additional hospital users would, however, be risky. Even if VA hospitals were to provide modern accommodations with private and semiprivate rooms, veterans may still have negative perceptions of the VA system and its quality of care. VA attributes such perceptions to its inability to use paid advertising to change them. This creates difficult policy choices. For example, should VA change its policy on use of paid advertising to attract new users? If so, what restrictions should be placed on such advertising regarding comparative and negative advertising?

The ability of VA to attract new hospital users will also probably depend on the population VA targets. For veterans with limited resources and no health insurance, VA may be their only health care option. But VA wants to serve more higher-income, Medicare-eligible veterans. Most such veterans either have Medigap insurance as well as their Medicare coverage or are enrolled in Medicare HMOs. As a result, these veterans incur no or minimal cost sharing regardless of where they obtain care. Medicare-eligible veterans have used VA hospital care less and less since the mid-1980s. Other individuals VA appears to be targeting as new users are those with private health insurance. Veterans with private insurance are, however, less likely to use VA hospitals than are those without insurance. Therefore, considerable uncertainty exists about the ability of VA to increase use of VA hospitals by targeting marketing efforts toward insured and higher-income veterans.

Pressures resulting from prospective payment, capitation, and utilization review have forced community hospitals to more closely monitor and manage the treatment of individual patients to ensure the cost-effectiveness of their care. Specifically, hospitals are implementing clinical guidelines to help physicians and other caregivers follow cost-effective courses of treatment; developing outcome measures to enable hospitals to evaluate their performance and that of individual physicians; performing tests and other procedures on an outpatient basis before, or as an alternative to, admitting patients; and discharging patients sooner to alternative settings such as nursing home, home health, and hospice care.

VA’s Prescription for Change outlines ambitious plans for VA to expand the development and use of clinical guidelines, develop and implement outcome measures, and shift care from inpatient to outpatient and other more cost-effective settings. Veterans Integrated Service Network (VISN) strategic plans generally identify additional such efforts. Neither VA nor the private sector is sure about the extent to which clinical guidelines are being followed and to what effect. Similarly, both VA and the private sector are in the early stages of developing and using outcome measures.
Some of VA’s early efforts to develop performance measures, however, have focused more on process than outcomes and appear to conflict with other VA initiatives such as the Veterans Equitable Resource Allocation (VERA) system. Finally, VA faces challenges in ensuring that its facilities shift care to other treatment settings when cost-effective.

Both community and VA hospitals are increasing efforts to develop and implement clinical guidelines. Despite the rapid development of guidelines, little effort has been devoted to determining whether they achieve their intended effect. A clinical guideline explicitly states what is known and believed about the benefits, risks, and costs of a particular medical treatment intended to achieve a meaningful difference in patient outcomes. By identifying which services are beneficial (and which are not), guidelines can help patients get needed care and help them avoid the risks of unnecessary services. Guidelines can also support cost containment efforts by reducing unnecessary care and providing information on the benefits, risks, and costs of services. Such information can help patients, physicians, payers, and others make appropriate choices in an environment of limited resources. Without guidelines, attempts to contain health care costs may inadvertently result in patients being denied needed services.

The Physician Payment Review Commission classifies clinical guidelines as diagnostic, management, or service guidelines. Diagnostic guidelines establish procedures for evaluating patients with particular symptoms (such as chest pain) to effectively identify the source of the problem. Diagnostic guidelines can also be developed to guide providers in screening asymptomatic patients for early stages of disease. Management guidelines establish appropriate courses of treatment once a diagnosis has been made. Finally, service guidelines identify appropriate and inappropriate uses of particular diagnostic and therapeutic procedures (such as a chest X ray, colonoscopy, or administration of hepatitis vaccine). Service guidelines help in deciding whether a particular treatment or test should be administered.

A guideline’s effectiveness is evaluated by the frequency with which it produces the desired patient outcome. For example, a diabetes guideline might be evaluated on the basis of its success in regulating patients’ hemoglobin levels. Similarly, a hypertension guideline might be evaluated using a longer term (over time) outcome measure, such as reduced morbidity and mortality from coronary artery and renal disease and stroke.

The Congress created the Agency for Health Care Policy and Research (AHCPR) to sponsor clinical guidelines development and conduct research on medical outcomes to provide information needed for developing future guidelines. In March 1992, AHCPR issued the first of the 18 clinical guidelines it has developed—covering acute pain management and urinary incontinence in adults. Multidisciplinary panels knowledgeable about managing certain conditions developed the guidelines. AHCPR chose these areas for guideline development because they permitted consideration of the following factors: the adequacy of science-based evidence; the number of people whose care the guidelines would affect; the likelihood of the guidelines’ reducing variation in prevention, diagnosis, management, and outcomes of the condition; the specific needs of Medicare and Medicaid beneficiaries; and the costs of treating the condition to all payers, including patients.
Many others are also developing clinical guidelines. For example, in a 1991 report, we identified 27 medical specialty societies that had or were developing clinical guidelines. Similarly, a 1992 Physician Payment Review Commission report indicated that more than 1,000 guidelines, covering an array of topics, had been identified by the American Medical Association (AMA). The Commission reported that more than 50 organizations were developing clinical guidelines, including professional groups, payers, hospitals, academic medical centers, HMOs, government agencies, public and private researchers, and malpractice insurers.

Hospital executives view guidelines as important in shaping the future of health care. Asked what key factors will influence health care delivery in the years ahead, 41 percent of executives in a 1995 survey cited clinical guidelines and outcome measures, compared with just 22 percent of executives surveyed in 1990. Moreover, nearly two-thirds of the executives believed that costs can be successfully controlled by using monetary physician incentives if effective protocols and guidelines are developed.

VA, like AHCPR, AMA, and the specialty societies, is developing and implementing clinical guidelines. Using AHCPR and other guidelines as a starting point, VA developed national guidelines for rehabilitation of stroke patients and treatment of amputees in June 1996. Other nationally developed guidelines cover major depressive disorders, diabetes, psychoses, and ischemic heart disease. National guidelines are under development for anxiety, gout, degenerative joint disease, asthma, and prostate disease, among others.

In addition to these clinical guidelines, the Veterans Health Administration (VHA) has developed several pharmacological management guidelines. These guidelines, developed by VA’s Pharmacy Benefits Management Medical Advisory Panel, cover drug therapy for chronic obstructive pulmonary disease, human immunodeficiency virus/acquired immunodeficiency syndrome, hyperlipidemia, hypertension, and noninsulin-dependent diabetes. Guidelines are being developed for congestive heart failure, depression, peptic ulcers, glaucoma, benign prostatic hypertrophy, and degenerative joint disease.

In his 1996 Prescription for Change, the Under Secretary for Health called for the increased use of clinical guidelines to both measure and improve care in the VA system. In response to his earlier Vision for Change, the Office of Policy, Planning, and Performance and the Office of Patient Care Services began distributing existing guidelines and developing a uniform process for developing and implementing clinical guidelines.

Under the guidance issued in VA’s Prescription for Change, VISNs are expected to standardize clinical processes by using nationally developed clinical guidelines. In addition, the Prescription for Change indicated that VISNs are expected to delegate clinical care responsibility to nonphysician caregivers, when appropriate, through locally developed clinical pathways. VA’s Prescription also called for establishing minimal criteria for local development of clinical pathways and a mechanism for internetwork sharing of pathways. Subsequently, a clinical pathways networking group was established at the Quality Management Institute located at the Durham VA medical center. In 1995, the Institute published a directory of clinical pathways.
Under its 1997 Network Directors’ Performance Measures, networks were expected to implement, by September 30, 1997, 12 nationally developed networkwide clinical guidelines, 2 of which had to focus on special-emphasis populations. Our review of VISN strategic plans identified a wide range of actions to implement clinical guidelines and pathways:

VISN 1 (Boston) indicated that it had developed clinical guidelines for eight health conditions, including diabetes, pneumonia, and congestive heart failure.

VISNs 3 (Bronx) and 6 (Durham) indicated that they implemented five clinical guidelines in fiscal year 1996.

VISN 5 (Baltimore) indicated that it has implemented 34 national clinical practice guidelines and plans to develop clinical pathways for the network’s top five diagnoses during fiscal year 1997.

VISN 10 (Cincinnati) planned to complete development of 12 clinical pathways in fiscal year 1997, including pathways for stroke, acute and chronic back pain, major depressive disorders, and hypertension.

Despite the intense efforts to develop clinical guidelines, little is known about how extensively they are followed and their results. For example, our 1991 study noted that only a few evaluative studies had been done on the effects of clinical guidelines. Similarly, the Physician Payment Review Commission noted in its 1992 report that little was known about the validity of clinical guidelines and that questions existed about how many physicians use or even know about the availability of such guidelines. A 1993 study of 59 published evaluations of clinical guidelines, however, concluded that explicit guidelines improve clinical practice. All but 4 of the 59 evaluations found significant changes in care in the direction proposed by the guidelines. All but 2 of the 11 studies that evaluated patient outcomes found significant improvement.

A Canadian researcher noted in 1995 that the ultimate success of clinical guidelines depends on routine evaluation. He also noted, however, that compared with efforts to develop guidelines, little effort is devoted to their evaluation. Similarly, neither VA’s Prescription for Change nor individual VISN strategic plans focus on determining the extent of the use of the guidelines being developed and their effect on patient care. VA does, however, assess the extent to which nationally recognized clinical guidelines are followed in treating certain high-cost/high-volume conditions such as diabetes and hypertension.

VA’s draft strategic plan, developed under the Government Performance and Results Act, indicates that VA plans not only to expand the development and implementation of clinical guidelines, but also, in future years, to analyze how the guidelines are working to improve care processes and patient outcomes. According to the draft plan, by the year 2000, VA expects to be able to demonstrate improved processes resulting from six of its clinical guidelines. By the year 2002, it expects to be able to implement improvements in patient care or patient outcomes resulting from clinical guidelines.

The private sector, the Health Care Financing Administration (HCFA), and VA are developing outcome measures to compare the performance of hospitals, physicians, and health plans. Outcome measurement is the assessment of the results or consequences of a medical intervention. Typically, comparative analysis is used to determine whether a course of treatment or medical intervention had its intended effect.
For example, a patient’s condition at the end of a course of treatment is compared with his or her condition before treatment. Similarly, mortality rates for a specific surgical procedure may be compared with some baseline.

Whether comparing hospitals, health plans, or physicians, outcome measures must compare like procedures and like patients. For example, it is meaningless to compare mortality rates following a heart transplant with mortality rates following the setting of a broken arm. It is also important to compare similar hospitals and patients. For example, mortality rates for a teaching hospital that accepts the most complex surgery cases should not be compared with those of a small rural hospital performing only minor surgery. Similarly, mortality rates for 25-year-old males should not be compared with those for 75-year-old males to assess effectiveness of care. Severity determinations attempt to group diseases (and patients) of similar intensity to make outcome comparisons meaningful. For example, the rate of patient deaths following open-heart surgery may be compared with rates in other hospitals or with some national average. Similarly, patient satisfaction can be compared over time.

Attempts to assess hospitals’ performance using outcome measures have been under way for several decades. These assessments have been performed by federal and state inspectors, private accrediting agencies, and health care organizations. But specific results of these activities have generally been kept confidential. Other than informal communication or knowledge of an organization’s accreditation or license, corporate and individual health care purchasers had no method for determining which organization provided the best care.

Outcome measures are intended to (1) provide hospital managers, managed care plans, and physicians information on the relative effectiveness of their treatment programs, allowing them to focus changes on problem areas; (2) provide consumers with meaningful data to use in making health care choices on the basis of quality as well as price; and (3) allow regulators to identify and sanction physicians and hospitals providing substandard care.

Employers and consumers are increasingly seeking outcome data to help guide their selections of hospitals, health plans, and other providers. As employers negotiate for lower premiums or limit employees’ access to providers, they want to ensure that their employees still receive quality care. Individual consumers want assurance that they have access to quality providers and that they make the right health care decisions. As a result, both employers who purchase health care and individual consumers have demanded more information about quality.

The first widespread public disclosure of quality assessment using outcome measures took place in 1987, when HCFA reported on the observed and expected mortality rates in hospitals performing coronary artery bypass graft surgery. Although the data were intended to be used only by peer review organizations and hospitals for quality assessment purposes, the news media obtained the data through a Freedom of Information Act request and ranked hospitals from the best to worst. HCFA officials continued to release the data until 1993, when they stopped the practice, citing problems with the reliability of their methods for adjusting the data to account for the influence of patient characteristics on the outcomes.
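The observed-versus-expected comparison HCFA published rests on risk adjustment: a hospital’s actual deaths are compared with the number that national rates would predict for its particular mix of patients. The following is a minimal sketch of that idea using simple indirect standardization; the risk strata, rates, and caseload are hypothetical, and HCFA’s actual adjustment models were considerably more elaborate.

```python
# Sketch of an observed-vs-expected mortality comparison of the kind HCFA
# published in 1987: a hospital's deaths are compared with the number
# "expected" if its patients had died at national rates for their risk
# stratum (indirect standardization). All figures below are hypothetical.

# National mortality rates per risk stratum for a single procedure.
national_rates = {"low": 0.01, "medium": 0.04, "high": 0.12}

# One hospital's caseload: (stratum, observed deaths, number of cases).
hospital_cases = [("low", 2, 150), ("medium", 5, 80), ("high", 9, 40)]

observed = sum(deaths for _, deaths, _ in hospital_cases)
expected = sum(national_rates[stratum] * n for stratum, _, n in hospital_cases)

ratio = observed / expected  # above 1.0: worse than case mix would predict
print(f"observed deaths: {observed}, expected: {expected:.1f}, "
      f"O/E ratio: {ratio:.2f}")
```

The reliability concern HCFA cited is visible in the sketch: if the strata omit patient characteristics that influence mortality, the expected count is wrong, and a hospital treating sicker-than-average patients within each stratum is unfairly penalized.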
In the mid-1980s, health policy experts advised corporate purchasers that health care costs could be contained if purchasers considered both cost and quality of care information when they made their health care purchases. Early efforts by corporate purchasers, however, progressed slowly as providers and purchasers tried to agree on what performance indicators would be useful. Increasingly, state and federal officials advocated publication of quality of care results, believing that such data could help contain health care expenditures.

Both health plans and governmental entities have started to inform the public about the quality of care hospitals and health plans furnish. Summaries of hospital and health plan performance, often referred to as “report cards,” are being developed and published. For example, Pennsylvania, New York, and California have published report cards about hospital services provided in their states. In 1993, the Pennsylvania Health Care Cost Containment Council published the Hospital Effectiveness Report on care provided in 175 Pennsylvania hospitals for each of 53 diagnostic categories during 1991. For each of the 175 hospitals, this report provided data about the number of patients admitted, average severity of illness of those patients when admitted, percentage of patients aged 65 and older, actual and expected number of deaths and complications, average length of stay, and average charge per patient.

In addition, health plans, providers, and corporate purchasers working under the auspices of the National Committee for Quality Assurance (NCQA) have been developing and promoting the use of standardized performance measures. NCQA developed a consensus list of performance measures—the Health Plan Employer Data and Information Set (HEDIS)—that could be used by corporate purchasers to assess health plan value. Released in 1993, HEDIS 2.0 includes over 60 indicators that describe performance in five areas—quality, access and patient satisfaction, membership and utilization, finance, and health plan management activities. HEDIS 2.0 indicators measure health plans’ processes and structures. Developers did not include indicators that directly measure the longer term results or outcomes of care because they believed that (1) outcomes measurement was not yet an established field of study and (2) many outcomes are not meaningful until a lengthy period has elapsed after an intervention. HEDIS developers expect to include outcome measures in future revisions.

HEDIS 3.0, released in 1997, features measures that are less process-oriented. Working with the developers, HCFA was able to add the functional status of enrollees over age 65 as a measure of the effectiveness of care. This will be HEDIS’ first outcome measure, tracking functional status over time. HCFA now requires Medicare managed care plans to use HEDIS to facilitate comparison of plan performance and to hold plans accountable for the care they provide.

In addition, HCFA has other efforts under way to develop outcome measures. First, it is working with the Foundation for Accountability (FACCT) to develop quality outcome measures for depression, breast cancer, and diabetes. Second, HCFA and HHS’ Assistant Secretary for Planning and Evaluation recently contracted with the RAND Corporation, a nonprofit research organization, to refine and test three sets of outcome measures to be implemented in 1998.
Finally, HCFA plans to administer, through an independent vendor, a uniform Medicare beneficiary survey—the Consumer Assessment of Health Plans Study—to enrollees in Medicare managed care plans.

Although significant efforts to develop and implement outcome measures have taken place, a former HCFA Administrator said that getting potential users to use outcome measures has been more difficult than anticipated. In her view, however, it is only a matter of time before such measures are widely used.

Just as purchasers are slow to adopt outcome measures, so too are hospitals slow to use outcome measures to improve quality. A 1991 evaluation of 31 hospitals that were using the same outcomes measurement system found that the system alone does not create hospital accountability. Specifically, the evaluation found that 14 (45 percent) of the hospitals were using outcome measures solely to maintain the status quo. The goal of such hospitals was to be within the norm and hope that the changing marketplace would not affect them. The evaluation found that another 35 percent of the hospitals were using outcome measures to achieve financial success rather than financial survival. Administrators at these hospitals were using outcomes information internally to improve resource consumption and to ensure that quality remained within the norms. The evaluation found that only 20 percent of the hospitals made quality their top priority and presented outcomes information, including both clinical and cost data, to physicians for comparison.

VA, like HCFA and the private sector, is aggressively developing and using outcome measures. VA expects outcome measures to help it demonstrate the quality and value of its services, assess new and existing technologies, educate patients, improve provider-customer relations, and assess the effects of changes under way in the VA health care system.

Many of VA’s efforts are outlined in a March 1997 primer, Using Outcomes to Improve Health Care Decision Making, prepared by VA’s Management Decision and Research Center. The primer identifies several ways in which VA is using outcomes measurement. First, it is developing and using outcome measures as part of the performance contracts between VA central office and VISN directors. VA expects such performance measures to ultimately allow comparison of medical centers within VISNs, among VISNs, and with similar medical centers nationwide. VA also expects to develop performance measures that will permit comparisons of VA and non-VA providers. As part of this effort, VA is developing new methodologies to adjust for differences among patients to facilitate such comparisons. VA also expects to use the results of outcome measures in developing, revising, and distributing national clinical guidelines. The primer identifies a number of outcomes research projects being conducted by VA facilities that could be used for such purposes.
These efforts include

identifying key variables that could be used to assess the quality of care for patients with hypertension, diabetes, and chronic obstructive pulmonary disease;

studying cardiac catheterization, coronary angioplasty, and coronary artery bypass graft surgery to determine the appropriateness and necessity of their use;

examining the necessity of surgery for aneurysms that are not large or growing;

studying, in collaboration with the National Cancer Institute, the effects on patient health status and overall costs of alternative treatments for prostate cancer; and

studying how the organization and processes of a cardiac services unit are affecting outcomes in open-heart surgery.

VA also envisions use of outcome measures to establish performance monitoring systems and mechanisms for distributing best practices systemwide. Finally, VA plans to explore the use of report cards, especially for chronic diseases. VA is discussing with NCQA, which oversees the development and updating of HEDIS, the possibility of developing and applying measures that assess processes of care similar to those in HEDIS.

One of the outcome measures VA currently uses is its chronic disease index, intended to assess the quality of services provided to outpatients in high-volume/high-cost diagnostic categories such as diabetes and hypertension. The individual disease-specific measures in the index determine the degree to which VA is following nationally recognized clinical guidelines. VA’s first assessment using the chronic disease index, completed in 1996, found compliance with the guidelines to be 46 percent. VA established a goal to increase compliance to 95 percent in fiscal year 1998.

Changes in how hospitals are paid have created financial incentives for community hospitals to avoid admitting patients earlier or keeping them longer than medically necessary. Community hospitals have increasingly established separate outpatient departments and shifted many diagnostic and other tests to these departments to avoid unnecessary days of care for elective admissions. Similarly, hospitals often avoid admitting patients altogether by providing services in outpatient departments.

For many years, VA lagged behind the private sector in shifting care to outpatient settings, in part because its resource allocation methods rewarded hospitals for higher inpatient use. During the past several years, however, VA has aggressively sought to shift more care to alternative settings, as reflected in the 20-percent decrease in bed-days of care (BDOC) in fiscal year 1996.

The 1986 Annual Report of the Prospective Payment Assessment Commission noted that hospitals may shift services previously performed on an inpatient basis to alternative settings to maximize profits. It noted that hospitals can generate additional profits by providing care in outpatient settings such as outpatient clinics and surgery departments, emergi-centers, dialysis centers, and diagnostic centers. It also noted that this strategy is particularly attractive for vertically integrated hospitals because it allows them to not only reduce the length of inpatient stays, but also capture at least some of the revenues from a patient from preadmission through postdischarge care.

Outpatient departments in community hospitals have grown significantly since the 1983 introduction of Medicare’s prospective payment system and with the growth of managed care during the 1980s and 1990s.
After increasing slightly from 1975 to 1985, the number of visits to hospital outpatient departments nearly doubled between 1985 and 1995. During the same period, the number of days of inpatient hospital care steadily declined (see fig. 11.1). The Prospective Payment Assessment Commission reported that since fiscal year 1983, Medicare expenditures for outpatient services, excluding those for physician services, have risen an average of 14 percent annually, reaching $16.3 billion in fiscal year 1995. An estimated 70 percent of those payments were to hospitals for services provided in outpatient departments. The Commission noted that payment for hospital outpatient services under Medicare is fragmented and provides little incentive for providing care in the most efficient way. According to the Commission, most services are paid on the basis of costs or charges, meaning that lower costs or charges would mean correspondingly lower payments. One reaction of hospitals to Medicare's prospective payment system and other limits on hospital payments was to provide as many services to patients as possible on an outpatient basis before admission. This is because hospitals could obtain separate payment for every outpatient test and procedure; if they waited until after admitting the patient to perform the tests, they would have to absorb the costs of such services. Services shifted to outpatient settings include both testing and laboratory work and patient education. Medicare subsequently changed its rules for inpatient prospective payment to include tests and laboratory work performed within 72 hours of admission. Nevertheless, hospitals still find it more cost-effective to perform as many tests and as much patient education on an outpatient basis as possible. Following are programs established by community hospitals to increase preadmission testing and education: The Hospital Center at Orange, New Jersey, developed a preadmission testing program that includes laboratory work, electrocardiograms, social and rehabilitative service referrals, patient education, and a nursing assessment. The hospital uses specially trained registered nurses to conduct the preadmission testing. The testing program has reduced costs, increased patient and physician satisfaction, and decreased idle time for both patients and staff. Sarasota Memorial Hospital, in Florida, developed a pre-anesthesia collaborative care track to address problems in preparing patients for surgery. Under the program, the registered nurse anesthesia coordinator ensures that appropriate clinical data are available to avoid last-minute delays and cancellations of scheduled surgical procedures. Delays in performing surgery resulting from the unavailability of needed clinical data are costly to hospitals and distressing to patients. Just as prospective payment gave community hospitals incentives to perform tests and laboratory work on an outpatient basis before scheduled hospital admissions, managed care and preadmission certification programs encouraged hospitals to avoid altogether admitting patients who could safely be treated as outpatients. Community hospitals established outpatient surgery, chemotherapy, renal dialysis, and diagnostic testing programs to shift care to outpatient settings. According to the Health Insurance Association of America (HIAA), by 1993, 83 percent of community hospitals had outpatient departments providing outpatient surgery, examination, diagnosis, and treatment for a variety of nonemergency medical conditions.
HIAA notes that hospitals now offer more procedures and treatments on an outpatient basis than in the past and that occupancy in community hospitals continues to decrease in part because of this trend. In addition to traditional medical/surgical care, by 1993 community hospitals were offering a variety of other outpatient services, including substance abuse treatment, AIDS diagnosis and treatment, psychological services, and rehabilitation. (See fig. 11.2.) VA, without the financial incentives of community hospitals, was initially slow to shift care to outpatient settings. VA has long had authority to (1) conduct preadmission tests and provide postdischarge care on an outpatient basis (1960) and (2) provide outpatient care to any veteran if doing so would obviate the need for inpatient care (1973). Studies by the VA Inspector General, VA researchers, and us have found, however, that VA had not effectively used this authority to shift more care to outpatient settings. During the past several years, VA has increasingly focused on providing care in more cost-effective outpatient settings. VA hospitals, like community hospitals, have had steadily increasing outpatient workloads and correspondingly decreasing inpatient hospital days of care. Much of VA's increase in outpatient demand, however, can be attributed to eligibility expansions and opening of new clinics rather than shifting care from inpatient to outpatient settings. In its fiscal year 1975 annual report, VA noted the relationship between the "progressive expansion of legislation expanding the availability of outpatient services" and increased outpatient workload. Among the eligibility expansions occurring between 1960 and 1975 were actions to authorize (1) pre- and posthospital care for treating nonservice-connected conditions (1960) and (2) outpatient treatment to obviate the need for hospitalization (1973). Workload at VA outpatient clinics increased from about 2 million to 12 million visits during the 15-year period. Just as these eligibility expansions increased outpatient workload, VA efforts to improve the accessibility of VA care resulted in more demand for outpatient care. Between 1980 and 1995, the number of VA outpatient clinics increased from 222 to 565, including many mobile clinics that bring outpatient care closer to veterans in rural areas. Between 1980 and 1995, outpatient visits provided by VA clinics increased from 18 million to 27.5 million as inpatient days of care were steadily decreasing (see fig. 11.3). As previously discussed, as recently as the early 1990s, the VA Inspector General was reporting that much of the surgery performed in VA hospitals on an inpatient basis could have been performed on an outpatient basis if VA had established outpatient surgery capability at its medical centers. Similarly, studies by VA researchers consistently found that over 40 percent of the days of care in VA hospitals were non-acute. For example, a 1991 VA-funded study of admissions to VA acute medical and surgical bed sections estimated that 43 percent (±3 percent) of admissions were non-acute. Under the study, non-acute admissions in the 50 randomly selected VA hospitals ranged from 25 to 72 percent. The study found that the most common reason for non-acute medical admissions was that care could have been performed on an outpatient basis.
All of the surgical admissions determined to be non-acute were found to (1) be procedures that VA had determined could be done on an outpatient basis and (2) lack documented risk factors indicating a need for inpatient care. The study concluded that, on the basis of medical necessity, a large proportion of acute medical/surgical care in VA medical centers could be shifted to outpatient and long-term care settings. Among the reasons the study cited for the high rate of non-acute admissions were the absence of financial incentives for VA hospitals to shift care to outpatient settings; the absence of formal mechanisms, such as mandatory preadmission review, to control non-acute admissions; and VA's significant social mission that may influence use of inpatient resources. In a separate article, the same authors estimated that 48 percent (±2 percent) of the days of care at the 136 VA medical centers providing acute medical and surgical care were non-acute, ranging from 38 to 72 percent. Yet another study, this one published in 1993, found that (1) 47 percent of the admissions and 45 percent of the days of care in VA medical wards were non-acute and (2) 64 percent of surgical admissions and 34 percent of days of care in VA surgical wards were non-acute. The Under Secretary for Health's 1996 Prescription for Change identified a series of planned actions to shift more of VA's care from hospital to outpatient settings. These actions include increasing VA's outpatient capacity to accommodate the workload shifted from inpatient to outpatient settings; requiring each network to develop hospital admission, utilization, and length of stay criteria; requiring each network to implement preadmission screening programs; increasing outpatient surgery and diagnostic procedure capacity and utilization; and increasing temporary lodging and residential care capabilities to accommodate patients needing housing but not acute hospital care while being diagnosed or treated. Many of these actions, such as establishing preadmission screening programs, temporary lodging, and outpatient surgery programs, address the specific problems identified in the above-mentioned studies. VA established performance measures to gauge its progress in implementing some of the actions identified in its Prescription. For example, its fiscal year 1996 performance measures for VISN directors set the expectation that at least 50 percent of surgeries and other invasive procedures would be performed on an outpatient basis; to be considered exceptional, 65 percent or more of surgeries would have to be performed on an outpatient basis. All but eight VISNs met the minimum requirement for fully successful performance; VA determined that each of the eight had made statistically significant improvement. Another performance measure required VISNs to reduce their BDOC by 20 percent during fiscal year 1996. Although seven VISNs did not meet the goal, all had made statistically significant progress. Three VISNs—4 (Pittsburgh), 5 (Baltimore), and 7 (Atlanta)—reported 29-percent reductions in BDOC. Finally, the performance measures required all VISNs to establish, by September 30, 1996, (1) temporary lodging capacity to accommodate 10 patients, (2) a VISN-wide preadmission screening program, (3) admission and discharge planning programs, and (4) a telephone liaison program. VA reported that all VISNs have complied with these requirements.
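The arithmetic behind these ratings is simple threshold logic. The following minimal sketch applies the fiscal year 1996 thresholds described above to a hypothetical network; the workload figures are invented for illustration and are not VA data.

```python
# Score a hypothetical VISN against the FY 1996 thresholds described above:
# at least 50 percent of surgeries outpatient = fully successful,
# 65 percent or more = exceptional; BDOC reduction goal = 20 percent.

def surgery_rating(outpatient: int, total: int) -> str:
    share = outpatient / total
    if share >= 0.65:
        return f"exceptional ({share:.0%} outpatient)"
    if share >= 0.50:
        return f"fully successful ({share:.0%} outpatient)"
    return f"below minimum ({share:.0%} outpatient)"

def bdoc_reduction(baseline: int, current: int) -> float:
    """Fractional reduction in bed-days of care from a baseline year."""
    return (baseline - current) / baseline

# Hypothetical network: 6,200 of 11,000 procedures done on an outpatient
# basis; BDOC down from 1,450,000 to 1,120,000 days of care.
print(surgery_rating(6_200, 11_000))                 # fully successful (56% outpatient)
print(bdoc_reduction(1_450_000, 1_120_000) >= 0.20)  # True (about a 23-percent cut)
```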
In its 1997 performance measures, VA revised its performance measure for the percentage of surgeries and invasive procedures performed in an outpatient setting to link the goal to HCFA data. To be assessed as fully successful, a VISN must perform on an outpatient basis 65 percent of the surgeries and diagnostic procedures that HCFA will reimburse when performed in outpatient settings. In its assessment of mid-year performance for 1997, VA reported that 10 VISNs had met or exceeded the goal. All VISNs, however, had improved over fiscal year 1996. Just as prospective payment encouraged hospitals to reduce the length of patient stays by performing tests and patient education on an outpatient basis before admission, it provided incentives for community hospitals to discharge patients sooner to other care settings such as home health and nursing home care. The 1986 Annual Report of the Prospective Payment Assessment Commission noted that hospitals may shift services previously performed on an inpatient basis to alternative settings such as nursing homes, other long-term care facilities, and home health care. The Commission also noted that some cases requiring extra days of care may be transferred to another acute care hospital. It noted that such transfers may lower the quality of care and lead to higher costs. VA researchers found in a 1990 study that the number of transfers from community hospitals to VA hospitals increased substantially following implementation of the Medicare prospective payment system. The study suggested that some of the savings attributed to prospective payment may simply have been a shifting of costs from Medicare to the VA system. As previously discussed, hospitals are expanding into the post-acute care market. From 1991 to 1995, the number of Medicare-certified, hospital-based skilled nursing facilities increased 59 percent, hospital-based rehabilitation facilities increased 19 percent, and hospital-based home health agencies increased 52 percent. The number of free-standing facilities grew similarly (see fig. 11.4). The Prospective Payment Assessment Commission reported that Medicare payments for post-acute care skyrocketed between 1988 and 1994. In 1988, post-acute care accounted for only about 8 percent of Medicare part A payments; by 1994, it accounted for 25 percent. Although growth of post-acute payments has since slowed, payments to these providers are growing twice as fast as total part A spending. The Commission noted that many services now provided in outpatient and post-acute settings were previously provided in acute hospitals. It also noted, however, that several other factors, including medical advances and changing practice patterns, also affect the increased demand for post-acute services. We made similar observations in a December 1996 report. As discussed, VA hospitals lagged behind community hospitals in shifting patients from inpatient to post-acute care settings even though such settings have long been a part of the VA health care system. The Under Secretary for Health's Prescription for Change identifies a series of planned actions to discharge patients sooner to other, more cost-effective settings.
These actions include requiring each network to develop utilization and length of stay criteria; requiring each network to implement discharge planning programs; expanding VA's hospital-based home care program to include home intravenous therapy, total parenteral nutrition, and other services; expanding VA's continuum of clinical service settings so that patient care can be provided in the most cost-effective, clinically appropriate setting; and expanding use of noninstitutional long-term care when clinically appropriate and financially sound. None of VA's fiscal year 1996 or 1997 performance measures, however, specifically addressed increased use of post-acute care as an alternative to inpatient hospital care. Nor did VISN plans address the subject. Our work identified several issues and challenges concerning VA's efforts to monitor patient care and shift care to alternative settings. First, regarding efforts to develop and implement clinical guidelines, little information is available either in VA or the private sector on the extent to which physicians and other caregivers are following clinical guidelines and to what effect. In addition, VA's development and evaluation of clinical guidelines rely heavily on successful completion of efforts to improve its management information and financial management systems. Thus, VA, like the private sector, faces significant challenges in developing clinical guidelines, evaluating their effectiveness, and ensuring their appropriate use. The second major challenge is in developing and using outcome measures. For example, outcome measures will probably have little effect on hospital operations and individual provider performance without VA's effectively distributing the results of assessments and monitoring corrective actions. Similarly, the effectiveness of outcome measures will depend heavily on VA's ability to identify and develop meaningful ways to compare VA and other health care providers and programs as well as VA facilities and providers. VA must take care, however, to ensure that the results portrayed by outcome measures reflect differences in performance rather than differences in the populations studied. Effective case mix comparisons are difficult to develop. One of VA's initial efforts to develop outcome measures is its performance measures for VISN directors. These measures, however, are process oriented, such as the number of surgeries shifted to outpatient settings and the reduction of BDOC, rather than outcome oriented. As discussed in chapter 6, VA's 1997 performance measures present a view of VISN efficiency that conflicts with that portrayed by VERA. For example, VA began setting its goals for reducing BDOC on the basis of Medicare days of care per 1,000 beneficiaries by census division. Under this performance measure, four of the seven VISNs required to reduce BDOC by 20 percent or more had been identified by VERA as comparatively more efficient VISNs. The VISN required to reduce BDOC by the greatest percentage—39 percent—was determined under VERA to qualify for one of the larger increases in funding on the basis of its perceived efficiency. Similarly, another performance measure set VISN-specific goals for increasing the number of mandatory care category users. Generally, however, the VISNs needing the smallest increases in new users to meet their goals were those receiving the largest increases in funding under VERA.
Because of the apparent inconsistencies between the performance measures and VERA analyses, VA faces a significant challenge in determining (1) the underlying causes of variation in the rates of hospital use and (2) to what extent the variation can be reduced without jeopardizing patient care. An important part of such an assessment is developing baseline data on each VA facility. VA studies show that although all VA hospitals studied had significant amounts of non-acute care, the percentages varied from about 25 percent to over 70 percent. Baseline data on VA's surgery programs showing the percentages of surgeries that need to be done on an inpatient basis would provide a sound basis for establishing goals for reducing inpatient surgeries. Setting performance measures without such baseline data could require some facilities to jeopardize patient care to meet the goals, while other facilities could meet the goals and still provide extensive non-acute care. VA is gathering the types of baseline data that could be used to establish facility-specific performance measures through its preadmission screening program. A third challenge VA faces is in evaluating the effectiveness of VA initiatives, such as establishing temporary lodging in VA hospitals, in reducing costs. For example, little is known about how much it costs VA to provide temporary lodging because such initiatives are recent. VISNs and individual hospitals face significant challenges in determining when it would be less expensive to purchase care from a hospital or outpatient clinic closer to a veteran's home rather than pay for additional nights of lodging to provide care at a VA facility. Although the temporary lodging program should be less expensive than admitting a patient earlier or keeping a patient in a hospital longer than medically necessary, providing lodging in a hospital using VA hospital staff may not always be the lowest cost alternative. In arranging for temporary lodging, VA could explore many other alternatives, including using nearby commercial lodging and hiring an outside contractor to operate a temporary lodging unit. The use of temporary lodging also raises several policy issues. For example, to what extent should veterans, rather than the government, be expected to pay for temporary lodging incident to direct patient care? To the extent that providing free lodging encourages longer and more frequent stays, it could offset the savings achieved by using fewer hospital beds. Similarly, to what extent should temporary lodging be made available to family members? Finally, should temporary lodging be provided to veterans traveling significant distances for outpatient services? Neither performance measures nor VISN strategic plans focus on efforts to shift care to post-acute settings when medically appropriate. The effectiveness of such actions depends on many factors such as the adequacy of discharge planning efforts, efforts to ensure that patients are not discharged before medically appropriate, the extent to which patients receive appropriate follow-on care, and the extent to which the cost of home health or other post-acute care services exceeds the cost that would have been incurred through continued institutional care. The overall effect of VA efforts depends as well on the extent to which VA facilities shift the costs of post-acute care to other payers such as the Medicare home health program.
To the extent that such shifts occur, higher costs under Medicare and Medicaid will offset any savings VA achieves through efficiencies. Teaching hospitals’ medical education missions have changed significantly. Until recently, both nonfederal and VA teaching hospitals had steadily increased their use of medical residents partly because residents were a lower cost labor source. Because of increasing concern that the growing number of medical residents contributes to the oversupply of physicians and increased health care costs, the Congress has provided financial incentives to hospitals to reduce the number of residency positions. Both nonfederal and VA teaching hospitals are also changing the focus of their residency programs to increase the number of primary care residencies in response to the growth of managed care. Finally, nonfederal teaching hospitals are offering significant discounts to managed care plans; VA hospitals, however, are not. Several issues and challenges surround VA’s future role in medical education. For example, should financial incentives similar to those provided to non-VA teaching hospitals through the Balanced Budget Act of 1997 be provided to VA to encourage reductions in residency positions? Furthermore, how does the declining demand for VA hospital care affect the viability of the medical education program? Finally, VA is likely to find it increasingly difficult to assert its independence from its affiliated medical schools as tough decisions about the future of hospitals and residency programs are debated. Graduate medical education (GME) refers to the period following the completion of medical school in which physicians, as residents, receive further training in fields such as family practice, general surgery, or anesthesiology. GME takes place in federal (including VA) and nonfederal teaching hospitals. Although over 1,000 U.S. hospitals had at least one teaching program in 1996, about 80 percent of residents train in large tertiary care hospitals belonging to the Council of Teaching Hospitals. In 1996, the Council had about 400 member hospitals. Nonfederal teaching hospitals pay for GME through a combination of inpatient revenues (both hospital payments and faculty physician fees) and a complex mix of federal and state government funds. The federal government is the largest single source of financing for GME through the Medicare program and through its support of residencies in VA and DOD hospitals. From its inception in 1965, the Medicare program has reimbursed teaching hospitals for its share of the costs of training interns and residents. When Medicare adopted its prospective payment system in 1983, it developed new policies. Medicare now recognizes the costs of GME under two mechanisms: direct medical education payments and an indirect medical education adjustment to prospective payment rates. GME’s direct costs include residents’ stipends, supervising faculty salaries, administrative expenses, and institutional overhead allocated to residency programs. Hospitals receive additional payments to cover Medicare’s share of these direct costs. In addition to payments for direct costs, teaching hospitals receive an indirect hospital-specific percentage adjustment (based on the ratio of interns and residents per bed) to their total diagnosis-related group payments to compensate them for their relatively higher costs. 
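The structure of that indirect adjustment can be made concrete with a short sketch. The calculation below is illustrative only: the general shape of the formula, a multiplier applied to ((1 + resident-to-bed ratio)^0.405 - 1), reflects how the adjustment has been computed under Medicare, but the multiplier has been changed by the Congress over time, and the hospital figures shown are hypothetical.

```python
# Stylized sketch of Medicare's indirect medical education (IME) adjustment.
# The 1.89 multiplier and all hospital figures are illustrative assumptions,
# not current law; only the general shape of the formula is the point here.

def ime_adjustment(residents: float, beds: float, multiplier: float = 1.89) -> float:
    """Percentage add-on applied to a teaching hospital's DRG payments."""
    resident_to_bed_ratio = residents / beds
    return multiplier * ((1 + resident_to_bed_ratio) ** 0.405 - 1)

# Hypothetical teaching hospital: 200 residents, 500 beds, and $50 million
# in annual diagnosis-related group (DRG) payments.
factor = ime_adjustment(residents=200, beds=500)
drg_payments = 50_000_000
print(f"IME add-on: {factor:.1%}")                      # about 27.6%
print(f"Extra payment: ${drg_payments * factor:,.0f}")  # about $13.8 million
```

Because the add-on grows with the resident-to-bed ratio, each additional resident raises payments across a hospital's entire Medicare inpatient caseload, not just the costs of that resident's training.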
The adjustment has been a critical source of revenue for teaching hospitals, particularly those serving large low-income and uninsured populations. In fiscal year 1991, Medicare paid approximately $1.5 billion in direct GME payments and $2.9 billion in indirect adjustments to prospective payment rates. In fiscal year 1997, it is estimated that Medicare paid approximately $2.5 billion in direct GME payments and $4.6 billion in indirect adjustments to prospective payment rates. Medical education is one of VA's four core missions. Since 1946, VA facilities have been authorized to enter into agreements with medical schools and their teaching hospitals. Under these agreements, VA hospitals provide training for medical residents and students and appoint medical school faculty as VA staff physicians to supervise resident education and patient care. Over half of the nation's physicians received some of their training through VA programs. In 1997, 130 VA facilities had affiliation agreements with one or more medical schools; 105 medical schools had affiliation agreements with the Veterans Health Administration (VHA). More than 34,000 medical residents and 21,000 medical students receive some of their training in VA facilities every year. VHA supports about 8,900 residency positions, about 8.7 percent of those in the United States. Almost one-third of U.S. residents rotate through VA in any given year. In addition to training medical residents, VA is affiliated with schools of dentistry, optometry, podiatry, nursing, and other associated health professions. All told, VA was affiliated with over 1,000 educational institutions and provided all or some of the training received by about 107,000 medical and other students in fiscal year 1996. About 95 percent of the associated health students being trained in VA facilities receive no compensation. Table 12.1 shows the number of residents and students rotating through VA and the number of paid VA positions in fiscal year 1996. Teaching hospitals, including those operated by VA, save money by using medical residents and other students as a lower cost supply of physicians, physician assistants, and nurse practitioners. For many years, both Medicare's hospital reimbursement policies and VA's stipends encouraged hospitals to expand the use of medical residents. Some health policy experts believe, however, that teaching hospitals' demands for medical residents are contributing to an oversupply of physicians and to higher health care costs. As a result, both Medicare and the VA health care system have acted to reduce the number of residency positions. Reducing the number of medical residents by substituting other health care personnel, however, is estimated to increase teaching hospitals' operating costs significantly. Medical residents long represented a low-cost source of labor for teaching hospitals because (1) residents work long hours in exchange for relatively small stipends to offset their living costs and (2) Medicare and other programs' reimbursement methods provide financial incentives to use residents to perform functions that could be done by physician assistants or nurse practitioners. Medicare financing for direct GME creates an incentive for nonfederal hospitals to employ residents instead of highly skilled nonphysician practitioners or fully trained salaried physicians. Residents are expected to work long hours in exchange for a stipend that can largely be passed on to Medicare through direct GME payments.
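The hourly arithmetic behind this incentive is straightforward. The sketch below uses the VA stipend and salary figures cited later in this chapter ($34,000 a year for a resident working roughly 60 hours a week, versus $100,000 for a physician and $60,000 for a nonphysician provider working 40-hour weeks); the 48 working weeks per year is an assumed round figure for illustration.

```python
# Rough hourly labor-cost comparison using the salary and hours figures
# cited in this chapter; weeks per year is an assumption for illustration.

WEEKS_PER_YEAR = 48  # assumed working weeks, net of leave

staff = {
    "resident": (34_000, 60),  # (annual cost in dollars, hours per week)
    "physician": (100_000, 40),
    "nonphysician provider": (60_000, 40),
}

for role, (annual_cost, weekly_hours) in staff.items():
    hourly = annual_cost / (weekly_hours * WEEKS_PER_YEAR)
    print(f"{role}: ${hourly:,.2f} per hour")

# Approximate output: resident $11.81, physician $52.08,
# nonphysician provider $31.25 per hour.
```

On these stylized assumptions, a physician costs more than four times as much per hour as a resident, and a nonphysician provider well over twice as much, which is consistent with the Residency Realignment Review Committee estimate, cited later in this chapter, that replacing a resident would cost VA almost three times as much.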
A nurse practitioner or physician assistant, in contrast, may be able to provide comparable service on a medical ward or in the operating room but commands a higher salary, works fewer hours, and does not generate additional Medicare payments. Medicare makes both direct and indirect payments to hospitals on the basis of the number of residents they employ, making Medicare GME, in effect, an uncapped entitlement. In other words, Medicare pays hospitals for as many residents as they employ. The Congressional Budget Office estimated that Medicare paid teaching hospitals an average of $88,000 per resident in 1993. By increasing residents, hospitals may raise their total Medicare teaching payments by substantially more than the direct salary and benefit costs they incur. Residents also provide patient care services to hospitals; therefore, hospitals have a strong incentive to hire more of them. Like the private sector, VA benefits financially because its residents represent a low-cost source of labor. For example, VA estimates that it pays residents stipends of $34,000 a year compared with $100,000 for a physician and $60,000 for a nonphysician provider. The difference in cost per hour, however, is even greater because residents typically work 60 hours weekly compared with 40 hours for physicians and other providers. Unlike community hospitals, however, VA hospitals do not receive additional payments from Medicare to support their GME programs. VA does, however, through the Veterans Equitable Resource Allocation (VERA) system, allocate additional funds to its Veterans Integrated Service Networks (VISN) to compensate them for the higher costs of their medical education missions. Due in part to Medicare's funding of the costs of GME programs, the total number of medical residents more than doubled between 1965 and 1990, from 31,898 to 82,902. That growth has continued in the 1990s. The Association of American Medical Colleges reported 103,640 residents in the 1994-95 academic year. (See fig. 12.1.) VA hospitals also increased their use of medical residents. Between 1975 and 1995, the number of VA part-time residents nearly quadrupled, from 5,329 to 19,872. (See fig. 12.2.) Although the number of part-time residents rotating through VA has increased nearly 80 percent since 1987, VA's Residency Realignment Review Committee reported that the number of VA resident positions increased only 2.9 percent between 1987 and 1995. An official from VA's Office of Academic Affairs did not know the reason for the differences between the number of part-time VA residents at the end of the fiscal year and the number of paid residency positions. He suggested that some residents may not have been removed from the rolls at the end of their VA tour of duty. Teaching hospitals' demands for medical residents, according to some health policy experts, may have contributed to an oversupply of physicians. This oversupply is, in their view, a major factor in rising health care costs. The number of active U.S. physicians more than doubled between 1970 and 1993. (See fig. 12.3.) Active physicians per 10,000 population increased from 15.7 to 25.1 during that period. The Pew Health Professions Commission recommended dramatic reductions in the training of new doctors, including a reduction of 20 to 25 percent in the number of students entering U.S. medical schools.
Similarly, the Council on Graduate Medical Education recommended an overall reduction in the nation's physician supply and the number of physicians in training. Eliminating residency positions, however, would result in losing not only the direct medical education payment, but also the indirect medical education payment, creating a major financial loss for teaching hospitals. Reducing the number of medical residents would also force teaching hospitals to seek alternative professionals to provide the care that resident physicians now provide. Although some substitution is occurring now, teaching hospitals are concerned about the potential cost of increased substitution as the number of residents declines. Using nonphysician providers would mean employing a variety of providers at a higher cost than teaching hospitals have had to incur in the past by using medical residents. An analysis of the potential cost of replacing residents with midlevel practitioners in New York City has highlighted the significant amount of money teaching hospitals have been able to save by using residents in the past. In New York state, residents' salaries were fully covered by federal and state direct medical education payments. Teaching hospitals in the state received $2.9 billion in GME payments in 1995—roughly $188,000 per resident. Hospitals losing residency positions would thus not only lose those payments, but would also incur new costs to hire additional physicians, nurse practitioners, and physician assistants to perform their duties. The analysis estimated that, on average, hospitals would need to hire three midlevel practitioners to replace each resident. The salary costs of replacing all residents with midlevel practitioners were estimated to range from $242 million to $600 million. In a survey of teaching hospitals, 178 (62 percent) of the responding medical directors reported that they already used substitution involving physician assistants and nurse practitioners to some extent at their hospitals. They reported that they used substitution in a wide range of services, including surgery, primary care, and medical specialties. Almost all survey respondents reported that those involved, including physicians, nurses, residents, and patients, were satisfied with the substitution. Recent actions by the Congress, HCFA, and VA indicate that the number of residents will probably decline in the future. For example, the Balanced Budget Act of 1997 froze the number of residency positions Medicare will fund at a hospital at the number of full-time equivalent (FTE) interns and residents in the hospital in 1996. New York hospitals sought and received from HCFA a program that rewards them for reducing residency positions. In February 1997, HCFA approved a demonstration project proposed by the Greater New York Hospital Association. Under the project, HCFA will provide incentive payments totaling $400 million over the next 5 years to 42 New York teaching hospitals. The goal of the project is to reduce the number of residents trained by the 42 hospitals by up to 25 percent over the 5-year period. The Balanced Budget Act of 1997 authorized similar incentive payments to hospitals in other states that participate in plans for voluntarily reducing the number of resident positions. Essentially, participating hospitals may receive "hold harmless" payments if they agree to reduce the number of residents by specified amounts.
For example, a hospital with more than 750 residents would qualify for the incentive payments if it submitted to HCFA an acceptable plan to reduce the number of residents by 20 percent over a 5-year period. The hold harmless payments would decline over the 5-year period. In late 1995, VA established a Residency Realignment Review Committee to make recommendations for possibly realigning VA’s residency programs to ensure that VA’s GME program meets VA’s current and future needs. In its May 1996 report, the Committee recommended eliminating 250 residency positions in disciplines other than primary care and reallocating 750 positions from specialties to primary care. The Committee estimated that it would cost VA almost three times as much to replace a resident with a physician or nonphysician provider. The VISN strategic plans, however, contain little information on implementing the Committee’s recommendations. The growth in the number of medical residents between 1989 and 1995 can be attributed to increasing numbers of residency positions established for graduates of foreign medical schools. Residency positions for graduates of U.S. medical schools have actually declined since 1989. By 1996, graduates of foreign medical schools accounted for over one-fourth of residency positions. VA, like community hospitals, uses foreign medical school graduates extensively. Between 1989 and 1995, the number of foreign medical school graduates in U.S. residency training programs more than doubled, from 12,259 to 24,982. During the same period, the number of U.S. medical school graduates in residency training declined slightly, from 73,071 to 71,053. To reduce the number of physicians, some policymakers are calling for using fewer foreign-trained physicians and for restrictions on their training. Efforts to restrict the arrival and impede the permanent residence of foreign-trained physicians are under way. For example, the Pew Health Professions Commission and the Institute of Medicine issued high-profile statements about reshaping the physician workforce by using fewer foreign-trained physicians. More recently, the Association of American Medical Colleges, the American Medical Association (AMA), and other national professional associations issued a consensus statement calling for restrictions on training. Others, however, caution that limiting the number of foreign medical school residency positions could reduce services in medically underserved areas. Although the nation has a surplus of physicians, some communities have had a chronic physician shortage. Hospitals in such communities have used residency programs and the associated Medicare GME funds to attract and pay resident physicians for essentially providing clinical care. In some cases, hospitals in poor communities do not have teaching programs attractive enough to U.S. medical students. Therefore, the communities have hired foreign medical graduates willing to provide care to uninsured individuals. In such instances, Medicare GME payments have helped communities address significant physician shortages. Some have expressed concern that limiting Medicare GME payments or the use of foreign medical residents might adversely affect the ability of such communities to meet their health care needs. VA officials estimate that 18 to 20 percent of its residents graduate from foreign medical schools. 
According to VA officials, VA does not have a specific policy on using foreign medical school graduates; it tries to recruit the best candidates regardless of where they attended school. VA officials also indicated, however, that VA hires foreign medical graduates because the supply of U.S. medical school graduates does not meet its demand for first-year resident positions. U.S. medical schools supply only about 100 graduates for every 140 jobs VA has available. The increased emphasis on managed care has fostered an increased demand for primary care physicians. Meanwhile, as more of the diagnosis and care are provided in outpatient settings, teaching hospitals have increasingly recognized that physicians need to obtain some of their training in outpatient care settings rather than hospitals. Recent changes in Medicare payment policies have encouraged increased training of primary care residents and authorized training in outpatient settings. VA is both increasing the percentage of its residency positions in primary care and providing more of its training in outpatient care sites. The growth of HMOs and other managed care plans has generated increased demand for physicians trained in primary care. As in private-sector managed care plans, VA's efforts to restructure its health care system are increasing demand for primary care physicians. Like the private sector, VA has too many specialists and too few primary care physicians. The director of one VA medical center told us that VA needs a ratio of 60 percent generalists to 40 percent specialists but has a ratio of about 20 percent generalists to 80 percent specialists. Consistent with the increased demand for primary care physicians, one recent study reported that the number of jobs advertised for physician specialists has declined considerably over the past 5 years with the exception of pediatric specialists. The number of jobs advertised for internal medicine specialists declined most dramatically—by 75 percent since 1990. The study found that four times as many jobs were advertised for specialists in 1990 as for generalists. Only 5 years later, however, the ratio of advertised positions for specialists compared with those for generalists dropped to 1 to 8. Both the Congress and VA have acted to increase the number of physicians trained in primary care. After the Physician Payment Review Commission reported in 1992 that the share of residents in generalist fields was dropping while medical specialties were constituting a larger proportion of residents, the Congress made changes in Medicare payments for GME that discouraged excessive specialty residencies. Specifically, the Omnibus Budget Reconciliation Act (OBRA) of 1993 created separate hospital-specific payment rates for primary care and nonprimary care residents. The law permitted rates for primary care (and obstetrics and gynecology) residents to be adjusted on the basis of the consumer price index, while freezing rates for other residents in fiscal years 1994 and 1995. Similarly, VA's Office of Academic Affairs started a program to increase training in primary care. As a result, the number of VA residency positions in primary care increased from 2,920 in 1992 to 3,306 in 1995. In addition, the Residency Realignment Review Committee, in recommending a 250-position decrease in the number of VA-funded residency positions, indicated that the reductions should come from disciplines other than primary care.
The Committee also recommended that 750 residency positions be shifted from specialties to primary care. It estimated that implementing the recommendations would increase the percentage of VA residency positions in primary care from 34 percent in 1987 to 49 percent upon completion of the phased implementation in 2001. Among the approaches VA is using to increase training in primary care is the Primary Care Education (PRIME) program. Created in 1993 by the Office of Academic Affairs, PRIME funds trainee awards to VA facilities providing primary and managed care to veterans using a multidisciplinary team approach. In academic year 1996-97, PRIME included 445 medical resident positions at 80 sites and almost 1,000 associated health trainee positions. Most of the residency positions were in internal medicine. VISN strategic plans have generally contained no substantive discussion of plans to increase training of primary care physicians. As the focus of health care shifts from hospitals to physicians’ offices and outpatient clinics, some of the training provided to medical residents needs to be shifted to such settings. Before the enactment of the Balanced Budget Act of 1997, however, Medicare payment policies discouraged teaching hospitals from supporting such shifts. In contrast, VA has long provided medical education through its outpatient clinics. The importance of training medical residents in an outpatient setting is increasing for several reasons. First, diagnosis and treatment—critical components of medical education—are increasingly provided in outpatient settings. As a result, patients now admitted to hospitals tend to have more complex and acute needs than in the past, and more patients are admitted to hospitals just for specialized procedures. Second, because lengths of stay are shorter, residents have less time to think through a clinical plan and establish rapport with their patients. Although inpatient training remains a critical part of medical education, the Physician Payment Review Commission has expressed concern that residents have too few opportunities to learn about outpatient care, such as how to (1) provide a continuum of care that includes health promotion and preventive medicine, (2) manage chronic disease, (3) decide when hospitalization is necessary, (4) care for patients after discharge, and (5) develop personal relationships with patients and their families. The Commission noted that the technical skill, judgment, and processes of medical decision-making required to provide these services are important to physicians both in primary care and specialty care practices. The Commission also noted that the financing of GME primarily through inpatient sites has obstructed changing training sites. Considerably less financing has been available for training in outpatient sites, and compensation for outpatient faculty is recognized only if the hospital incurs all or substantially all of the costs of training. This discouraged expansion of training to group practices, nursing homes, and other nontraditional sites. Furthermore, even residency programs that sought to expand outpatient training programs in hospital-owned sites faced financial barriers because direct costs were based on 1984 costs rather than current costs. Finally, Medicare would not pay for indirect costs in nonhospital sites. 
The Balanced Budget Act of 1997 authorized the Secretary of Health and Human Services to establish rules for payment to qualified nonhospital providers for their direct costs of medical education. Nonhospital providers include federally qualified health centers, rural health clinics, and other providers the Secretary determines appropriate. Non-VA teaching hospitals, which typically have higher costs than other community hospitals, increasingly offer deep discounts to managed care plans. Before enactment of the Balanced Budget Act of 1997, however, teaching hospitals had no assurance that they would receive Medicare GME payments for care provided to managed care enrollees. This presents no problem for VA teaching hospitals, however, because VA receives a direct appropriation to cover the costs of its medical education program and has no contracts with managed care plans. The trend toward managed care could effect significant changes in non-VA teaching hospitals' ability to fund their medical education missions. First, managed care organizations do not usually want to pay the higher costs associated with teaching hospitals. They typically negotiate deep discounts from teaching hospitals because the market has far more capacity than needed, and nonteaching hospitals can provide services at lower costs because they lack teaching and research missions. Second, as Medicare recipients increasingly enroll in HMOs, teaching hospitals may lose the direct Medicare GME payments. Although Medicare factors such payments into the capitation rates it pays HMOs, the HMOs have no obligation to pass those payments on to the teaching hospitals or, for that matter, to contract with the higher cost teaching hospitals. The Balanced Budget Act of 1997, however, requires HCFA to provide additional payments to hospitals for the direct costs of GME related to Medicare risk-contract managed care enrollees. The provision applies to services provided after December 31, 1997. Unlike private-sector hospitals, VA hospitals have, until recently, been unable to sell services or negotiate prices with HMOs and other managed care plans. Historically, VA facilities have been permitted to sell hospital and other services in only a few situations. Other than sharing agreements with DOD and other federal hospitals, VA has been limited to the sale of specialized medical resources to health care facilities, such as hospitals or clinics, medical schools, and certain research centers. The Veterans' Health Care Eligibility Reform Act of 1996, however, expanded the types of providers with whom, and the services for which, VA may contract. VA may sell patient care services to both public and private entities, including managed care plans. In addition, VA may now negotiate prices for services sold to HMOs and other managed care plans. These provisions apply mainly to sales of services to be provided to nonveterans because services provided to veterans with private health insurers are still governed by separate medical care cost-recovery provisions of the law. Medical education has played a vital role in improving the quality of care in VA hospitals for over 50 years. Similarly, VA has played an important part in training a large proportion of the nation's physicians. With a growing number of physicians, however, and a steadily declining veteran population, the Congress and the administration face difficult decisions about the future of affiliation agreements.
For example, should VA hospitals receive the same kinds of incentives to reduce the number of residency positions that the Congress provided non-VA hospitals through the Balanced Budget Act of 1997? Actions taken through the Balanced Budget Act of 1997 to reduce residency positions in teaching hospitals have significant implications for VA and its medical education mission. To the extent that teaching hospitals respond to incentives to significantly reduce their residency positions, VA and rural hospitals should be better able to compete for graduates of U.S. schools. One way to lessen the effect of reducing residency positions on U.S. medical schools would be for teaching hospitals to target the reductions toward foreign medical school graduates. With fewer residency positions in non-VA teaching hospitals, VA might decide to use more of its available residency positions for graduates of U.S. medical schools. Although VA's Residency Realignment Review Committee recommended reducing the number of residency positions in VA hospitals, the planned reduction is much smaller than that sought from non-VA teaching hospitals. While non-VA hospitals are being encouraged to reduce residency positions by 20 to 25 percent by the year 2005, VA is planning a reduction of less than 3 percent in its residency positions. Changes in the veteran population also affect VA's ability to support its medical education mission. Because the veteran population is both declining and aging, VA may no longer provide enough of a variety of patients to support its medical education mission. This same problem prompted Australia to open its veterans hospitals to nonveterans to broaden the patient mix and, ultimately, to close hospitals or transfer them to the states or the private sector. Of particular concern is the ability of VA hospitals to support surgical residencies. As previously discussed, surgical workloads have declined more than 50 percent. VA hospitals with inpatient surgery programs had an average of less than 25 beds occupied on any given day; many had fewer than 10 beds occupied. An important challenge facing VA and its affiliated medical schools is determining when to end a residency program. VA's Residency Realignment Review Committee began this process by recommending that 750 residency positions in specialties be converted to primary care residencies. Another important challenge facing VA is maintaining its independence from the affiliated medical schools in making decisions about the future of VA hospitals and their residency programs that are best for all stakeholders. Maintaining this independence is difficult because many medical school faculty and managers play decision-making roles at VA medical centers. Medical schools faced with decreasing residency positions in non-VA teaching hospitals could seek to increase such positions in VA hospitals rather than reduce the size of their teaching programs. VA Chiefs of Staff with dual appointments could find themselves in the difficult position of trying to support two opposite goals: the medical schools' goal to increase residency positions in VA to compensate for decreased positions in other hospitals and VA's own goal to reduce residency positions. The potential for conflict increases when decisions involve potential hospital closings. Because VA hospitals serve as major sources of support for residency positions for medical schools, the schools clearly have an interest in VA hospitals staying open.
Although those interests must be considered, achieving the proper balance between VA's primary mission—serving the health care needs of veterans—and one of three other missions—support for medical education—will be difficult. VA must take care to prevent medical schools from overly influencing the future direction of its health care system. Historically, both VA and non-VA teaching hospitals relied mainly on federal funds to support their medical research programs—VA on a separate research appropriation and non-VA hospitals on grants from the National Institutes of Health (NIH). As competition for these limited funds increases, however, teaching hospitals are diversifying their funding sources. Both VA and non-VA teaching hospitals are increasing efforts to obtain research funding from pharmaceutical and biomedical companies. Non-VA hospitals are also increasing the amount of research they conduct in areas of interest to managed care plans to attract contracts from those plans. VA already conducts such research but obtains funding from foundations and other federal agencies rather than from managed care plans. The development of alternative funding streams for medical research raises several issues and challenges. For example, if academic medical centers reduce the amount of basic research they conduct to obtain additional funding from managed care plans and pharmaceutical companies, should VA do the same or fill the void by increasing its support for basic research? In addition, policy decisions will have to be made about (1) the extent to which the government shares in any profits resulting from collaborative research and (2) what agreement should be reached about delaying distribution of research findings. As VA develops multiple funding sources for its research programs, it will need strong internal control systems to prevent program abuse. Historically, the federal government, through NIH, has supplied the most direct funding for both basic and applied research. NIH, the clearinghouse for federal medical research funding, in addition to conducting its own research, provides about 85 percent of its funds to teaching hospitals through research grants. In fiscal year 1996, NIH awarded about $8.9 billion in research grants to both VA and non-VA teaching hospitals. NIH research grants convey prestige because they are more competitive and the research proposals are reviewed by peers. NIH grants fund basic as well as applied research and place few restrictions on distributing research findings. Non-VA teaching hospitals receive research funding from NIH; however, they have several other research funding sources. These include industry- and foundation-sponsored research grants, internal cross-subsidies (such as use of surplus patient treatment income, tuition, and endowments), and third-party insurance payments to reimburse the cost of health care provided to patients participating in research protocols. Medical research—both basic and applied—is one of VA's four core missions. The current research program was established shortly after the end of World War II and has been included in VA's authorizing legislation since the late 1950s. Although VA hospitals, like other teaching hospitals, obtain NIH research grants, VA research is funded mainly by VA appropriations.
Of the approximately $923 million in budgetary resources VA had available for medical and prosthetic research in fiscal year 1996, $591.4 million came from VA appropriations ($256.7 million from the medical and prosthetic research appropriation and $334.7 million in medical care support from the medical care appropriation). The remainder of VA research funds in fiscal year 1996 came from federal grants (mainly from NIH) totaling $209.5 million, other grants (mainly from voluntary agencies) totaling $105.9 million, and DOD reimbursements of $16 million. Teaching hospitals are finding it increasingly difficult to maintain their historic funding sources for several reasons. First, they can no longer count on increases in federal research funds. Such funding grew at the rate of 8 to 10 percent annually during the late 1970s and early 1980s, while inflation in biomedical costs ranged between 4 and 5 percent. In fiscal year 1996, however, NIH funding grew by only 5.7 percent, and the Congress considered cutting NIH’s budget. In addition, some concern exists over future federal funding amid debate about the proper role of the federal government in funding medical research. Second, managed care has made it more difficult for teaching hospitals to use profits from patient care to pay for medical research. According to the Association of American Medical Colleges, teaching hospitals are losing about $1 billion a year due to managed care’s shift to use of lower cost community hospitals. To help prevent such losses, many teaching hospitals have cut the prices they charge HMOs and preferred provider organizations (PPO) and adopted intensive cost-reduction efforts. Obviously, lowered prices mean fewer resources for subsidizing research projects. Third, teaching hospitals face increasing competition from contract research organizations. Industry-sponsored medical research, which was mainly conducted by academic medical centers before 1980, is increasingly being conducted by for-profit contract research firms. The use of academic investigators to conduct industry-sponsored research trials dropped from 82 percent in 1989 to 68 percent in 1993. Fourth, the managed care industry has increasingly established its own research centers, drawing both public and private research dollars away from teaching hospitals. HMOs, which provide comprehensive services to a defined population in a real-life environment, can test the results of trials that were conducted in more controlled environments. Teaching hospitals have increasingly turned to pharmaceutical and biomedical companies for funds for two reasons. First, the availability of federal research funds is becoming more uncertain. Second, in 1988 pharmaceutical companies spent an amount on research and development that exceeded that of the entire NIH budget. Private industry supports a growing portion of teaching hospitals’ research. Private industry (39 percent) and NIH (38 percent) supported roughly the same percentage of medical research in 1984, according to the Association of American Medical Colleges. Ten years later, however, private industry supported over half ($17 billion) of the $33 billion spent on research, while NIH contributed 31 percent ($10.2 billion). Some teaching hospitals are actively seeking to expand their use of private industry funds. For example, George Washington University now gets more than half of its funds for medical research from private industry. 
Similarly, Columbia University actively markets its research capabilities to corporations, and the University of California, San Francisco, created a special center to attract industry-supported research. One concern raised about involving pharmaceutical and biomedical companies in funding research at teaching hospitals is the potential delay in sharing research findings. Companies sometimes ask researchers to agree not to disclose the results of their research for as long as 10 years. This allows them to develop and market their products for longer periods before their patents expire. A second concern about relying on private-sector funding is pharmaceutical and biomedical companies’ focus on clinical trials and applied research that can quickly lead to marketable new drugs and devices. This focus could, many researchers fear, reduce the amount of basic or fundamental research. Like other teaching hospitals, VA is concerned about the future availability of federal funding for its research activities and is increasingly seeking alternative sources for funding research. In addition to obtaining more NIH funds, it is establishing nonprofit corporations to raise funds for research. Like NIH funding, the growth in VA’s medical and prosthetic research appropriation funding has slowed in the 1990s, growing at a rate of 2 to 5 percent per year, meaning little growth in funding after inflation. VA reports that research funding declined as a percentage of the overall medical care appropriation from 2.0 percent in 1980 to 1.2 percent in 1996. These figures do not, however, include funds VA obtained from other sources. Between 1990 and 1996, nonfederal research funding increased from about $176 million to over $315 million. (See fig. 13.1.) In May 1988, the Congress authorized VA to establish nonprofit research corporations for a limited time period to provide an additional funding mechanism for VA-approved research (P.L. 100-322). Public Law 104-262 reauthorized the corporations through 2000. A March 1997 VA Office of Inspector General report identified 83 nonprofit research corporations. VA reported to the Congress that contributions to the nonprofit research corporations were $38 million in 1994 and $63 million in 1996. An advantage provided by its nonprofit corporations is that they generally have low indirect costs, which leaves more resources for research. According to VA officials, the administrative overhead rates for VA’s nonprofit corporations averaged 12.5 percent in 1995, compared with university and private foundation rates averaging 50 percent. (If overhead is charged as a rate on direct costs, roughly 89 cents of each dollar raised would reach research at a 12.5-percent rate, compared with about 67 cents at a 50-percent rate.) Therefore, a greater percentage of VA research funds may be available to actually support research and related activities. Several of the Veterans Integrated Service Network (VISN) strategic plans discuss efforts to establish additional nonprofit research corporations:
VISN 8 (Bay Pines) has set a goal of increasing the total non-VA research funds by 10 percent by establishing nonprofit corporations. It currently has such corporations at its Bay Pines, Miami, and San Juan medical centers.
VISN 17 (Dallas) established its third nonprofit research corporation in March 1996 to increase non-VA funding.
VISN 4 (Pittsburgh) is exploring the possibility of establishing a nonprofit research corporation.
VISN 6 (Durham) expects its main research effort to be overseeing and coordinating the operations of nonprofit research corporations.
Both university-based academic medical centers and VA are conducting collaborative research efforts with others.
Academic medical centers are focusing on collaborative efforts with managed care plans, while VA is focusing on collaborative efforts with other government agencies and manufacturers of high-cost/high-tech equipment. Academic medical centers are beginning to align their research agenda with that of the managed care industry. In the past, academic medical centers favored basic research and research on relatively rare diseases and therapies. HMOs, on the other hand, were more interested in applied research that identified the most cost-effective way to treat common, expensive, or high-risk conditions. Because HMOs and other managed care plans bear financial risk for their patients’ care, they want to know which medical treatments are most cost-effective. To gain support for their research programs from managed care plans, academic medical centers are emphasizing cost-effectiveness and outcomes research and strengthening their ties with schools of public health. Consequently, managed care plans’ health research centers are conducting collaborative projects with teaching hospitals. For example, Group Health Cooperative of Puget Sound collaborated with the University of Washington. Similarly, Prudential’s Center for Health Care Research collaborated with the Harvard Medical School. According to some researchers, the full potential for collaborative efforts has not been realized because of mutual distrust. In their view, academic medical centers often see managed care plans as overly concerned with cost cutting, while managed care plans complain of teaching hospitals’ academic arrogance. Researchers note that academic medical centers could benefit from access to managed care plans’ enrolled populations and their information systems that identify and track patients with specific conditions for conducting research on outcomes of specific treatments. Similarly, academic medical centers could, they believe, offer managed care plans an unbiased research environment, access to trained investigators, and well-equipped research infrastructures. Finally, an affiliation with an academic medical center could give managed care plans a marketing advantage by making it easier to attract enrollees. One of the key objectives in VA’s Prescription for Change is expanding collaborative investigative efforts with both government and nongovernment entities. An official from VA’s Office of Research told us he did not know of any such collaborative research efforts with managed care plans but that such efforts might be pursued by individual facilities or VISNs. As a nationwide system, VA has the capability to design and implement large-scale cooperative trials. For example, in the 1950s, VA developed cooperative studies to investigate the effectiveness of therapies for treating tuberculosis. Similarly, it completed cooperative studies documenting the benefits of hypertension treatment and coronary artery bypass surgery. The Cooperative Studies program now has designated coordinating centers (comprising epidemiologists, biostatisticians, and data analysts) whose sole mission is to help investigators design and implement multicenter studies of clinical and health services interventions. Some examples of this research are studies of angina, symptomatic human immunodeficiency virus infection, and clinically localized prostate cancer. VA’s ability to do nationwide studies helps it develop collaborative efforts.
For example, VA established a Diabetes Research Initiative with the Juvenile Diabetes Foundation. For a 5-year period, VA and the Foundation will each contribute $7.5 million to fund VA diabetes research centers of excellence. In addition, VA signed a memorandum of understanding to plan future collaborative research efforts with the Agency for Health Care Policy and Research and the University Health Systems Consortium. Finally, VA’s Prescription for Change indicates that it plans to actively pursue collaborative research efforts with manufacturers of high-cost/high-technology equipment. None of the VISN plans identifies collaborative efforts with managed care plans. Three VISN strategic plans did, however, identify planned actions that might make VA research programs more attractive to managed care plans:
VISN 1 (Boston) has a Research Advisory Council responsible for building stronger linkages between research efforts and clinical practice. The Council is also responsible for identifying additional revenue streams to support research.
VISN 17 (Dallas) will emphasize research consistent with national trends toward primary care, systems analysis, outcomes research, and development of clinical guidelines. The network convened a Research and Development Subcommittee to, among other things, promote collaborative research.
VISN 18 (Phoenix) has a major collaborative research project to search for a breast cancer vaccine involving the Amarillo, Texas, VA medical center, the Pantex plant (Department of Energy), and Duke University.
VA faces many challenges and policy decisions as it seeks to develop alternative funding streams for medical research. For example, as a matter of policy, to what extent should the government share in the financial benefits resulting from new products or treatments developed through collaborative research efforts with drug and biomedical companies? Similarly, VA will have to make policy decisions about how research results are distributed and when they are publicized. Finally, VA will need to decide to what extent it should follow the lead of academic medical centers and seek collaborative research efforts with HMOs and other managed care plans. VA has successfully developed alternative revenue streams to supplement its research appropriation. The proliferation of VA nonprofit research corporations and other sources of nonappropriated research funds, however, creates new challenges. For example, VA will need accounting systems and internal controls to track the many revenue streams supporting individual projects. Without such systems and controls, researchers might receive funding exceeding the project’s cost. For instance, accounting systems need to be able to detect whether grants fund more than 100 percent of a researcher’s time. Similarly, the systems and controls need to be able to ensure that teaching physicians do not inappropriately collect research funds from both VA and the medical school. In addition to the direct appropriation for medical and prosthetic research, VA’s research efforts also received funds from the medical care appropriation. VA reported receiving $335 million from this additional appropriation in fiscal year 1996. Under the Veterans Equitable Resource Allocation (VERA) model, VA allocated $399 million among VISNs for medical research support on the basis of the proportional amount of funded research reported by each VISN in fiscal year 1995.
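The among-network split just described is, in mechanical terms, a simple proportional allocation. The sketch below illustrates that arithmetic with hypothetical figures; the VISN labels and reported research amounts are invented for illustration, and only the $399 million pool is taken from the text.

```python
# A minimal sketch of a proportional split like the one described above:
# a fixed pool of research support dollars is divided among networks in
# proportion to the funded research each reported in the base year.
# The VISN names and amounts below are hypothetical.

POOL_MILLIONS = 399.0  # research support pool (from the text)

reported_research = {  # hypothetical funded research reported in fiscal year 1995
    "VISN A": 120.0,   # millions of dollars
    "VISN B": 60.0,
    "VISN C": 20.0,
}

total_reported = sum(reported_research.values())
for visn, amount in reported_research.items():
    share = amount / total_reported
    allocation = POOL_MILLIONS * share
    print(f"{visn}: {share:.1%} of reported research -> ${allocation:.1f} million")
```

Under such a formula, a network’s research support rises or falls one-for-one with its reported share of funded research, which is one reason the accuracy of each network’s reporting matters.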
It is not clear, however, how the $399 million will be allocated within the VISNs or the extent to which the higher patient care costs associated with VA’s research mission will affect its ability to sell its excess capacity to managed care plans or others without offering discounts like those offered by some academic medical centers. Another challenge facing VA and its hospitals is balancing the longer lengths of stay frequently associated with medical research with performance measures that call for significantly reducing bed-days of care. For example, should performance measures, like the VERA allocation model, be adjusted to allow for the more frequent admissions and longer lengths of stay of patients in research protocols? Finally, the changing focus of academic medical centers’ research efforts has important policy implications for VA research. If academic medical centers increasingly shift from supporting basic to applied research to attract additional research funds, should VA do the same? Or should VA fill the void created and increase its support for basic research? One action community hospitals reportedly take to improve profitability is reducing the amount of uncompensated care (defined as the sum of charity care and bad debt) they provide. Despite growing numbers of uninsured people, the amount of uncompensated care provided by community hospitals has reportedly declined in the 1990s. Many nonprofit hospitals are acting more like their for-profit competitors by seeking to reduce the amount of uncompensated and charity care they provide and focusing on attracting paying customers. Others are converting to for-profit status or selling out to for-profit chains. As a result, some believe that the burden of providing uncompensated care has increasingly shifted to public, and particularly public teaching, hospitals. As increasing numbers of public hospitals convert to nonprofit or for-profit ownership, will the health care safety net shrink even more? On average, VA serves a larger proportion of uninsured people than even public teaching hospitals. Many of VA’s restructuring efforts, however, create incentives for Veterans Integrated Service Networks (VISNs) and individual VA facilities to model themselves on for-profit health plans and hospitals and to focus less on VA’s traditional safety net mission. In addition, VA, like many nonprofit hospitals, has established strategic goals that focus on increasing market share rather than meeting the health care needs of uninsured veterans. The apparent changes in focus of both community and VA hospitals raise significant issues about the future direction of the VA health care system. For example, to what extent should VA use its excess capacity to target the market segment—low-income and uninsured people—that many for-profit and nonprofit hospitals are apparently abandoning? Who should pay for such services? Similarly, to what extent should VA’s strategic goals focus specifically on its safety net mission and improving the health status of uninsured veterans? The burden of serving patients with no health insurance falls disproportionately on VA and public teaching hospitals. About 21 percent of veterans using the VA health care system have no public or private health insurance, compared with about 5 percent of patients using nonteaching hospitals. Similarly, the percentage of patients with no insurance served by public teaching hospitals is three to four times that served by private academic medical centers. (See fig. 14.1.)
Public and particularly public teaching hospitals provide disproportionate and increasing amounts of uncompensated care, according to many studies. For example, urban public hospitals are reported to provide one-third of the nation’s uncompensated care, even though they have only about one-sixth of the hospital market. Between 1990 and 1994, their burden of uncompensated care increased. First, their percentage of total costs devoted to uncompensated care increased from 11.8 to 12.8 percent. Second, public hospitals accounted for 36.8 percent of total hospital uncompensated care in 1994, up from 33.4 percent in 1990. Among public hospitals, major teaching hospitals’ share of uncompensated care is reportedly three times larger than their share of the hospital market. In 1994, almost 20 percent of their expenses were reportedly devoted to providing uncompensated care. Although public hospitals provide a disproportionate share of uncompensated care, private-sector hospitals still provide most uncompensated care. Private hospitals, however, vary widely in the amount of uncompensated care they reportedly provide. For example, about 240 private hospitals reported uncompensated care burdens averaging 15 percent of total operating expenses in 1994. The remaining approximately 3,600 private hospitals reported uncompensated care burdens averaging 8 percent or less of operating expenses. These findings are consistent with our 1990 analysis of the role of nonprofit hospitals in providing uncompensated care. Government-owned hospitals provided a disproportionate amount of the uncompensated care in each of the five states in our review. Both nonprofit and for-profit hospitals provided a smaller share of the state’s uncompensated care than they provided of general hospital services. Moreover, the burden of uncompensated care was not distributed equally among the nonprofit hospitals in the five states. Large urban teaching hospitals had a greater share of the uncompensated care expense than did other nonprofit hospitals. Generally, the nonprofit hospitals with the lowest rates of uncompensated care also served fewer Medicaid patients and had higher profit margins than did the large urban teaching hospitals providing most of the uncompensated care. In other words, the nonprofit hospitals with the most resources for financing uncompensated care were often those providing the least amount of such care. About 15 percent of the nonprofit hospitals we studied reported providing uncompensated care valued at less than the benefits of their federal and state income tax exemption. Excluding bad debt and examining only the provision of charity care, however, revealed that 57 percent of the nonprofit hospitals in our study provided charity care valued at less than the benefits of their tax exemption. Another study reported an apparent correlation between market penetration of managed care plans and decreased levels of uncompensated care. Hospitals in metropolitan statistical areas where managed care plans had captured large shares of the health care market tended to provide less uncompensated care. A hospital’s goals and policies influence the amount of uncompensated care it provides. In the five communities we visited during our 1990 study, the strategic goals of some nonprofit hospitals excluded the health needs of the poor or underserved in their communities. Instead, the goals most often focused on increasing the hospitals’ share of patients in their market area, resembling the goals of investor-owned institutions.
Other goals concerned maintaining the hospitals’ financial viability, improving their competitive positions, expanding services and facilities, or developing employee skills and personnel practices. Furthermore, physician staffing and charity admission policies discouraged admission of those unable to pay, except in emergency cases. In communities without a government-owned or major teaching hospital, uncompensated care costs present problems in providing services to the indigent and could eventually cause service gaps for entire communities. In two of the communities we visited for our 1990 study, the uncompensated care costs were relatively high, and the nonprofit hospitals providing most of this care were seeking ways to reduce these costs. For example, hospitals in San Diego were trying to restrict their indigent care expenses. One nonprofit hospital that traditionally treated indigent patients was investing in a new facility in a suburb to increase its market share of patients able to pay. Another nonprofit hospital planned to downgrade its emergency room, closing it to ambulance traffic to reduce its indigent care workload. Of the 5,768 hospitals operating in 1990, about 9 percent (532) had changed ownership during the preceding decade. Ownership changes continued in the 1990s; 3 percent of the hospitals operating in 1993 had changed ownership since 1990. Over half of the ownership changes between 1980 and 1990 involved converting public hospitals to nonprofit or for-profit status (see fig. 14.2). Because public hospitals serve a higher proportion of uninsured patients than either private nonprofit or for-profit hospitals, these conversions raise concerns about the future availability of charity care in the affected communities. Public hospitals have been converting to nonprofit or for-profit ownership. Between 1980 and 1990, more than 15 percent of public hospitals changed control, most often (75 percent) to nonprofit status. Another 3 percent of public hospitals changed control between 1990 and 1993, with 88 percent of those conversions to nonprofit status. Conversions of nonprofit and for-profit hospitals to public hospitals partially offset conversions of public hospitals to nonprofit status. The unwillingness of local governments and communities to provide continued tax support for public hospitals reportedly played a major role in the conversions. Conversions were also seen as a way to free the hospitals from government procurement and hiring rules. In addition, nonprofit hospitals have had many ownership changes, but the overall number of nonprofit hospitals increased between 1990 and 1993 mainly because of the conversion of public hospitals to nonprofit hospitals. Between 1980 and 1990, 175 nonprofit hospitals converted to either for-profit (110) or public (65) ownership. The rate of conversion of nonprofit hospitals increased between 1990 and 1993. Hospitals are converting to for-profit status because of concerns about their future. Policy analysts have identified several reasons for hospital ownership conversions:
Conversions can provide nonprofit hospitals access to the capital they need to restructure operations.
Nonprofit hospitals may seek to improve efficiency through merger or acquisition.
Weaker hospitals, faced with closure, may see sale of their assets to or a joint venture with a for-profit firm as the best option for survival.
Nonprofit hospitals may convert to for-profit status to avoid regulatory constraints placed on nonprofits limiting their flexibility in compensating executives, staff, and partners.
Personal financial gain may motivate the decisions of the insiders of some nonprofit hospitals to sell or convert to for-profit status.
In addition, some for-profit hospitals are also changing ownership. Conversions of for-profit hospitals to nonprofit or public ownership accounted for only 11 percent of conversions between 1980 and 1990 but 31 percent of conversions between 1990 and 1993. Some believe such conversions may reflect increased concern about the long-term commitment of for-profit owners to the health care needs of the community. VA’s restructuring efforts create many of the same types of incentives for VISNs and individual hospitals to reduce services to veterans with no health insurance that have resulted in less charity and uncompensated care in nonprofit and for-profit hospitals. Like many nonprofit hospitals, VA has also established strategic goals focused more on increasing market share than on fulfilling its safety net mission. This year, VA sought and obtained approval to retain nonappropriated revenues generated through recoveries from private health insurance and collection of veteran copayments. VA essentially sought to divide the veteran population into two distinct groups: nonrevenue-generating veterans and revenue-generating veterans. This latter group has several potential target populations for VA: lower income veterans with private health insurance; higher income veterans subject to copayments but with no health insurance; and higher income, privately insured veterans subject to copayments. The last group has the least need for VA services but represents the greatest revenue-generating potential because VA can generate revenues from both insurance and copayments. Allowing VA to retain recoveries from private health insurance and copayments creates an incentive for VA to market its services to attract revenue-generating rather than nonrevenue-generating veterans. This incentive could affect several aspects of VA services, including where VA decides to locate new community-based outpatient clinics (CBOCs). For example, VA recently proposed locating a CBOC in a homeless shelter, which it expects could attract 2,040 new users who need VA’s safety net services and are therefore not likely to generate revenue. In contrast, VA has also proposed opening a clinic in one of the country’s more affluent counties. Although the clinic is intended to improve access for current users, VA also expects it to attract patients who could ultimately generate revenue. Similarly, VA’s new resource allocation method, the Veterans Equitable Resource Allocation System (VERA), could lead VISNs and individual facilities to act more like for-profit HMOs. VA developed VERA in response to Public Law 104-204, which directed VA to prepare a resource allocation system that would ensure similar access to VA care for veterans who have similar economic status and eligibility priority. The system, which VA began implementing in April 1997, is based on calculations of the cost per veteran user in each VISN (a simplified sketch of this capitation logic follows this discussion). VISNs that have the highest costs per veteran user will lose funds; VISNs with the lowest costs per veteran user will get additional funds. VERA creates both positive and negative incentives. On the positive side, it moves toward creating the kinds of incentives needed to increase efficiency that HMOs have long had.
On the negative side, it creates the kinds of incentives HMOs have to (1) focus marketing efforts on attracting the types of users who use fewer health care services, such as younger veterans, and, conversely, (2) make continued use of VA services unattractive or unavailable to veterans with extensive health care needs. HMOs are often criticized for their efforts to attract and retain users with minimal health care needs. These negative incentives could be heightened in the VA system because, unlike HMOs, the VISNs have no contractual obligation to provide comprehensive care to any veteran, making it easier for VA facilities to artificially increase efficiency by providing less intensive services or attracting healthier users. On the other hand, also unlike most for-profit HMOs, VA physicians have no financial stake in the care they provide. Because VA physicians receive a salary, they would not personally gain by reducing the amount of services they provide. Nevertheless, VERA and the retention of third-party recoveries could provide VISNs and individual facilities financial incentives to focus marketing efforts on veterans most likely to use fewer services and those not likely to generate additional payments. VA’s strategic goals for its health care system, like those of many nonprofit and for-profit hospitals, focus on increasing market share rather than on improving the health status of service-connected or uninsured veterans. Specifically, under the broad goal to “improve the overall health care of veterans,” VA’s plan sets an objective to increase its number of users by 20 percent by 2003. It also sets a performance goal of increasing the number of Category A veterans (primarily veterans with service-connected disabilities or low incomes) by 500,000 and Category C veterans (primarily veterans with no service-connected disabilities and higher incomes) by 125,000 by 2003. The stated purpose of the increase in users is to “preserve the viability of the health care system” rather than to meet the health care needs of service-connected or uninsured veterans. Beyond setting a goal to serve more Category A veterans, VA does not differentiate between serving a service-connected veteran with no health insurance and a low-income veteran with health insurance. Although the Congress established specific priorities for enrolling veterans in the VA health care system, VA’s strategic goals do not reflect those priorities. VA also linked its strategic goal to enactment of the proposed legislation to allow it to retain recoveries from private health insurance and Medicare. It noted that it could treat a significantly larger number of veterans—up to 20 percent more—only if its medical care cost recovery and Medicare reimbursement proposals were enacted. Our review of VA’s 1998 budget submission, however, found that to meet its revenue projections, VA would probably have to focus its marketing efforts on attracting veterans with fee-for-service private health insurance. In addition, VA proposes to collect about $557 million from Medicare in 2002 for services provided to about 106,000 additional higher income veterans covered by Medicare. As stated, the Congress authorized VA to retain recoveries and collections from private health insurance and veterans’ copayments but did not authorize VA to obtain recoveries from Medicare. 
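As noted in the discussion of VERA above, the allocation turns on cost per veteran user. The following minimal sketch illustrates that capitation logic under simplifying assumptions; the network names, user counts, and budgets are hypothetical, and the sketch compresses VERA’s actual methodology, which VA describes as focusing on dollars per mandatory user, to its basic arithmetic.

```python
# Simplified illustration of capitation-style reallocation like VERA's basic
# logic: fund each network at the systemwide average cost per user rather
# than at its historical budget. Networks with above-average cost per user
# lose funds; those below average gain. All names and figures are hypothetical.

networks = {  # name: (veteran users, historical budget in millions of dollars)
    "VISN X": (100_000, 450.0),
    "VISN Y": (150_000, 525.0),
    "VISN Z": (120_000, 420.0),
}

total_users = sum(users for users, _ in networks.values())
total_budget = sum(budget for _, budget in networks.values())
average_cost_per_user = total_budget / total_users  # millions per user

for name, (users, budget) in networks.items():
    capitated = users * average_cost_per_user  # budget if funded at the average
    change = capitated - budget
    print(f"{name}: cost per user ${budget * 1e6 / users:,.0f}; "
          f"capitated budget ${capitated:,.0f} million ({change:+,.1f} million)")
```

The sketch also makes the criticized incentive visible: a network can improve its position under such a formula either by serving users more efficiently or simply by attracting healthier, lower-cost users.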
The Administration and the Congress face difficult decisions concerning the future direction of VA’s safety net mission and the role VA’s hospitals should play in meeting the hospital care needs of the uninsured. For example, one important decision facing VA is determining the extent to which it should use its expanded contracting authority to purchase hospital care for veterans, particularly those with service-connected disabilities or no health insurance who cannot get care from a VA hospital because of geographic inaccessibility. Many veterans cannot get needed health care because of the distance from their homes to a VA facility. Our analysis of 1992 National Survey of Veterans data estimated that fewer than half of the 159,000 veterans who did not obtain needed hospital care lived within 25 miles of a VA hospital. By comparison, we estimated that over 90 percent lived within 25 miles of a private-sector hospital. One option for improving the health status of such veterans would be for VA to use its expanded contracting authority to purchase hospital and other services for uninsured veterans who live far from a VA hospital. On average, the VA health care system provides a higher proportion of care to patients with no health insurance than any category of community hospital, including public teaching hospitals. Still, many veterans without health insurance have reported that they have not used VA health care services and that they have unmet health care needs. In 1990, 9 out of 10 veterans reported having public or private health insurance. That meant, however, that about 2.6 million veterans had neither public nor private health insurance. Without a demonstrated ability to pay, individuals’ access to health care in the private sector is much more limited. Lacking insurance, people often postpone obtaining care until their conditions become more serious and require more costly medical services. In the past, most veterans who lacked insurance coverage could get needed hospital care through public programs and VA. Still, VA’s 1992 National Survey of Veterans estimated that about 159,000 veterans could not get needed hospital care in 1992, and about 228,000 could not get needed outpatient services. By far, the most common reason veterans cited for not obtaining needed care was that they could not afford to pay for it. Thus, if VA is to fulfill its safety net mission, it will have to ensure that VISNs and individual facilities do not react to incentives to generate revenue by reducing services to uninsured veterans and those with service-connected disabilities. Similarly, monitoring will be needed to ensure that facilities do not inappropriately bill insurers for services provided to service-connected veterans to generate additional revenues. Moreover, the incentive to target programs toward revenue-generating veterans is greatest if the facility providing the care retains the funds. Such an arrangement, however, would also provide the greatest incentive for operating an effective program. VA faces the challenge of identifying and applying the appropriate balance. In addition, ownership conversions of public and nonprofit hospitals could affect the ability of low-income or uninsured veterans to obtain services from such hospitals. With community hospitals’ support for the medically indigent apparently decreasing, should VA follow their lead? Or should VA try to fill the void left by those providers?
For example, veterans without health insurance often have families without health insurance. Should VA hospitals use their excess capacity to serve veterans’ uninsured dependents? If so, how should such care be financed? For example, should recoveries from private health insurance be earmarked for use in providing services to the families of uninsured veterans? Both VA and community hospitals are struggling to survive. Demand for hospital care, which increased for much of the century, has steadily declined since the 1980s in community hospitals and since the 1960s in VA hospitals. Although many factors have contributed to the declining demand, VA has been less affected by payment and other reforms than community hospitals have been. Therefore, further reductions in use of VA hospitals are likely as VA tries to shift more of its care to outpatient and other more cost-effective settings. In addition, VA, unlike community hospitals, has a declining target population. One of the most crucial decisions facing the Congress and the administration as they plan for the future of the veterans’ health care system is the extent of effort that should be spent to preserve VA’s direct delivery infrastructure and the process that should be followed to effect change. VA, amid a massive restructuring of its health care system, has made efficiency improvements. These actions have focused heavily on shifting patients from inpatient hospitals to outpatient and other more appropriate care settings—actions taken by community hospitals during the 1980s. The efforts’ success, however, has further reduced the workload of VA hospitals, increasing the cost of serving the remaining patients and heightening the need to address the future of the hospitals. Because fixed costs are dispersed over fewer patients, the declining use of VA hospitals increases the cost of providing hospital care to remaining patients. Community hospitals, also faced with declining workloads, have tried many approaches to reducing their costs, including increased use of part-time and intermittent employees and use of nurse extenders and other unlicensed assistive personnel. With the exception of efforts to integrate and consolidate patient care services and administrative functions of VA hospitals in close proximity, VA has not emphasized improving the efficiency of some hospital operations as much as community hospitals have. For example, VA could not pursue contracting for patient and nonpatient care services to the same extent as community hospitals. Not everyone accepts all of the changes taking place in community hospitals, however. For example, some view patient-centered care with skepticism because they are concerned about hospitals’ cutting costs by reducing nursing staff. Decisions will have to be made about which community hospital initiatives VA should pursue and to what extent. In fact, many of VA’s actions to improve the efficiency of its health care system, such as the Veterans Equitable Resource Allocation (VERA) system and preadmission screening, come from private-sector initiatives. These actions differ, however, from their private-sector counterparts because they lack the same financial incentives and risks. Nonetheless, opinions differ about how much risk individual providers should appropriately assume. A provider’s assuming too much risk or having too strong a financial incentive could adversely affect patient care.
Too little risk, in contrast, could limit the effectiveness of the initiative. VA thus faces difficult decisions about the extent to which it should use financial incentives and risks to change practice patterns. The reduced use of VA hospitals associated with efficiency improvements, coupled with the declining veteran population and continued enrollment growth in managed care plans, makes preserving VA hospitals exceedingly difficult. About 46 percent of the beds in VA hospitals have been closed, and over 80 percent of the remaining beds might become excess within the next 5 to 10 years if VA’s efficiency improvement efforts succeed (these two figures are combined in the arithmetic note at the end of this discussion). This gives VA two basic options: attract significant numbers of new users or close hospitals. VA’s current efforts to attract new users, however, are unlikely to generate significant demand for hospital care. Its efforts legitimately focus more on improving the accessibility of outpatient care for veterans who live far from a VA clinic than on generating demand for VA hospital care. If VA hospitals are to remain exclusively for veterans, VA will have to attract a much larger and ever-increasing proportion of the veteran population. Other countries, such as Australia, have opened their veterans hospitals to nonveterans to build workload. Allowing VA hospitals to treat more nonveterans could increase VA hospital use and broaden VA’s patient mix, strengthening VA’s medical education mission. Without better systems for determining the cost of care, however, such action could result in funds appropriated for veterans’ health care being used to pay for care for nonveterans. In addition, if VA opened its hospitals to nonveterans, it would be expanding the areas in which it directly competes with private-sector hospitals in nearby communities. Essentially, every nonveteran coming into a VA hospital would be one fewer patient for a private-sector hospital. Thus, expanding VA’s role in providing care to nonveterans could further jeopardize the fiscal viability of private-sector hospitals. If VA decides to compete directly with community hospitals for both veteran and nonveteran patients, then it will subsequently have to decide the extent to which it should adopt private-sector practices on advertising and adding amenities, areas on which VA, up to now, has not focused. Similarly, decisions would have to be made about whether to market services to managed care plans and, if so, how to price them to compete with community hospitals. Several factors, including its medical education and research missions, currently limit VA’s ability to compete with community hospitals on the basis of price. Closing some VA hospitals, on the other hand, could make more funds available for expanding the use of contract hospitals for providing services to veterans who have service-connected disabilities or lack public or private insurance and do not live near a VA hospital. Currently, the cost of maintaining its hospitals limits VA’s ability to meet the hospital care needs of some veterans with no public or private health insurance. This is because VA hospitals, which have more than enough capacity to serve all veterans seeking care regardless of their finances, absorb funds that might otherwise purchase care for veterans who live far from them. In other words, insured veterans living close to a VA hospital have better access to VA-supported care than do uninsured veterans who live far from a VA hospital.
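As a back-of-the-envelope illustration of how the two bed figures cited above compound (this simply multiplies the reported percentages and assumes they apply sequentially, which the text does not state):

```latex
(1 - 0.46) \times (1 - 0.80) = 0.54 \times 0.20 \approx 0.11
```

That is, on the order of 11 percent of VA’s pre-closure bed capacity might ultimately be needed if over 80 percent of the remaining beds become excess.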
Maintaining VA hospitals in markets with declining demand could result in funds being used to pay for hospital care provided to veterans in the discretionary care category, while the hospital care needs of uninsured veterans in other areas go unmet. Other countries have successfully closed veterans hospitals while improving veterans’ access to hospital care by contracting with community hospitals. The declining use of community hospitals and VA’s vast purchasing power could allow VA, like HMOs and other managed care plans, to negotiate significant discounts from community hospitals. This could improve the accessibility of VA-supported hospital care for uninsured veterans and veterans with service-connected disabilities. Contracting could help improve the financial status of some community hospitals by increasing patient workload. Because they serve a large proportion of uninsured and low-income patients, VA hospitals are more like public hospitals than either nonprofit or for-profit community hospitals. Many of the actions VA is taking, however, threaten to divert it from its traditional safety net mission to more directly competing with community hospitals for revenue-generating patients. Therefore, the Congress and VA have important decisions to make about the extent to which VA should focus its strategic goals on its safety net role. Finally, medical education has played a vital role in improving the quality of care in VA hospitals for over 50 years. Similarly, VA has played an important part in training a large proportion of the nation’s physicians. With a growing surplus of physicians, however, and a steadily declining veteran population, the Congress and VA face difficult decisions about the future of affiliation agreements. For example, should VA hospitals receive the same kinds of incentives to reduce the number of residency positions they support that the Congress provided non-VA hospitals through the Balanced Budget Act? Decisions about the future of VA hospitals, whether it be to close hospitals or open them to nonveterans, have significant implications for veterans, VA employees, affiliated medical schools, community hospitals, and taxpayers. It is therefore important that the Congress and the administration have sufficient information available to properly weigh the potential effects of VA health care system infrastructure changes on all affected stakeholders. In a letter dated March 5, 1998, VA’s Assistant Secretary for Policy and Planning said that this report extensively assesses the VA health care system from its inception to the present and accurately depicts the dynamic reengineering of the Veterans Health Administration (VHA) into the type of organization necessary to ensure that VA patients receive the care they need. The letter states that VA considers the report a valuable tool for helping the Department as it develops strategic initiatives to provide seamless health care services to veterans. VA stated that although it may agree with the issues and challenges identified in the report, it does not necessarily agree with the report’s conclusions on VA’s approach to the issues, the effect of continued reengineering on veterans, and the direction of VA’s health care system. Our report, VA stated, often focuses on issues from past reports that VA believes are either no longer relevant, have been resolved, or are already being addressed in conjunction with its reengineering program.
This, in VA’s opinion, leads the report to conclusions about the future that are not certain and that the Department is not prepared to acknowledge as the only or most probable ones. Our report is intended to identify and analyze the implications of different approaches to restructuring the veterans health care program, not to draw conclusions about the direction of the program. We believe our focus on issues raised in past reports is appropriate both for documenting the progress VA has made in its restructuring efforts and for recording the lessons learned along the way. VA also contends that many of the issues we cite as not being addressed in the first submission of Veterans Integrated Service Network (VISN) plans are addressed in VHA’s guidance for the plans submitted in October 1997 and that future versions of the guidance will continue to address these issues and others. We recognize that the VISN plans we reviewed were the networks’ first attempt at developing strategic/business plans. We reviewed the plans in detail, however, because efforts to obtain information from VA’s central office yielded few specifics on the extent to which VA was implementing initiatives like those of many community hospitals. Our review of the plans was linked to VA’s guidance on preparing the plans and to the Under Secretary’s Prescription for Change. Our review, however, was not intended to criticize VA’s efforts to develop strategic plans. Nor are we suggesting that VA should necessarily adopt all of the community hospital initiatives. VA stated that the current national health care climate, as our report acknowledges, remains unsettled, and VA’s vision of future health care delivery scenarios is based on trends that continue to emerge. This report, VA stated, clarifies that VA is at a watershed and that among the issues pertinent to the future of both VA and non-VA health care are (1) how VA can best provide services to an aging population with multiple health care needs and function as a safety net provider, (2) whether VA should continue to provide services directly, and (3) how new technologies will affect VA health care. In addition, VA indicated that it agrees that recent and proposed changes in VA and other programs make the future demand for both VA and non-VA hospital care uncertain. It noted that outpatient care, coupled with intensive care services, is a probable future model of U.S. health care. According to VA, it is therefore logical that in the future both VA and non-VA hospitals will change and some may close. VA agreed with our observation that decisions about whether to close or consolidate hospitals or services, change missions, sell excess capacity, or identify enhanced uses of excess space will require that the effect on all stakeholders (veterans, VA employees, community hospitals, medical schools, and individual communities) be fully considered without undue political influence. VA also stated that because VHA is in the midst of reengineering the health care system, significant uncertainty and ultimately no clear answers exist to the many questions this report raises. According to VA, improving its information and financial systems will be critical to answering these questions and will enable VA to demonstrate good value not only in cost, but also in quality, service, patients’ functional status, accessibility, and satisfaction.
VA stated that by (1) following through with the transformation already occurring in its infrastructure and processes; (2) continuing to improve its strategic planning and resource allocation; and (3) implementing and monitoring clinical guidelines, performance measures, and outcomes, it will be able to successfully address these questions and other stakeholders’ needs. Regarding its safety net mission, VA said that it disagrees with our contention that eligibility reform and changes in contracting and resource allocation will cause VA to focus less on serving service-connected veterans and on its safety net role regarding low-income or uninsured veterans and enhance marketing to high-income, insured veterans. VA stated that contrary to the impression left by this report, approximately 95 percent of VA patients are veterans who meet congressional mandates for care, including veterans with service-connected disabilities and those with no service-connected disabilities who have the lowest incomes and poorest insurance coverage. According to VA, VERA focuses not simply on dollars per user but on dollars per mandatory user. The report does not contend that VA will focus less on serving service-connected veterans or its safety net role regarding low-income or uninsured veterans. We recognize that VA’s strategic goals and performance measures call for increasing VA’s market share of mandatory veterans. Even within the mandatory care group, however, some veterans have a greater need for care, or a stronger claim to it, than others. First, veterans seeking care for service-connected disabilities should have the highest priority for care. Second, lower income veterans who lack other health care options, such as public or private health insurance, have a greater need for VA health care services than other veterans. We are concerned that VERA and the new medical care cost recovery provisions could, at least in the short term, provide financial incentives for individual facility managers to focus on serving revenue-generating veterans—those with higher incomes or private health insurance—rather than veterans with service-connected disabilities or no health insurance. We are also concerned about the extent to which VA can recover its costs for treating nonmandatory veterans to permit it to maintain or increase services to mandatory veterans. In addition, unless VA improves its medical facilities’ determination of which care category—mandatory or discretionary—a veteran is placed in, it will be difficult to accurately determine whom VA is serving. A discrepancy continues to exist between the care categories assigned by VA medical facilities (which, according to VA, reported that less than 5 percent of both inpatient and outpatient users were discretionary in fiscal year 1995) and our prior work, which found that about 15 percent of the veterans using VA facilities who have no service-connected disabilities have incomes high enough to place them in the lowest priority category under the new patient enrollment system. Additional VA comments and technical corrections have been incorporated in this report as appropriate. See appendix X for a copy of VA’s comments.
Pursuant to a congressional request, GAO reviewed major issues and challenges that Congress and the administration will face in the next few years concerning Department of Veterans Affairs (VA) hospitals, focusing on: (1) how VA and community hospitals' care evolved during the twentieth century, including changes in supply and demand; (2) factors contributing to the declining demand; (3) the extent of excess capacity; (4) actions taken to increase efficiency and compete for patients; and (5) changes in hospitals involved in training the nation's physicians and conducting medical research. GAO noted that: (1) both community and VA hospitals are struggling to survive; (2) demand for hospital care abruptly reversed and has steadily declined since the 1980s in community hospitals and since the 1960s in VA hospitals; (3) although many factors contributed to the reversal, medical advances and changes in health insurance mainly drove changes to community hospitals; (4) VA hospitals, however, were mainly affected by declining numbers of veterans and the improving health care options available to veterans through Medicare and other insurance; (5) GAO's work, and studies by others, suggest that if trends continue, 60 percent or more of community hospital beds and over 80 percent of VA hospital beds may not be needed in the next 15 years; (6) if such reductions occur, many hospitals will cease operation; (7) VA's current strategy for attracting new users may not generate the demand needed to preserve VA hospitals; (8) new users have indicated they are more likely to choose their local hospitals rather than a distant VA facility; (9) if VA decides to directly compete with community hospitals for market share, then it will have to subsequently decide whether to adopt private-sector marketing techniques; (10) both VA and community hospitals are fundamentally changing the ways they operate; (11) such changes include the hospitals' basic structure and management; reinvention of basic work, procurement, and supply processes; development of new marketing strategies; and methods and procedures of monitoring and delivering patient care; (12) teaching hospitals' use of medical residents as a lower cost labor source is often seen as contributing to the oversupply of physicians; (13) Congress, through the Balanced Budget Act of 1997, gave non-VA teaching hospitals financial incentives through the Medicare program to reduce residency positions; (14) both VA's strategic goals and the incentives it is creating through some of its restructuring efforts suggest that VA, like many community hospitals, is focusing its marketing efforts on attracting revenue-generating patients; (15) decisions on the future of VA hospitals, whether they mean closing hospitals or opening them to nonveterans, have significant implications for veterans, VA employees, affiliated medical schools, community hospitals, and taxpayers; and (16) therefore, Congress and the administration must have sufficient information for properly assessing the potential effects of VA's health care system changes on all stakeholders.
ISTEA made it U.S. policy to develop a national intermodal transportation system that “provides the foundation for the Nation to compete in the global economy, and will move people and goods in an energy efficient manner.” In terms of freight transportation, an intermodal shipment is one that moves by two or more modes during a single trip. Although intermodalism is not defined in ISTEA, an example of an intermodal freight project would be a port improvement project that facilitates the transfer of cargo from ships to trucks or rail. However, DOT has not established an all-encompassing definition of what constitutes an intermodal freight project. While ISTEA required that DOT develop a data base that included investments in public and private intermodal transportation facilities, it contained no requirement for states to use a specific category of funds for intermodal projects. The majority of ISTEA funding for surface transportation improvements is provided to states through such categories as the Surface Transportation Program or the National Highway System, which have historically been directed to highway construction. However, ISTEA authorized specific “priority intermodal” projects, some of which were freight related. Title V of ISTEA established within DOT an Office of Intermodalism and required that the Director of this office, through the Bureau of Transportation Statistics (BTS), develop, maintain, and make publicly available a data base that includes “information on public and private investment in intermodal transportation facilities and services.” To date, the data base on investment in intermodal facilities and services has not been developed, and comprehensive data on public and private investment in intermodal transportation facilities and services do not exist. Moreover, DOT does not track ISTEA expenditures on intermodal facilities. DOT officials gave us the following reasons why they have not developed the data base: (1) DOT has a limited role in managing how funds are allocated because states are given primary responsibility for allocating funds according to broad program categories; (2) the term “intermodal” is subject to interpretation, and projects may not be identified consistently among states; and (3) intermodal projects may be financed from multiple sources, including federal, state, and local funds, and it may be difficult to identify ISTEA funds used for this purpose. Nonetheless, DOT has not sought legislative relief from this ISTEA requirement. States have provided DOT with detailed information about the use of ISTEA funds on a project-by-project basis; this information has been entered into DOT’s computer information system. In an attempt to identify the extent to which states used ISTEA funds for projects that facilitated intermodal freight movement, we reviewed thousands of pages of DOT data and interviewed public sector officials. Our review was based on identifying the use of the term “intermodal” in project descriptors. We verified with DOT officials that each project we identified involved the movement of freight. We found that only 10 states used ISTEA funds for intermodal freight projects. A total of 23 projects obligated $35.6 million from two ISTEA funding categories: the Congestion Mitigation and Air Quality Improvement (CMAQ) program and the Surface Transportation Program (STP). We also reviewed the status of ISTEA-designated “priority intermodal” projects (of the 51 projects designated in legislation, 20 were freight related, according to DOT). DOT officials said that $191.8 million was provided for these 20 freight-related projects in 9 states.
As of December 31, 1995, $68.4 million, or 36 percent, had been obligated by the states for these projects. Our review of available information in the CMAQ, STP, and “priority intermodal” funding categories found that federal intermodal freight project funding obligated in roughly the first 4 of ISTEA’s 6 fiscal years totaled $104 million (the $35.6 million obligated for the 23 CMAQ and STP projects plus the $68.4 million obligated for the “priority intermodal” projects). Because most intermodal freight movement is done by private companies, it is likely that the private sector would be responsible for a large portion of investment in intermodal freight facilities. While some limited information on funding for intermodal projects can be discerned from available information within DOT, DOT has not collected in a data base public and private investment information on intermodal facilities and services, as required. Without such data on funding for intermodal freight projects, decisionmakers cannot ascertain if progress is being made toward ISTEA’s goal of improving intermodal connections. In our review of how several local and regional areas are attempting to address intermodal freight transportation needs, we found that metropolitan planning organizations (MPOs) have been given considerable responsibility for a wide range of transportation concerns. ISTEA requires not only that MPOs increase public involvement in the planning process but also that MPO officials prioritize projects and determine their financial feasibility before submitting them to state transportation officials for inclusion in the statewide transportation improvement plan. In addition to these broader concerns about transportation planning, ISTEA specified 15 factors that MPOs were to consider in preparing local plans, two of which relate to intermodal freight: (1) “methods to enhance the efficient movement of freight” and (2) “access to ports, airports, intermodal transportation facilities, major freight distribution routes . . . .” A broader perspective on the extent to which MPOs consider freight issues in their planning activities is provided in a survey that the National Association of Regional Councils (NARC) conducted in 1993 with the nation’s 342 MPOs. Of the 259 MPOs that responded to that survey, 78 (30 percent) reported conducting freight-related planning activities. MPOs reported that they took into account the following specific aspects of freight-related planning (which have implications for intermodal freight movement) in performing their activities: truck (65 MPOs); rail (56 MPOs); air (40 MPOs); maritime/port facilities (27 MPOs); and border crossings (17 MPOs). In 1995, a survey of how MPOs deal with freight issues was conducted by the Freight Stakeholders National Network (a group of industry associations). According to that survey, 90 percent of the nation’s largest MPOs that responded reported lacking sufficient data to conduct adequate freight planning. While the NARC survey results indicate that intermodal freight-related planning is not widespread among responding MPOs, they do show that freight issues are being considered. According to public and private officials we interviewed, the transition to an intermodal planning environment is a new way of thinking that is taking time to percolate through the public sector. One reason for this is that planning has traditionally been done by a single mode of transportation (e.g., highways), and planning has been structured in that manner. Another is that intermodal freight innovations have often originated in the private sector.
Consequently, much of the intermodal expertise resides with private officials. Several public sector officials mentioned that ISTEA planning requirements spurred them to develop intermodal planning tools. These same officials found that developing these tools required time and money. For example, California’s DOT officials stated that their intermodal management system took 2.5 years to complete and cost $1.9 million in outside contracts. Of the 259 MPOs responding to the NARC survey, 39 percent reported having an ISTEA intermodal management system. In our visits to states that have local and regional areas that handle large volumes of freight—California, Illinois, New York, and Texas—public and private officials told us how intermodal freight bottlenecks near ports and rail yards can affect traffic and freight movement. In part, our discussions with these officials focused on the implications of such bottlenecks for goods movement at the local, regional, and national levels as well as on specific projects proposed to address such problems. However, in these visits we did not evaluate intermodal freight projects. Besides handling large volumes of passenger and intermodal freight traffic, Chicago and Los Angeles are also crucial links in what has been termed the nation’s “land bridge” between Asia and the northeastern United States. The following examples outline the intermodal problems each city faces and the short- and longer-term solutions proposed to address them. Chicago is a major hub for national and international freight movement because it is where the nation’s eastern and western rail carriers meet. Nearly half of the nation’s intermodal rail shipments originate, terminate, or connect there. The Chicago Area Transportation Study, the local MPO, has identified 23 major intermodal (rail/truck) yards plus 2 lumber transfer points, 3 automobile transloaders, and 5 clusters of freight facilities that serve ships in the Chicago metropolitan region. The “typical” truck-rail intermodal freight facility generates considerable activity, with over 200,000 container transfers from rail to truck or vice versa per year; the largest facility has 670,000 transfers, which represents a reported average of 1,000 to 1,400 trucks entering and leaving the facility a day (a rough conversion from annual transfers to daily truck trips is sketched at the end of this discussion). It is a year-round, around-the-clock industry. According to the MPO, the resulting traffic contributes substantially to local and regional traffic congestion and is concentrated on a small number of routes between the rail yards. Such congestion can impede national and international freight movement, according to industry officials. To address these problems in the short term, Chicago’s MPO officials are seeking funds to permit improved connections between intermodal facilities and nearby highways that are part of the recently designated National Highway System. The MPO has not yet developed intermodal freight projects with ISTEA funds, with the exception of one CMAQ project approved in 1995 to make improvements at a major rail yard. However, according to an MPO official, a call for projects in February 1996 resulted in 47 new project proposals. In 1992 we reported that the intermodal freight traffic problems facing Chicago may require a longer-term solution, such as a multiuser intermodal terminal located near or in the city that would permit rail-to-rail connections, thus eliminating crosstown drayage.
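To put the Chicago transfer volumes in perspective, the following sketch converts a yard’s annual container lifts into an approximate daily truck-trip count. The calculation is illustrative only; the 365-day operating year and the assumption of roughly 1.5 lifts per truck visit (some trucks both drop off and pick up a container) are ours, not the MPO’s.

```python
# Rough conversion from annual rail-truck container transfers to daily trucks.
# Assumptions (ours, for illustration): year-round operation (365 days) and
# about 1.5 container lifts per truck visit, since some trucks both deliver
# an inbound box and pick up an outbound one.

def trucks_per_day(annual_transfers: int,
                   operating_days: int = 365,
                   lifts_per_truck_visit: float = 1.5) -> float:
    transfers_per_day = annual_transfers / operating_days
    return transfers_per_day / lifts_per_truck_visit

print(round(trucks_per_day(670_000)))  # about 1,224 trucks a day
```

At 1.5 lifts per visit, the largest yard’s 670,000 annual transfers work out to roughly 1,200 trucks a day, consistent with the reported range of 1,000 to 1,400.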
The Southern California Association of Governments, the Los Angeles regional MPO, faces what it termed a problem of “national significance.” This region has the nation’s largest concentration of intermodal freight container movements, with 20,000 truck trips and 29 train trips per day from the port area to Los Angeles intermodal facilities (25 percent of all trade entering the United States by sea passes through the Los Angeles and Long Beach ports). As a result, the region experiences traffic congestion that is linked to air quality problems and passenger and freight delays. The proposed intermodal solution to these problems is called the “Alameda Corridor” project. This project involves consolidating 90 miles of rail track owned by 3 different rail companies into one 18-mile rail corridor to transport intermodal freight from the Los Angeles and Long Beach ports to distribution centers in Los Angeles. This is expected to ease traffic congestion by taking trucks off the road and eliminating delays at rail crossings. The project, expected to be completed in 2001, is budgeted at $1.8 billion (ISTEA-authorized “priority intermodal” funds: $55.4 million). Shippers we met with supported the project, noting that these ports are significant links with Pacific Rim nations as well as with emerging Latin American markets. They indicated that their companies are experiencing 4 to 7 percent annual growth in shipping volume through these ports. To meet the growth in shipping, planners at the Southern California Association of Governments are already thinking beyond the Alameda Corridor. Specifically, they are examining options to consolidate three rail freight lines operating between downtown Los Angeles intermodal facilities, where the corridor will terminate, and the eastern end of the Southern California Basin (the San Bernardino area). According to a 1995 MPO-commissioned report, this consolidation is motivated by two broad public policy objectives: to (1) enhance the region’s ability to manage the flow of international trade goods and (2) reduce emissions resulting from idling vehicles at railroad grade crossings. However, from the perspective of the rail and shipping companies whose operations would be influenced by consolidation, these public policy objectives must be balanced against the potential loss of control over shipping schedules. In addition to the planning done by MPOs, public and private sector officials are also identifying and addressing intermodal freight movement issues that transcend state boundaries. For instance, at a roundtable discussion we had with 12 public and private officials at the Port Authority of New York and New Jersey headquarters, an official from the New Jersey DOT suggested that while ISTEA was good at delegating authority to local planning officials, for some transportation problems it might be better to view the nation as a series of regions. We identified several initiatives where states are attempting to incorporate a regional perspective into the planning process by identifying freight concerns that cross state lines. The recently formed Western Transportation Trade Network, composed of 16 western states, is identifying high-priority freight (air, land, rail, and marine) corridors and intermodal facilities throughout the western United States based on input from officials from state DOTs, MPOs, and the private sector.
This information will be used to assess the performance of the region’s freight corridors and intermodal facilities as well as to coordinate a regional approach to addressing emerging intermodal freight needs. The New England Transportation Initiative (NETI), made up of six northeastern states, has been cited by DOT as an example of how regional intermodal planning can function. NETI’s goals include improving the mobility of persons and goods in the region and promoting its economic competitiveness. In visits to several local and regional areas that handle a large volume of freight, officials emphasized two impediments that hinder intermodal freight transportation planning. One concerns whether public sector officials should have access to data on freight movement that may be considered proprietary. The other concerns differing planning horizons—the private sector’s tend to be shorter term, while the public sector’s often require longer time lines to initiate projects. We also found examples of efforts to bring together public and private officials to identify and address specific problems concerning intermodal freight transportation. Two DOT publications discuss other intermodal freight impediments not discussed in this report. These include operational problems at intermodal facilities (such as incompatibility among freight tracking systems); regulatory and institutional barriers (the lack of standardized transportation regulations); and financial constraints (inadequate funding for intermodal improvements). Some transportation companies may consider specific data on private freight movement to be proprietary. However, public planners can use these data to identify heavily traveled highways or intersections in order to mitigate intermodal freight bottlenecks. A representative from an ocean shipping company we met with in southern California explained why he believes industry officials are sometimes reluctant to disclose data. He said that when public officials ask for “everything” on a subject (such as port use by a particular shipper), rather than specific information, company officials are unsure how the information may be used. He suggested that public sector requests for such information should be more focused; this might allay private sector doubts about how it would be used. In some cases, public sector officials compile data on intermodal freight activity from a combination of inputs. For example, in Chicago, the MPO developed figures on transfers at key intermodal yards through various means, including traffic counts, direct observations, and informal interviews with workers and gate guards, and then presented the information to company executives for verification. This information was used to understand how intermodal freight shipments can affect local traffic patterns. A second impediment to improving intermodal freight transportation concerns differing public and private sector planning horizons. According to several MPO officials, their planning horizon extends over longer-term periods, such as 25 years. Such a planning time frame is necessary to conduct impact studies or obtain funding. Private officials we met with in visits to California, New York, and Texas, on the other hand, spoke of the difficulty of thinking long term when short-term needs are pressing. The freight industry is also subject to fluctuations in demand for its services because of economic conditions.
Likewise, ongoing business mergers sometimes make it difficult for private officials to predict their company’s infrastructure needs in 15 to 20 years because they are unsure whether their company will be active at that time in a particular market. An example that highlights the problem involves Chicago. There, MPO officials commented that when a major shipping company relocated from the downtown area to a nearby suburb where rail service would be more convenient, they were concerned about how the move would potentially influence regional traffic patterns. In light of the volume of goods that is expected to move through the company’s new facility and its likely impact on future traffic patterns, the MPO’s longer-term planning task was complicated. In this case, the shipping company’s move was prompted by its current business situation, while MPO officials had to plan for how the company’s move would influence the region’s long-term intermodal freight needs. Because intermodal facilities are a nexus where public and private interests intersect, bringing these groups together to plan or cooperate on a project that neither could complete independently has helped achieve intermodal goals. In visits to four areas that handle large volumes of freight, we found several examples of such efforts: The Alliance Facility, located north of Fort Worth, Texas, is a 7,500-acre intermodal transportation complex that began as a partnership of city, state, and federal governments; private businesses; and individuals (total federal investment: $55 million). Key to this effort was federal funding for construction of a 9,600- by 150-foot runway that serves industrial, business, and general aviation users (private aircraft) rather than commercial airliners. The Alliance complex also has an intermodal rail terminal that the Santa Fe railroad built. This rail facility can perform an estimated 300,000 rail-to-truck transfers per year. New highway interchanges and access routes serving the facility and intermodal terminal have been built, financed by the Texas DOT and private investors. The Alliance complex opened in 1989, prior to the enactment of ISTEA. According to a representative of Alliance Air Services, the complex has experienced increased industrial development since 1994. Additional business is expected in 1997, when a major shipping company is scheduled to open a southwestern hub at Alliance. The Chicago Area Transportation Study’s Intermodal Advisory Task Force convenes regular meetings between public and private officials where major issues are discussed. One tool used to help focus members’ attention on bottlenecks in intermodal transportation is a computer-based geographic information system designed to highlight intermodal freight problems and then help the members establish priorities for addressing them. The National Freight Partnership, coordinated by DOT, consists of public and private representatives who work at the national level to identify major bottlenecks in the nation’s transportation system. The Partnership provides a forum for private sector officials concerned with freight movements to apply their expertise to national problems and establish a dialogue with public sector leaders. American Trucking Associations representatives we met with told us about the recently formed Freight Stakeholders National Network. The Network is made up of eight national associations that represent the freight transportation modes and manufacturers.
Through this effort, they hope to identify and build support for transportation improvements, provide policy support and technical resources to make local freight coalitions with MPOs successful, and promote best practices for dissemination to other cities. We recommend that the Secretary of Transportation (1) establish a definition of intermodal freight projects and (2) ensure that the data base on intermodal investments required by title V of ISTEA is developed and maintained in accordance with the statute. In commenting orally on a draft of this report, DOT officials indicated that they (1) have collected some basic information on where and how goods were shipped in the United States, (2) have efforts underway to collect information on long distance passenger travel by all modes, and (3) are currently developing information on roads that link intermodal facilities and the National Highway System. DOT officials acknowledged that the investment data they are collecting do not meet the requirements established by ISTEA, emphasizing the difficulty inherent in collecting information on private investment in intermodal facilities, which is part of the ISTEA requirement. We believe a reasonable approach toward meeting the ISTEA requirement would be to first establish a definition of intermodal freight projects and develop the data base on public investment, and then incorporate data on private investment that is already available or could be readily ascertained. To obtain information for this report, we (1) reviewed ISTEA and its legislative history; (2) interviewed DOT headquarters and regional officials; (3) interviewed state, local, and private sector officials; (4) interviewed representatives of major transportation organizations; (5) reviewed DOT data from fiscal years 1992 to 1995 showing the funding status of ISTEA-authorized priority intermodal projects; and (6) reviewed volumes of DOT data highlighting projects funded with the two categories of ISTEA money that DOT officials believed states would most likely use to fund intermodal freight projects. We identified projects based on the use of the word “intermodal” in project descriptions. We did not independently verify DOT data, but we confirmed that these were intermodal freight projects by interviewing DOT officials at headquarters and in selected DOT regions. Our findings may not be comprehensive because of limitations in DOT data. We visited four states—California, Illinois, New York, and Texas—that transportation officials and reports identified as having local and regional areas that handle large volumes of intermodal freight and as having considered projects to address intermodal freight problems. Our site visits included interviews with state and local government transportation officials and meetings with private officials to discuss their perspectives on intermodal transportation. In addition, in each state we visited intermodal rail, truck, or port facilities to see firsthand intermodal problems, bottlenecks, and areas targeted for specific projects to address these problems. However, in these visits we did not evaluate existing or potential intermodal freight projects. To obtain additional information on local planning efforts, we analyzed data from the National Association of Regional Councils’ 1993 national MPO survey. We also reviewed state transportation plans and other materials relevant to intermodal freight transportation planning and attended professional meetings where intermodal freight issues were discussed.
Our state visits were complemented by interviews in Washington, D.C., with a range of individuals at DOT, including officials from the following offices: DOT’s Office of Intermodalism, the Federal Highway Administration (FHWA), the Maritime Administration (MARAD), and the Federal Railroad Administration. We also interviewed officials representing the Transportation Research Board, the Intermodal Association of North America, the American Trucking Associations, and the American Association of State Highway and Transportation Officials. Moreover, we met with several private sector transportation consultants. In addition, we reviewed recent literature on intermodal transportation. We conducted our review from February 1995 to February 1996 in accordance with generally accepted government auditing standards. More detailed information on how states used ISTEA funds for intermodal freight transportation is presented in appendix I. Information on trends that influence intermodal transportation is presented in appendix II. As arranged with your office, unless you publicly announce its contents earlier, we plan no distribution of this report until 14 days after the date of this letter. We will then send copies of this report to the Secretary of Transportation as well as other interested parties. Copies will also be made available to others on request. Please contact me at (202) 512-8984 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. Based on our review of Department of Transportation (DOT) data and interviews with public and private sector officials, we attempted to identify intermodal freight projects financed with Intermodal Surface Transportation Efficiency Act (ISTEA) funds not specifically targeted for priority projects (see table I.1). The 23 projects in 10 states represented a range of improvements to facilitate intermodal freight transportation. Some projects were funded with as little as $40,000, others with as much as $11 million. For instance, state officials in New York used $6 million in ISTEA funds to purchase a barge and to improve operations between the Red Hook container barge terminal in Brooklyn, New York, and Port Elizabeth, New Jersey. These projects were expected to enhance the competitiveness of the bistate port facilities as well as to eliminate an estimated 54,000 truck trips from the major regional highways of New York and New Jersey annually, thus reducing traffic congestion and improving air quality. Section 1108 of ISTEA authorized funds to various states for “priority intermodal projects,” commonly referred to as “demonstration” projects. For projects specifically related to improving intermodal freight transportation, ISTEA authorized $191.8 million for 20 projects in 9 states. The projects include a variety of improvements to interchanges and other roads in locations such as the Alameda Corridor in southern California and improvements to airport access in such cities as Detroit, Michigan; Pittsburgh, Pennsylvania; and Jackson, Mississippi. According to a DOT official, the priority projects are in various stages of development. The official told us that ISTEA authorized only enough money to start the projects and that it is the responsibility of each state to obtain funding to complete them. While some states have provided financing for these projects, others have not.
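The obligation rates reported here, and broken out by state in table I.2, reflect simple arithmetic: funds obligated divided by contract authority. The sketch below, in Python, reproduces the aggregate figure for the priority intermodal freight projects; it is illustrative only, and the per-state detail appears in the table.

```python
# Illustrative sketch: the obligation-rate arithmetic behind the figures in
# this report. The aggregate amounts are from the report text; no other data
# are assumed.

def obligation_rate(obligated: float, contract_authority: float) -> float:
    """Share of contract authority actually obligated by the states."""
    return obligated / contract_authority

# Priority intermodal freight projects, as of December 31, 1995:
rate = obligation_rate(obligated=68.4e6, contract_authority=191.8e6)
print(f"{rate:.0%}")  # 36%
```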
Table I.2 contains a breakout of the total number of priority intermodal freight projects in each state, the total contract authority for the projects, and the amount of funds obligated, as of December 31, 1995. Several factors have transformed the nation’s intermodal freight transportation industry over the past 20 years; these factors are expected to influence it in the future. Among them are (1) the need to reduce costs and streamline production using improved inventory management, (2) the partial deregulation of the U.S. rail and trucking industries, and (3) the use of computer-based technologies. Demands will continue to be placed on the nation’s transportation system for efficient freight movement so that companies can compete in the global marketplace. For instance, the time it takes warehouses to fill orders is expected to decrease by 15 to 20 percent during the next 5 years, and transit times are expected to be reduced by between 5 and 10 percent. Moreover, inventory turnover is expected to increase by about 10 percent, and the percentage of products shipped “just in time” is expected to grow from 28 to 39 percent. According to the Intermodal Association of North America and the National Industrial Transportation League, the estimated intermodal market share of trailerload shipments moving 500 miles or more increased from 10 percent in 1991 to 18 percent in 1994 and is projected to rise to 25 percent by 1997. Overall, however, trucking is currently the most frequently used freight transportation mode because trucks provide convenient pickup and delivery of shipments. Partial deregulation of the transportation industry in 1980 has also influenced the intermodal freight industry. One outcome of deregulation that continues to influence the freight industry is the strategic alliances that carriers have formed to capitalize on each mode’s strengths. For example, truckload carriers provide door-to-door access to businesses, while rail carriers—particularly those hauling double-stacked intermodal containers—provide low-cost, long-distance service. A 1995 study discussed what these transport alliances portend for intermodal shipping, taking into consideration a business environment that stresses flexibility in suppliers and product lines, more frequent shipments of goods in smaller lot sizes, and a more diverse mixture of commodities in each shipment. The study concluded that the use of intermodal containers will expand for both domestic and international shipments. Apart from noting consolidation among domestic companies, shipping officials we interviewed in southern California mentioned ongoing mergers among the world’s major ocean carriers. As consolidation continues, companies are seeking greater economies of scale by purchasing ships capable of carrying larger loads. While current ships carry 3,000 to 4,000 20-foot equivalent container units (TEU), shippers said that 5,000-TEU vessels are on order. The implication is that port gate structures will have to be improved in order to accommodate the larger vessels. Further, the loads these larger ships will carry will place increased demands on the infrastructure surrounding ports because of the pressure to unload ships quickly and move cargo to its destination. Technological innovations linked to computers and satellites have also influenced how intermodal freight shipments are handled.
These innovations include bar coding that allows shipments to be verified and tracked, electronic data interchange that permits on-line transmission of business data and documents, and in-vehicle navigation systems that identify the most direct routes to avoid congestion and delays. Improved intermodal freight transportation can result in economic benefits such as lower transportation costs. This, in turn, can enhance the productivity and competitiveness of U.S. businesses. According to transportation planners, other benefits from intermodalism include improved air quality and environmental conditions through reductions in energy consumption and traffic congestion. Other benefits might include increased employment from jobs associated with constructing intermodal facilities and greater employment at intermodal facilities themselves.
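To make the tracking innovations mentioned above concrete, the sketch below models a minimal, hypothetical shipment-status record of the kind exchanged through bar coding and electronic data interchange. The field names are ours, invented for illustration; real EDI transaction sets are far more detailed.

```python
# Hypothetical sketch of a shipment-status record assembled from bar-code
# scans at each intermodal handoff. Field names are illustrative only and do
# not follow any particular EDI standard.

from dataclasses import dataclass

@dataclass
class ShipmentScan:
    container_id: str   # read from the bar code at each handoff
    location: str       # facility where the scan occurred
    mode: str           # "ship", "rail", or "truck"
    timestamp: str      # time of the scan

def routing_history(scans: list[ShipmentScan]) -> list[str]:
    """Reconstruct a container's intermodal routing from its scan history."""
    return [f"{s.timestamp} {s.location} ({s.mode})" for s in scans]
```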
Pursuant to a congressional request, GAO reviewed intermodal freight transportation issues, focusing on: (1) the Department of Transportation's (DOT) efforts to track how states use Intermodal Surface Transportation Efficiency Act (ISTEA) funds to facilitate intermodal transportation; (2) the nature and extent of ISTEA funds used by states for intermodal freight projects; (3) how some local and regional areas that handle large volumes of freight have considered intermodal freight transportation issues as part of their planning processes; (4) impediments some areas face in improving intermodal freight transportation; and (5) trends in intermodal freight transportation. GAO found that: (1) DOT has not developed the statutorily required database on public and private intermodal transportation investments, or tracked how states use ISTEA funds for such projects; (2) DOT says that its limited role in allocating funds, states' inconsistent identification of projects, and intermodal projects' multiple financing sources make establishing the database difficult; (3) as of September 1995, 10 states had obligated about $35.6 million in ISTEA funds for 23 intermodal-freight-related projects; (4) as of December 1995, 9 states had obligated $68.4 million, or 36 percent of the $191.8 million in ISTEA funds authorized, for 20 priority intermodal freight projects; (5) the total amount of funds obligated for intermodal freight projects through the first four ISTEA fiscal years equals less than 1 percent of ISTEA funds apportioned to the states for highways and other nontransit infrastructure projects during the same period; (6) metropolitan planning organizations have to balance intermodal freight issues with a wide range of other transportation needs; (7) public transportation planners lack experience and planning tools for intermodal transportation, but states are slowly developing such expertise and tools; (8) local and regional planners are addressing problems specific to their areas; and (9) impediments to improving intermodal freight transportation include obtaining necessary proprietary information on freight movements and coordinating public and private-sector planning, but public-private partnerships may help overcome such impediments.
DOD defines a UAS as a powered aircraft that does not carry a human operator; can be land-, air-, or ship-launched; uses aerodynamic forces to provide lift; can be autonomously or remotely piloted; can be expendable or recoverable; and can carry a lethal or nonlethal payload. Generally, UAS consist of the aircraft; a flight control station; information retrieval or processing stations; and, sometimes, wheeled land vehicles that carry launch and recovery platforms. UAS carry a payload including sensors for intelligence, surveillance, or reconnaissance to provide real-time intelligence to battlefield commanders. When used on an intelligence, surveillance, or reconnaissance mission, the aircraft generally carries a sensor payload capable of detecting heat or movement or of taking photographs or video of ground-based targets. This information is then transmitted to ground stations or satellites via a communications payload for retransmission to forces needing the information to support operations. Unmanned aircraft can also be armed for offensive strike missions and be used to attack ground-based targets. UAS require adequate intra- or intertheater communications capabilities using the electromagnetic spectrum to permit operators to control certain aircraft and to permit communications equipment to transmit the information obtained by the sensor payload to ground commanders or other users. Effective joint operations are critical because combatant commanders operate in a joint environment by applying military force appropriate for their operational circumstances using the unique capabilities of each of the services. In a changing security environment, joint operations are becoming more important given the complex nature of military operations. This importance is being driven by the combatant commands’ need to combine the capabilities of multiple services to address the global threat, as well as by the growing interdependence of capabilities among the services. Moreover, effective joint operations permit combatant commanders to leverage the capabilities associated with each service to accomplish operational missions. As with manned aircraft, UAS provide another capability that can be applied by combatant commanders in joint operations. Initially, UAS were seen as complementary systems that augmented existing warfighting capabilities. However, UAS are also evolving into more significant roles, for which they can provide primary capability. For example, the Global Hawk UAS may eventually replace the U-2 reconnaissance aircraft, and the Unmanned Combat Aerial System may eventually perform electronic warfare missions currently performed by the EA-6 Prowler aircraft as well as offensive deep strike missions. Moreover, UAS are figuring prominently in plans to transform the military into a more strategically responsive force and are expected to be an integral part of this information-based force. For example, the Army is developing the Future Combat Systems and a new generation of unmanned aircraft and other systems to enable information to flow freely across the battlefield. Since 2001, DOD has significantly increased its planned expenditures for UAS and associated systems, and, more recently, the systems have continued to be heavily used in Afghanistan and Iraq. In fact, over 10 different types of UAS have been used in Afghanistan and Iraq. According to the UAS Planning Task Force, as of August 2005, DOD had approximately 1,500 unmanned aircraft in Iraq and Afghanistan.
In addition, the budget request for UAS grew significantly between fiscal years 2001 and 2005, from about $363 million to about $2.2 billion, and further growth is likely. These figures do not include any supplemental appropriations. Fewer than half of the UAS in Iraq and Afghanistan at the time of our report had reached full-rate production or initial operating capability. The remainder were still considered developmental and consequently were covered by DOD Directive 5000.1, The Defense Acquisition System, and DOD Instruction 5000.2, Operation of the Defense Acquisition System, both issued in May 2003. The directive mandates that systems, units, and forces shall be able to provide and accept data, information, materiel, and services to and from other systems, units, and forces, and shall effectively interoperate with other U.S. forces, among other things. The instruction implements the directive and is intended to provide DOD officials with a framework for identifying mission needs and the technology to meet those needs as the basis for weapons system acquisitions. Finally, the 2002 Roadmap emphasizes the need for interoperable unmanned aircraft and payloads by identifying a number of existing standards that are to be complied with in systems’ development in such areas as common data links, interoperable data links for video systems, and the electromagnetic spectrum frequencies that should be used for data transmission under a variety of circumstances. In March 2004, we reported that DOD’s approach to planning for developing and fielding UAS did not provide reasonable assurance that its investment would facilitate efficient integration into the force structure and avoid interoperability problems, although DOD had taken some steps to improve UAS program management. For example, in 2001, DOD established the Joint Unmanned Aerial Vehicles Planning Task Force (now known as the UAS Planning Task Force) in the Office of the Undersecretary of Defense (Acquisition, Technology, and Logistics). To communicate its vision and promote commonality of UAS, the Task Force published the 2002 Unmanned Aerial Vehicle Roadmap, which described current programs, identified potential missions, and provided guidance on emerging technologies. While the Roadmap demonstrated some elements of a strategic plan, neither it nor other key documents represented a comprehensive strategic plan to ensure that the services and DOD agencies develop systems that complement each other, perform all required missions, and avoid duplication. Moreover, the Task Force served in an advisory capacity to the Undersecretary but had little authority to enforce program direction. For their part, service officials told us that they developed service-specific planning documents to meet their own needs and operational concepts without considering those of other services or the Roadmap. Consequently, we concluded that without a strategic plan and an oversight body with sufficient authority to enforce program direction, DOD risked interoperability problems, which could undermine joint operations. Thus, in our 2004 report, we recommended that DOD establish a strategic plan and assign an office the authority and responsibility to enforce the program direction communicated in the plan in order to promote joint operations. DOD partially concurred with our recommendation to establish a strategic plan and nonconcurred with our recommendation to assign an office with authority and responsibility to enforce program direction.
DOD asserted that the Undersecretary had sufficient authority to integrate UAS into joint operations and that the Task Force had been established to promote payload commonality, develop and enforce interface standards, and ensure multiservice coordination. Moreover, DOD indicated that the Joint Capabilities Integration and Development System process focuses on developing integrated joint warfighting capabilities and thus would avoid the interoperability problems that we believed were likely. DOD has achieved certain operational successes with UAS, including collecting intelligence with unmanned aircraft sensor payloads and conducting offensive strike missions with weapons payloads in Afghanistan and Iraq. Nonetheless, U.S. forces employing UAS have encountered communications interoperability problems, payload interoperability problems (called payload commonality problems), electromagnetic spectrum constraints, and inclement weather groundings of unmanned aircraft during recent operations. While DOD has acknowledged the need to improve UAS interoperability and address bandwidth and weather constraints that undermine unmanned aircraft operations, little progress has been made. DOD has achieved certain operational successes from its use of a variety of unmanned aircraft and their sensor, communications, and armaments payloads. In operations in Iraq or Afghanistan since 2002, U.S. forces have used UAS in integral roles on joint or service-specific intelligence, surveillance, reconnaissance, and offensive strike missions. For example: The Air Force used its Predator unmanned aircraft with sensor or armaments payloads on over 5,800 sorties totaling more than 80,000 hours of flight on a variety of intelligence, surveillance, and reconnaissance; close air support; armed strike; and other missions in Iraq and Afghanistan from 2002 through 2005. For example, the Predator’s sensor and communications payloads have provided video images to ground forces to support their operations or to strike enemy targets with Hellfire missiles. Certain Air Force units used the Global Hawk unmanned aircraft’s sensor payloads to identify 55 percent of the time-critical targets to defeat enemy air defenses in Iraq in March and April 2003. To enhance joint operations, the Air Force developed procedures and tactics to allow the Global Hawk’s sensor payloads to provide more direct support to ground force missions. In 2004, an Army force used its Hunter unmanned aircraft and sensor payload to locate an enemy antiaircraft artillery weapon that had been firing at coalition force aircraft. The Air Force then sent a Predator armed with a Hellfire missile to attack the enemy weapon. Within minutes of the Predator strike, the Army unit sent its Hunter back to transmit information needed for battle damage assessment. In 2004, an Army force operating an I-Gnat unmanned aircraft in Iraq detected a potential ambush of Marine Corps forces, and the Army unit used information from the I-Gnat’s sensor payload to successfully adjust mortar fire onto the enemy position. Recently, Air Force, Army, and Marine Corps forces have used their unmanned aircraft and their sensor and communications payloads to locate numerous targets in Iraq and Afghanistan, permitting U.S. forces to destroy the targets.
While DOD has achieved certain successes with unmanned aircraft and their payloads, certain interoperability challenges have also emerged during recent operations, despite DOD directives requiring interoperability and the emphasis on interoperability in the 2002 Roadmap. First, DOD Directive 5000.1 specifies that systems, units, and forces shall be able to provide and accept data and information to and from other systems and shall effectively interoperate with other U.S. forces. Second, the Roadmap specifies five data standards for formatting data, a communication standard to ensure adoption of a common data link, and a variety of file transfer, physical media, and other standards applicable to unmanned aircraft or their sensor and communications payloads. However, the 2005 edition of the Roadmap indicates that the detailed standards for interoperability have not been developed. In effect, the absence of such standards has led to the development of UAS that are not interoperable. In operations in Afghanistan and Iraq, interoperability problems have emerged. Specifically, during operations, DOD has learned that unmanned aircraft sensor and communications payloads and ground stations were not designed to common data standards and thus are not interoperable, even within a single service in certain circumstances. For example: Army forces operate both the Shadow and Hunter unmanned aircraft and associated ground stations but discovered that these systems are not interoperable. Specifically, while the Shadow’s sensor and communications payload is able to transmit information to its own ground station, it is unable to transmit to a Hunter ground station. Similarly, the Hunter’s sensor and communications payloads are able to transmit to a Hunter ground station but not to a Shadow ground station. Onward transmission to forces needing the information is equally constrained if those forces do not have compatible equipment for receiving the information. As a result, the Army has missed an opportunity to effectively leverage the technology inherent in either system for the benefit of operational forces that need the information. At the time of our review, the Army had begun an initiative to make the Shadow and Hunter unmanned aircraft ground stations compatible with either aircraft. When communication systems are incompatible, operating forces may be prompted to operate their own UAS, thus increasing the number of systems operating in the same area. To permit the sharing of tactical intelligence obtained by unmanned aircraft sensor payloads, the services or combatant commands have developed certain technical patches permitting compatibility but slowing data transmission. As we pointed out in 2003, in some cases, DOD needs hours or days to transmit information to multiple services. However, slow intelligence data transmission can undermine U.S. forces’ ability to attack time-critical targets and can allow the targets to escape. U.S. Central Command acknowledges that timely data dissemination is critical to combat operations. Communications interoperability problems are long-standing. In 2001, we reported that each of the military services plans, acquires, and operates systems to meet its own operational concepts but not necessarily the requirements of joint operations, in spite of the DOD directive requiring interoperability.
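The kind of mismatch described above can be illustrated with a small, entirely hypothetical sketch: two ground stations built to different, service-specific data formats cannot exchange sensor data directly, so a translation patch is needed at each seam, and each patch adds processing time. The formats and field names below are invented for illustration; they are not the actual Shadow or Hunter data link specifications.

```python
# Hypothetical illustration of noninteroperable data formats. Station A uses
# a packed binary frame; station B expects a comma-separated text record.
# Neither can decode the other's traffic without a translation patch.

def station_a_decode(frame: bytes) -> dict:
    """Station A format (invented): two 4-byte signed integers, big-endian,
    carrying latitude and longitude in millidegrees."""
    lat = int.from_bytes(frame[0:4], "big", signed=True) / 1000
    lon = int.from_bytes(frame[4:8], "big", signed=True) / 1000
    return {"lat": lat, "lon": lon}

def station_b_decode(record: bytes) -> dict:
    """Station B format (invented): ASCII text, 'lat,lon' in degrees."""
    lat, lon = record.decode("ascii").split(",")
    return {"lat": float(lat), "lon": float(lon)}

def patch_a_to_b(frame: bytes) -> bytes:
    """The after-the-fact translation the services have had to build; every
    such hop adds processing, which is one way patches slow transmission."""
    fix = station_a_decode(frame)
    return f'{fix["lat"]},{fix["lon"]}'.encode("ascii")
```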
In our 2004 unmanned aerial vehicle report, we reported that the services had engaged in little coordination in developing their unmanned aerial vehicle roadmaps and that they did not view the UAS Planning Task Force’s 2002-2027 Roadmap as a strategic plan or an overarching architecture for integrating UAS into the force structure. In the absence of adequately developed and implemented standards, and in contravention of the DOD guidance, the services have continued to develop their unmanned systems to their own standards, without regard to those of the other services. At the same time, DOD continues to develop and field UAS without adjusting the standards, likely causing the problem to become even more widespread. Moreover, the UAS used in current operations were built before the Joint Capabilities Integration and Development System became fully operational, and thus that system has had little impact on the problem. Consequently, the information collected cannot always be quickly transmitted to users needing it, undermining joint operations and potentially leading to future costly initiatives to modify existing unmanned aircraft, sensor and communications payloads, and ground stations to overcome interoperability problems. In addition to communications interoperability problems, payload interoperability (commonly referred to as “payload commonality”) problems also exist. DOD has developed at least six different sensor payloads, each able to collect different types of information. These sensor payloads are attached to an unmanned aircraft and flown over operational areas to observe activity of interest on the ground in a target area and to transmit observations to ground or air forces or other users as tactical intelligence. As an example, figure 1 displays a Predator unmanned aircraft with a sensor payload attached underneath. However, many sensor payloads can be attached to only one type of unmanned aircraft because DOD has not adopted a payload commonality standard, even though this problem was identified nearly 20 years ago. As a result, commanders may have to delay missions if the appropriate sensor is available but no unmanned aircraft is able to carry it. We discussed this problem in 1988, when we reported that DOD had not adequately emphasized payload commonality for unmanned aircraft, and noted that Congress had stressed the need for DOD to consider payload commonality in 1985. The 2002 Roadmap acknowledged the need for sensor payload commonality where practical, but limited progress has been made. In addition to the flexibility inherent in the communications standards, U.S. Central Command, based on its experience in Persian Gulf operations, reports that unmanned aircraft development has been service-centric and lacks an overarching employment doctrine to shape development toward interoperable aircraft and sensor communications and payload commonality. Furthermore, a Joint Forces Command official told us that combatant commanders cannot take full advantage of the dissimilar unmanned aircraft or the sensor payload data produced because of the interoperability problems. Unmanned aircraft and their sensor, armaments, and communications payloads depend on reliable access to the electromagnetic spectrum. However, the spectrum is increasingly constrained, potentially undermining joint operations by requiring delays in an unmanned aircraft flight or, if the problem worsens, cancellation.
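Before turning to the details, a toy sketch may help show the dynamic at work: systems fixed to a single congested band must wait, while a system with a reprogrammable data link can be shifted to a less crowded band. The band names anticipate the discussion that follows; the capacity figures and assignment logic are invented for illustration and do not reflect actual spectrum management procedures.

```python
# Toy band-assignment sketch. Capacities are invented: assume the C-band can
# support 2 simultaneous users and the Ku-band 4. Fixed-band systems that
# find their band full are delayed; reprogrammable systems can be moved.

BAND_CAPACITY = {"C": 2, "Ku": 4}

def assign(requests: list[dict]) -> dict[str, str]:
    load = {band: 0 for band in BAND_CAPACITY}
    outcome = {}
    for req in requests:
        band = req["band"]
        if load[band] >= BAND_CAPACITY[band] and req["reprogrammable"]:
            band = min(BAND_CAPACITY, key=lambda b: load[b])  # shift bands
        if load[band] >= BAND_CAPACITY[band]:
            outcome[req["name"]] = "delayed"  # fixed to a full band
        else:
            load[band] += 1
            outcome[req["name"]] = band
    return outcome
```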
Unmanned aircraft operators use the electromagnetic spectrum to maintain contact with the aircraft to control its flight, fire its weapons if armed, and receive information collected by the sensor payloads. Certain spectrum frequencies are sometimes referred to as bands, and the amount of the spectrum needed to permit transmission of information is referred to as bandwidth. DOD officials told us that more bandwidth is needed to transmit video and other information obtained by sensor payloads than to maintain flight control of the aircraft. Numerous weapons also use the electromagnetic spectrum and share it with UAS, but these systems can interfere with each other during operations if they operate on the same frequency at the same time. The military services have experienced bandwidth capacity constraints, limiting both the number of UAS and other systems that can be effectively operated simultaneously and the amount of available data that can be transmitted from the unmanned aircraft communications payload. For example, insufficient bandwidth limits U.S. forces’ ability to download video and radar images via satellite from more than one aircraft at a time. As a result, data transmission and relay are delayed, undermining U.S. forces’ ability to engage time-critical targets and possibly permitting the targets to escape, unless alternative information sources are available on a timely basis. Army officials informed us that data link limitations are due primarily to frequency congestion. Table 1 displays the bands used by 12 different unmanned aircraft or models of unmanned aircraft for flight control and sensor payload data transmission. As shown in the table, several UAS rely on the C-band for their data transmission capability, and only 2 of the 12 UAS can be reprogrammed to another band. The 2002 Roadmap established a goal of modifying the Army’s Shadow unmanned aircraft to permit it to operate a common tactical data link in Ku-band rather than the more congested C-band. This goal had not been met at the time of our review, and the Shadow unmanned aircraft still operated in C-band. Similarly, the 2002 Roadmap established a goal of moving the Air Force’s Predator unmanned aircraft video sensor payload from C-band to Ku-band for line of sight operations. However, this goal had not been met at the time of our report. Moreover, the problem cannot be easily overcome without potentially costly modifications to existing systems because DOD has not required unmanned aircraft or sensor payloads to be reprogrammable from one band to another and therefore has not established such standards. As a result, most have been designed and built without the flexibility to operate in differing frequencies or bands, sometimes preventing timely information transmission or requiring flight delays so that they neither interfere with nor experience interference from other UAS or other weapons systems. Unmanned aircraft are more likely to be grounded by inclement weather than manned aircraft, due in part to their lighter weight. Dust storms, strong winds, rain, or icing prevent some unmanned aircraft from flying, thus denying U.S. forces critically needed information unless alternative data collection or offensive strike capabilities are available. Specifically, winds up to 80 miles per hour in Iraq and Afghanistan have reduced the availability of most unmanned aircraft, and dust storms have undermined the use of some sensor payloads.
Moreover, the 2002 Roadmap indicates that icing has been a primary factor in two accidents involving the Hunter unmanned aircraft and three crashes of the Predator unmanned aircraft. The Roadmap established a goal to incorporate all-weather capabilities into future UAS. However, little progress has been made because DOD has not adopted standards requiring all-weather capability to be considered in development, despite the Roadmap’s stating the goal. As a result, systems have been developed without it. At the same time, according to a UAS Planning Task Force official, developing unmanned aircraft with all-weather capabilities may result in some degradation in performance, such as a reduced flying range. At the time of our review, DOD had not determined whether all-weather capability was worth the trade-off of potentially degraded performance. While DOD has acknowledged the need to improve UAS interoperability and address bandwidth and weather constraints that undermine unmanned aircraft operations, little progress has been made. On the one hand, to begin to address the problems, DOD has taken a number of steps as listed below: In August 2005, DOD issued an updated version of its roadmap, entitled 2005 Unmanned Aircraft Systems Roadmap, to guide acquisition and interoperability. Among other things, the 2005 Roadmap establishes the goal of enhancing joint service collaboration as a means to improve joint operations. At the time of our review, the Office of the Secretary of Defense was preparing an action plan to address a number of shortfalls, including interoperability and other problems, within U.S. Central Command’s area of responsibility, although the plan was limited to just this command and would not necessarily solve the problems that UAS might encounter elsewhere. DOD plans to reemphasize the role that the Joint Capabilities Integration and Development System could play in all new UAS developments by trying to ensure that DOD develops systems to support joint operations, achieves commonality to the extent practical, and identifies gaps in DOD’s ability to carry out its warfighting missions. U.S. Joint Forces Command has developed certain initiatives to improve UAS interoperability by conducting experiments to demonstrate aircraft modifications and new concepts of operations, although such modifications can be costly. In addition, on June 1, 2005, DOD’s Joint Requirements Oversight Council established a new Joint Unmanned Aerial Vehicle Center of Excellence and a Joint Unmanned Aerial Vehicle Overarching Integrated Process Team. The Joint Unmanned Aerial Vehicle Overarching Integrated Process Team has subsequently been renamed the Joint Unmanned Aircraft Systems Material Review Board. These joint forums will help the services manage development of new UAS or modifications to existing UAS, and they will help the services develop new or revised concepts of operations for more effective use. At the same time, the UAS Planning Task Force will try to ensure that the services’ UAS acquisition programs are coordinated, and a Task Force representative is to be a member of the Joint Overarching Integrated Process Team. DOD views these changes as a means to more effectively manage service UAS programs. While these changes appear to be steps in the right direction, it is too early for us to tell if they will solve the interoperability and other problems that we identified.
Furthermore, payload commonality, interoperability of communications and data transmission systems, and inclement weather flying capabilities that we identified as affecting recent operations had been identified previously as problems already occurring or likely to occur. First, our 1988 unmanned aerial vehicle report indicated that DOD had not adequately emphasized payload commonality for these aircraft. Second, our 2001 report found interoperability problems due to the services’ continued practice of acquiring systems to support their own operations but not necessarily those of the other services. Third, DOD’s guidance requires interoperability, but the detailed standards have not been developed. Lastly, the 2002 edition of the Roadmap identified the need to improve interoperability of communications systems for UAS, identified inclement weather capability as a problem undermining UAS operations, and established goals to address these issues. Despite all this emphasis, problems related to communications and payload interoperability and to all-weather capability remain. DOD acknowledges that it (1) did not foresee the rapid technological development experienced with unmanned aircraft, sensor or communications payloads, and ground stations; (2) has provided unmanned aircraft and payloads rapidly to deployed forces to meet forces’ demands for them; and (3) has not always adopted new standards or enforced existing ones that might have prevented or mitigated some of these problems. As a result, while DOD has issued a directive, instructions, guidance, and roadmaps and established at least five different organizations to promote UAS interoperability and address other unmanned aircraft and payload developmental needs, no organization has had or has exercised sufficient authority to enforce program direction or to ensure compliance with the standards and guidance. Consequently, the services have continued to develop and field these systems without fully complying with the interoperability requirements stated in key guidance or addressing known payload commonality problems. DOD’s approach to evaluating joint UAS performance on operational deployments is unsound because the department has not implemented a systematic approach to evaluating such performance. Instead, DOD has relied on systems for evaluating performance that are not focused on joint operations and are nonroutine, and as a result the department has little assurance that the information that has been collected represents the key performance indicators needed to assess performance on joint operations. DOD has not implemented a systematic approach to evaluating joint UAS performance on operational deployments. As we previously noted in our 2004 report, the Government Performance and Results Act’s strategic planning framework specifies that results-oriented performance measures should be developed and used to monitor progress toward agency goals. At the time of our report, DOD was only beginning to decide on the key indicators of performance that would be used to assess unmanned aircraft, payload, and ground station performance on joint operations. To date, DOD has relied on service-specific information that addressed certain aspects of UAS performance. For example, some forces filed after-action reports and maintenance reports addressing UAS performance.
While producing some useful information, these reports have not necessarily been specifically targeted to joint UAS operations, nor do they systematically identify key indicators for collection that could be used to develop joint operational performance baselines and permit performance measurement against those baselines. Thus, DOD has little assurance that the information that has been collected represents the key performance indicators needed to assess joint operations performance. DOD officials told us that they have tried to keep pace with operating forces’ demands for more unmanned aircraft and their payloads, and therefore the services have deployed them while still under development within the DOD acquisition system. These deployments have often occurred before identification of the key performance indicators that would need to be collected to evaluate performance. In effect, the services have bought and deployed unmanned aircraft, sensor and communications payloads, and ground stations and have tried to evaluate their effectiveness all at the same time. On the one hand, this has permitted DOD to provide operating forces with the new capabilities represented by the aircraft and their payloads. On the other hand, it has also meant that DOD and the services sometimes learn of joint performance problems only if after-action reports or other reporting from actual operations mention them. Nonetheless, without appropriate performance measures and baselines against which to assess performance on joint operations, even anecdotal information can have limited utility because officials are less likely to be able to assess the magnitude of a problem, or even become aware of it if no reports identify it. DOD has acknowledged the need to develop specific performance indicators for unmanned aircraft and their payloads on joint operations and had begun to develop them at the time of our report. First, the Army recently began an initiative to develop performance indicators and a baseline against which to assess performance. However, while this approach may produce useful information with which to assess the performance of Army-operated unmanned aircraft, payloads, and ground stations, it was not designed to address joint performance. The other three services had not started to develop specific performance indicators and baselines for unmanned aircraft at the time of our review. Second, in May 2005, DOD assigned U.S. Strategic Command responsibility for the development of joint performance indicators, but the effort was just getting started at the time of our review. Beyond anecdotal performance reporting, DOD has not established routine performance reporting mechanisms for UAS operations but instead has relied on short-duration study teams to gather relevant joint operational performance information. For example, in November 2004, DOD established a group known as a “Tiger Team” to identify opportunities for improving the joint operational effectiveness of UAS. However, this team was established on a temporary basis and had a limited mission to identify improvements only in the U.S. Central Command area of responsibility. The Tiger Team did identify a number of areas needing improvement. For example, it determined that forces in the region need Full Motion Video capability to provide images of actual events as they occur. The team also determined that a need exists to address electromagnetic spectrum limitations hampering UAS operations.
However, the team identified the electromagnetic spectrum problem only after the UAS had been deployed and U.S. forces had tried to use them on operational missions. Also in 2004, another DOD short-duration study team evaluated the operational performance of the Shadow unmanned aircraft. Lastly, the Army conducted a one-time comprehensive review of the effectiveness of its UAS in theaters of operations. While these teams developed useful performance information, this approach does not represent a systematic or long-term means of obtaining joint UAS performance information, since the teams are not permanently established and did not use consistent study parameters. Finally, even in the instances where some ongoing processes were used, the information obtained was relevant only on a service-specific, not a joint, basis. For example, the Marine Corps uses its Operational Advisory Group process to determine needed improvements in its UAS operations. While this group has developed useful information that may assist the Marine Corps in enhancing its ability to use UAS effectively in operations, the information is likely to have limited utility for joint operations. DOD acknowledges that the speed with which unmanned aircraft, payloads, communications, and associated technology are being developed, along with the imperative to provide emerging technologies quickly to operating forces, has resulted in the deployment of developmental systems before adequate performance reporting systems have been established. Consequently, while the systems are being used successfully in overseas operations, DOD does not have reasonable assurance that it is well informed about opportunities to further enhance the ability of operational forces to take advantage of UAS capabilities. DOD has achieved certain operational successes with UAS, but challenges have also emerged that have hampered joint operations or prevented effective employment of UAS. These challenges stem from the limited attention paid to interoperability standards for UAS and the lack of detailed interoperability standards. Developing and implementing appropriate interoperability, payload commonality, and other standards helps ensure that such problems are addressed during development and fixed prior to deployment. Moreover, until DOD assesses the extent to which the lack of detailed standards undermines the purpose of the broad standards by allowing development of noninteroperable systems, and until it enforces common standards among the services, problems are likely to continue and may be repeated and made more widespread as new unmanned aircraft, sensor and communications payloads, ground stations, and related equipment are developed and fielded. In addition, costly modifications might be needed later. The unsoundness of DOD's approach to assessing joint UAS performance on operational deployments was due to the lack of accepted performance indicators and of a routine system for collecting performance information. Until DOD develops specific indicators of UAS joint operational performance, establishes appropriate baselines against which to measure performance, and communicates which indicators operating forces should systematically collect and report to appropriate users, DOD will lack reasonable assurance that it is adequately informed about UAS performance on joint operations.
Moreover, DOD may also be poorly informed as to its progress in addressing interoperability and other problems and may therefore be less likely to avoid the same problems in future UAS development and fielding. Lastly, in our 2004 report, we recommended that DOD establish a strategic plan and an office with sufficient authority to enforce program direction to avoid interoperability problems and for other purposes. In nonconcurring with our recommendation to assign an office with sufficient authority to enforce program direction, DOD indicated that the UAS Planning Task Force and the Joint Capabilities Integration and Development System had sufficient authority and would address interoperability, payload commonality, and other problems. However, these problems persist. Consequently, we continue to believe that sustained management attention is warranted; without it, DOD continues to risk undercutting the benefit of its continued investment in UAS. We also continue to believe that our prior recommendation has merit, but we are not reiterating it because DOD indicated that it will not implement it. To address the challenges emerging in joint operations, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics), the Chairman of the Joint Chiefs of Staff, the service secretaries, and other appropriate organizations to work together to take the following four actions:

- develop or adjust communications interoperability standards and electromagnetic frequency reprogramming capability standards and ensure that they are applied to new or modified unmanned aircraft, sensor and communications payloads, ground stations, and related equipment;
- develop sensor and other payload commonality standards where practical and enforce such standards when modifying existing unmanned aircraft or payloads and developing new ones;
- develop appropriately detailed UAS interoperability standards; and
- determine whether unmanned aircraft need all-weather flying capabilities, identify any performance degradation associated with all-weather flying capabilities, and obtain all-weather capabilities where appropriate.

To improve joint operational performance reporting, we recommend that the Secretary of Defense direct the Commander of the U.S. Strategic Command to ensure that the performance measurement system being developed by the command, at a minimum:

- measures how effectively UAS perform their missions by identifying quantifiable goals and comparing results with desired outcomes;
- identifies the specific performance indicator information that needs to be collected to adequately assess joint performance;
- develops indicators that assess communications and payload interoperability and the extent to which electromagnetic spectrum congestion is undermining joint operations;
- establishes baselines and applies the identified indicators against those baselines to gauge success in joint UAS performance; and
- develops a way to systematically collect identified performance information and routinely report it to organizations that develop and field UAS.

DOD provided written comments on a draft of this report. These comments are reprinted in their entirety in appendix II. We made five recommendations, and DOD fully or partially concurred with them. It also provided technical comments, which we incorporated into the report as appropriate.
First, DOD concurred with our recommendation for the appropriate DOD organizations to work together to develop or adjust communications interoperability standards and electromagnetic frequency reprogramming capability standards and ensure that they are applied to new or modified unmanned aircraft, sensor and communications payloads, ground stations, and related equipment. In concurring, DOD indicated that it recognized the utility of communications interoperability and the need to improve this capability and that it will direct the services to use common frequencies and data links to enhance communications interoperability. Second, in partially concurring with our recommendation to develop and enforce sensor and other payload commonality standards where practical, DOD commented that it does not typically focus on payload interchangeability. Instead, DOD pointed out that unmanned aircraft payload procurement is a service responsibility and depends on service mission requirements, unmanned aircraft physical design limitations, and rapid technological evolution. Our report recognizes that it is not practical for all unmanned aircraft sensors and payloads to be common, given the varying sizes of the aircraft, and we worded our recommendation accordingly. Third, DOD fully concurred with our recommendation that the appropriate DOD organizations work together to develop appropriately detailed UAS interoperability standards. DOD indicated that the UAS Roadmap 2005-2030, released in August 2005, discusses the preferred framework, methodology, and standards for achieving UAS interoperability. DOD outlined a number of actions that it has taken to address UAS interoperability standards, including ratifying a North Atlantic Treaty Organization standardization agreement aimed at achieving joint and combined interoperability. The Joint Chiefs of Staff has tasked the newly formed Joint UAS Material Review Board and Joint UAV Center of Excellence to provide recommendations for continuing to improve UAS interoperability. Fourth, DOD fully concurred with our recommendation to determine whether unmanned aircraft need all-weather flying capabilities, identify any performance degradation associated with all-weather capabilities, and obtain all-weather capabilities where appropriate. DOD commented that combatant commanders should expect UAS to support operations in diverse weather conditions. Further, DOD indicated that as UAS capabilities improve, the range of weather conditions in which these systems will be expected to operate will also expand. However, DOD also pointed out that it is not cost-effective to expect all classes of unmanned aircraft to have an all-weather capability. We agree. The intention of our recommendation is for DOD to determine those UAS for which all-weather capabilities are cost-effective and to add such capabilities where appropriate. Finally, DOD partially concurred with our recommendation that U.S. Strategic Command ensure that the performance measurement system being developed at a minimum includes quantifiable goals, performance baselines, systematic collection procedures, measures of communications and payload interoperability, and performance indicators against which to measure performance. DOD indicated that the U.S. Strategic Command has drafted a Joint Functional Component Concept of Operations that includes metrics to gauge the force's ability to meet intelligence, surveillance, and reconnaissance requirements.
Moreover, DOD stated that, in conjunction with the services, the intelligence community, combatant commanders, and other DOD organizations, this action would not only facilitate the evaluation of UAS performance but also give DOD the information needed to assess such factors as UAS requirements, mission accomplishment, UAS capabilities, and customer satisfaction. DOD also pointed out that the performance measures are in development and will require service participation to define the specific data and methodology that will result in useful information. While we acknowledge that these actions should address many of the data elements that we believe are necessary to evaluate UAS, we continue to believe that effective communications, interoperability, and avoidance of frequency congestion are important contributors to the success of joint operations. Therefore, we continue to believe that DOD should ensure that, at a minimum, the U.S. Strategic Command includes the data elements we recommended in its performance measurement system. In addition, we agree that other organizations, including the services, should participate in the development of this measurement system as appropriate. We are sending copies of this report to other appropriate congressional committees; the Secretary of Defense; the secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Chairman of the Joint Chiefs of Staff; and the Director, Office of Management and Budget. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-9619 or e-mail me at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO contact and key contributors are listed in appendix III. To evaluate the operational performance of unmanned aircraft systems (UAS) in recent operations, we examined Department of Defense (DOD) regulations, directives, and instructions as well as service guidance and documentation on UAS. We met with key DOD and service officials, including those from the UAS Planning Task Force and UAS program managers, to discuss the current status of and future plans for these systems. We reviewed the Unmanned Aerial Vehicles Roadmap 2002-2027, which establishes an overall DOD management framework for developing and employing UAS DOD-wide, as well as its update, the 2005 Unmanned Aircraft Systems Roadmap. During discussions and visits with DOD and service officials, we obtained and reviewed DOD and service analyses, briefings, and summary reports describing each of the UAS used to support recent combat and combat support operations. This included obtaining detailed information on current and future UAS operational capabilities. Additionally, we obtained information on the numbers and types of missions performed by UAS, as well as the methods used by the services to evaluate UAS performance in accomplishing those missions. To assess the reliability of the mission data provided to us by DOD, we (1) interviewed knowledgeable officials about the processes for collecting and maintaining the data and (2) reviewed the data for completeness and reasonableness by comparing them to other sources of information. We determined that the data were sufficiently reliable for the purposes of this review.
DOD and service officials also provided specific examples of operational successes and emerging challenges. We discussed actions taken and processes used by DOD and service officials and the Joint Capabilities Integration and Development System to address identified challenges. We also held discussions with Joint Staff officials about their efforts to address joint UAS issues via the Tiger Team. The specific military activities that we visited, obtained written responses to questions from, or both, include the following:

- Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) and its Joint UAS Planning Task Force, Washington, D.C.;
- Headquarters, Department of the Army, Washington, D.C.;
- U.S. Army Redstone Arsenal, Huntsville, Alabama;
- U.S. Marine Corps Systems Command, Quantico, Virginia;
- U.S. Navy Naval Sea Systems Command, Naval Air Station Patuxent River, Maryland;
- U.S. Air Force Air Combat Command Directorate of Requirements, Langley Air Force Base, Virginia;
- U.S. Air Force Materiel Command, Wright-Patterson Air Force Base, Dayton, Ohio;
- U.S. Joint Forces Command, Norfolk, Virginia;
- U.S. Central Command, MacDill Air Force Base, Tampa, Florida;
- U.S. Special Operations Command, MacDill Air Force Base, Tampa, Florida;
- U.S. Joint Staff, Washington, D.C.; and
- U.S. Strategic Command, Omaha, Nebraska.

We also obtained documents describing the mission and planned operations of the new Joint Unmanned Aerial Vehicle Center of Excellence and the Joint Unmanned Aerial Vehicle Overarching Integrated Process Team. To assess the soundness of DOD's approach to evaluating UAS operational performance, we interviewed DOD and service officials to discuss the criteria and processes used to assess performance. We also obtained and reviewed DOD and Army UAS operations assessments to identify issues and concerns regarding performance. Additionally, we held discussions with U.S. Strategic Command officials to obtain information on the status of their efforts to establish measures for assessing joint UAS performance. We also held discussions with service officials to determine the extent to which they are required to capture information on the use and performance of UAS in their existing lessons-learned systems. Finally, we obtained and reviewed DOD and service-specific UAS or unmanned aerial vehicle roadmaps. We performed our work from July 2004 to October 2005 in accordance with generally accepted government auditing standards. In addition to the person named above, Brian J. Lepore, Assistant Director; Harry E. Taylor, Jr.; Patricia F. Albritton; Jeanett H. Reid; Elisha T. Matvay; Robert B. Brown; Cheryl A. Weissman; Ron La Due Lake; and Kenneth E. Patton made major contributions to this report.
Unmanned Aerial Vehicles: Improved Strategic and Acquisition Planning Can Help Address Emerging Challenges. GAO-05-395T. Washington, D.C.: March 9, 2005.
Unmanned Aerial Vehicles: Changes in Global Hawk's Acquisition Strategy Are Needed to Reduce Program Risks. GAO-05-6. Washington, D.C.: November 5, 2004.
Unmanned Aerial Vehicles: Major Management Issues Facing DOD's Development and Fielding Efforts. GAO-04-530T. Washington, D.C.: March 17, 2004.
Force Structure: Improved Strategic Planning Can Enhance DOD's Unmanned Aerial Vehicles Efforts. GAO-04-342. Washington, D.C.: March 17, 2004.
Nonproliferation: Improvements Needed for Controls on Exports of Cruise Missile and Unmanned Aerial Vehicles. GAO-04-493T. Washington, D.C.: March 9, 2004.
Nonproliferation: Improvements Needed to Better Control Technology Exports for Cruise Missiles and Unmanned Aerial Vehicles. GAO-04-175. Washington, D.C.: January 23, 2004.
Defense Acquisitions: Matching Resources with Requirements Is Key to the Unmanned Combat Air Vehicle Program's Success. GAO-03-598. Washington, D.C.: June 30, 2003.
Unmanned Aerial Vehicles: Questionable Basis for Revisions to Shadow 200 Acquisition Strategy. GAO/NSIAD-00-204. Washington, D.C.: September 26, 2000.
Unmanned Aerial Vehicles: Progress of the Global Hawk Advanced Concept Technology Demonstration. GAO/NSIAD-00-78. Washington, D.C.: April 25, 2000.
Unmanned Aerial Vehicles: DOD's Demonstration Approach Has Improved Project Outcomes. GAO/NSIAD-99-33. Washington, D.C.: August 30, 1999.
Unmanned Aerial Vehicles: Progress toward Meeting High Altitude Endurance Aircraft Price Goals. GAO/NSIAD-99-29. Washington, D.C.: December 15, 1998.
Unmanned Aerial Vehicles: Outrider Demonstrations Will Be Inadequate to Justify Further Production. GAO/NSIAD-97-153. Washington, D.C.: September 23, 1997.
Unmanned Aerial Vehicles: DOD's Acquisition Efforts. GAO/T-NSIAD-97-138. Washington, D.C.: April 9, 1997.
Unmanned Aerial Vehicles: Hunter System Is Not Appropriate for Navy Fleet Use. GAO/NSIAD-96-2. Washington, D.C.: December 1, 1995.
Unmanned Aerial Vehicles: Performance of Short-Range System Still in Question. GAO/NSIAD-94-65. Washington, D.C.: December 15, 1993.
Unmanned Aerial Vehicles: More Testing Needed Before Production of Short-Range System. GAO/NSIAD-92-311. Washington, D.C.: September 4, 1992.
Unmanned Aerial Vehicles: Medium Range System Components Do Not Fit. GAO/NSIAD-91-2. Washington, D.C.: March 25, 1991.
Unmanned Aerial Vehicles: Realistic Testing Needed Before Production of Short-Range System. GAO/NSIAD-90-234. Washington, D.C.: September 28, 1990.
Unmanned Vehicles: Assessment of DOD's Unmanned Aerial Vehicle Master Plan. GAO/NSIAD-89-41BR. Washington, D.C.: December 9, 1988.
Unmanned aircraft systems (UAS) consist of an unmanned aircraft; the sensors, communications equipment, or weapons carried on board the aircraft, collectively referred to as payloads; and ground controls. UAS have been used successfully in recent operations and are in increasingly high demand by U.S. forces. To meet the demand, the Department of Defense (DOD) is increasing its investment in and reliance on UAS, often deploying them while they are still in development. GAO has previously found that DOD's approach to developing and fielding UAS risked interoperability problems that could undermine joint operations. GAO was asked to review (1) UAS performance in recent joint operations and (2) the soundness of DOD's approach to evaluating joint UAS operational performance. DOD has achieved certain operational successes using UAS, including identifying time-critical targets in Iraq and Afghanistan and striking enemy positions to defeat opposing forces. Some missions effectively supported joint operations; in other cases, the missions were service-specific. DOD has encountered challenges that have at times hampered joint operations. First, some UAS cannot easily transmit and receive data with other communication systems because the systems are not interoperable. Although DOD guidance requires interoperability, detailed interoperability standards have not been developed; DOD has relied on existing, more general standards, and the services have developed differing systems. For now, U.S. forces have developed technical patches that permit transmission but slow data flow, potentially hampering time-critical targeting. Second, some sensor payloads cannot be used interchangeably on different UAS because DOD has not adopted a payload commonality standard. Some UAS missions may have to be delayed if compatible unmanned aircraft and payloads are not available. Based on its experience with UAS in Persian Gulf operations, U.S. Central Command believes communications interoperability and payload commonality problems occur because the services' UAS development programs have been service-specific and insufficiently attentive to joint needs. Lastly, the electromagnetic spectrum needed to control the flight of certain unmanned aircraft and to transmit data is constrained, and no standard requiring the capability to change frequencies had been adopted because the problem was not foreseen. Thus, some systems cannot change frequencies to avoid congestion, and consequently some missions have been delayed, potentially undermining time-critical targeting. In addition to the joint operational challenges, inclement weather can also hamper UAS operations. Unmanned aircraft are more likely to be grounded in inclement weather than manned aircraft, and DOD had not decided whether to require all-weather capability. While DOD has acknowledged the need to improve UAS interoperability and address bandwidth and weather constraints, little progress has been made. Until DOD adopts and enforces interoperability and other standards, these challenges will likely remain and become more widespread as new UAS are developed and fielded. DOD's approach to evaluating UAS joint operational performance has been unsound because it was not systematic or routine. DOD has deployed UAS before developing a joint operations performance measurement system, even though results-oriented performance measures can be used to monitor progress toward agency goals.
DOD has generally relied on after-action and maintenance reports, which contain useful information that is not necessarily specific to joint performance. DOD has also relied on short-duration study teams for some performance information but has not established ongoing or routine reporting systems. Thus, while continuing to invest in UAS, DOD has incomplete performance information on joint operations on which to base acquisition or modification decisions. In May 2005, U.S. Strategic Command began developing joint performance measures.
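To make concrete what a baseline-driven joint performance measurement system of the kind discussed above might involve, the following sketch compares hypothetical joint-operations indicators against baseline targets and flags both shortfalls and uncollected indicators. It is a minimal illustration of the concept only; the indicator names, baseline values, and observed values are our assumptions, not data from DOD, U.S. Strategic Command, or this review.

```python
# Minimal sketch of baseline-driven performance assessment for joint UAS
# operations. All indicators, baselines, and observed values are hypothetical.

# Baseline targets an evaluator might set in advance for joint operations.
BASELINES = {
    "mission_completion_rate": 0.90,      # higher is better
    "interoperable_data_handoffs": 0.95,  # fraction of handoffs needing no technical patch
    "spectrum_delay_rate": 0.05,          # fraction of sorties delayed by frequency congestion
}

# Indicators for which a lower observed value is better.
LOWER_IS_BETTER = {"spectrum_delay_rate"}


def assess(observed):
    """Compare observed indicator values against baselines and flag gaps."""
    results = {}
    for indicator, baseline in BASELINES.items():
        value = observed.get(indicator)
        if value is None:
            # An indicator that was never collected is itself a finding.
            results[indicator] = "not collected"
        elif indicator in LOWER_IS_BETTER:
            results[indicator] = "meets baseline" if value <= baseline else "shortfall"
        else:
            results[indicator] = "meets baseline" if value >= baseline else "shortfall"
    return results


if __name__ == "__main__":
    # Hypothetical reporting from one deployment; one indicator is missing.
    observed = {"mission_completion_rate": 0.87, "spectrum_delay_rate": 0.08}
    for indicator, status in sorted(assess(observed).items()):
        print(f"{indicator}: {status}")
```

The point of the sketch is only that, once indicators and baselines are fixed in advance and collection is routine, gaps such as uncollected indicators surface systematically rather than anecdotally through after-action reports.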
The Coast Guard's responsibilities are divided into 11 programs that fall under two broad missions—homeland security and non-homeland security—which are recognized in the Homeland Security Act. To accomplish its wide range of responsibilities, the Coast Guard is organized into two major commands that are responsible for overall mission execution—one in the Pacific area and the other in the Atlantic area. These commands are divided into nine districts, which in turn are organized into 35 sectors that unify command and control of field units and resources, such as multimission stations and patrol boats. In fiscal year 2005, the Coast Guard had over 46,000 full-time positions—about 39,000 military and 7,000 civilian. In addition, the agency had about 8,100 reservists who support the national military strategy or provide additional operational support and surge capacity during times of emergency, such as natural disasters. The Coast Guard also had about 31,000 volunteer auxiliary personnel who helped with a wide array of activities, ranging from search and rescue to boating safety education. For each of its six non-homeland security programs, the Coast Guard has developed a primary performance measure to communicate agency performance and provide information for the budgeting process to Congress, other policymakers, and taxpayers. The Coast Guard has also developed 39 secondary measures that it uses to manage these six programs. The Coast Guard selected and developed the six primary measures based on a number of criteria, including GPRA, DHS, and OMB guidance as well as legislative, department, and agency priorities. When viewed as a suite of measures, the primary and secondary measures are intended to give Coast Guard officials a more comprehensive view of program performance than the primary measure alone. Some of these secondary measures are closely related to the primary measures; for example, a secondary measure for the marine environmental protection program, "annual number of oil spills greater than 100 gallons and chemical discharges per 100 million tons shipped," is closely related to the program's primary measure, "5-year average annual number of oil spills greater than 100 gallons and chemical discharges per 100 million tons shipped." However, other secondary measures reflect activities and priorities that are not captured in the primary performance measures. For example, a secondary measure in the search and rescue program, "percent of property saved," reflects activities not captured in the program's primary measure, "percent of mariners in imminent danger who are rescued." In 2004, we compared trends in performance results, as reported by the Coast Guard's primary performance measures, with the agency's use of resources and found that the relationship between results achieved and resources used was not always what might be expected—that is, resources expended and performance results achieved did not consistently move in the same direction and sometimes moved in opposite directions. We reported that such disconnects between resources expended and performance results achieved have important implications for resource management and accountability, especially given the Coast Guard's limited ability to explain them.
In particular, these disconnects prompted the question of why, despite substantial changes in a number of programs' resource hours over the period we examined, the corresponding performance results were not affected in a similar manner—that is, they did not rise or fall along with changes in resources. At that time, the Coast Guard could not say with any assurance why this occurred. For example, while resource hours for the search and rescue program dropped by 22 percent in fiscal year 2003 compared with the program's pre-September 11, 2001, baseline, the program's performance results remained stable over the same period. These results suggest that performance was likely affected by factors other than resource hours. One set of factors cited by the Coast Guard as helping to keep performance steady despite resource decreases involved strategies such as the use of new technology, better operational tactics, improved intelligence, and stronger partnering efforts. Coast Guard officials also pointed to another set of factors, largely beyond the agency's control (such as severe weather conditions), to explain performance results that did not improve despite resource increases. At the time of our 2004 report, the Coast Guard did not have a systematic approach for effectively linking resources to results. However, the Coast Guard had begun some initiatives to better track resource usage and manage program results, though many of these initiatives were still in early stages of development and some did not have a time frame for completion. Like other federal agencies, DHS is subject to the performance-reporting requirements of GPRA. GPRA requires agencies to publish a performance report that includes performance measures and results. These reports are intended to provide important information to agency managers, policymakers, and the public on what each agency accomplished with the resources it was given. The three key annual publications that DHS and the Coast Guard use to report the Coast Guard's non-homeland security primary performance measures are the DHS Performance and Accountability Report, the DHS fiscal year budget request, and the Coast Guard's fiscal year Budget-in-Brief. The DHS Performance and Accountability Report provides financial and performance information to the President, Congress, and the public for assessing the effectiveness of the department's mission performance and stewardship of resources. The DHS annual budget request to Congress identifies the resources needed to meet the department's missions. The Coast Guard's annual Budget-in-Brief reports performance information for assessing the effectiveness of the agency's performance, as well as a summary of the agency's most recent budget request. These documents report the primary performance measures for each of the Coast Guard's non-homeland security programs, along with descriptions of the measures and explanations of performance results. While these documents report performance results from some secondary measures, DHS and the Coast Guard do not report most of the Coast Guard's secondary measures in them. GPRA also requires agencies to establish goals and targets that define the level of performance to be achieved by a program and to express such goals in an objective, quantifiable, and measurable form. In passing GPRA, Congress emphasized that the usefulness of agency performance information depends to a large degree on the reliability of performance data.
To be useful in reporting to Congress on the fulfillment of GPRA requirements and in improving program results, the data must be reliable—that is, they must be seen by potential users as being of sufficient quality to be trustworthy. While no data are perfect, agencies need sufficiently reliable performance data to provide transparency of government operations so that Congress, program managers, and other decision makers can use the information. In establishing a system for setting federal program performance goals and measuring results, GPRA requires that agencies describe the means to be used to validate and verify measured values, so as to improve congressional decision making by providing objective, complete, accurate, and consistent information on the achievement of statutory objectives and on the relative effectiveness and efficiency of federal programs and spending. In addition, to improve the quality of agency performance management information, the Reports Consolidation Act of 2000 requires an assessment of the reliability of performance data used in the agency's program performance report. OMB's Program Assessment Rating Tool (PART) is designed to strengthen and reinforce performance measurement under GPRA by encouraging careful development of outcome-oriented performance measures. Between 2002 and 2005, OMB reviewed each of the Coast Guard's six non-homeland security programs. OMB found that four programs—ice operations, living marine resources, marine environmental protection, and marine safety—were performing adequately or better, and that two programs—aids to navigation and search and rescue—did not demonstrate results. OMB recommended that for the aids to navigation program, the Coast Guard develop and implement a better primary performance measure that allows program managers to understand how their actions produce results. Specifically, OMB recommended using an outcome-based measure, the number of collisions, allisions, and groundings, instead of the measure that was being used—aid availability. For the search and rescue program, OMB recommended that the Coast Guard develop achievable long-term goals. Since these reviews, the Coast Guard has implemented a new primary performance measure for the aids to navigation program, "5-year average annual number of distinct collisions, allisions, and groundings," and developed new long-term goals for the search and rescue program's primary performance measure, that is, rescuing between 85 and 88 percent of mariners in imminent danger each year from fiscal year 2002 through 2010. While the six non-homeland security primary performance measures are generally sound, and the data used to calculate these measures are generally reliable, we found weaknesses in the soundness of three measures and in the reliability of the data used in one measure (see table 2). All six measures cover key program activities and are objective, measurable, and quantifiable, but three are not completely clear—that is, they do not consistently provide clear and specific descriptions of the data, events, or geographic areas they include. The Coast Guard's processes for entering and reviewing its own internal data are likely to produce reliable data. However, processes for reviewing or verifying data gathered from external sources vary from source to source, and for the marine environmental protection measure, the processes are insufficient.
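As an aside for readers unfamiliar with how the rate-based primary measures discussed in this report are constructed, the marine environmental protection measure can be read as a trailing 5-year average of an annual normalized rate. The following formula is our illustrative reconstruction from the measure's published description, not an official Coast Guard specification:

$$\text{Measure}_y = \frac{1}{5}\sum_{t=y-4}^{y}\frac{S_t + C_t}{T_t/10^8}$$

where, for year $t$, $S_t$ is the number of oil spills greater than 100 gallons, $C_t$ is the number of chemical discharges, and $T_t$ is the total tons shipped. Averaging over five years dampens year-to-year volatility, which is why a single unusual year shifts the reported value only gradually.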
While the six primary performance measures are generally sound—in that they cover key activities of the programs and are objective, measurable, and quantifiable—three of the measures are not completely clear. The primary performance measures for the ice operations, living marine resources, and search and rescue programs do not consistently provide clear and specific descriptions of the data, events, or geographic areas they include. These weaknesses could lead to decisions or judgments based on inaccurate, incomplete, or misreported data. The three programs with primary measures that are not completely clear are as follows: Ice operations. Further clarity and consistency in reporting the geographic areas included in the ice operations primary performance measure, "domestic ice breaking—annual number of waterway closure days," would give users additional context for discerning the full scope of the measure. Despite its broad title, the measure does not reflect the annual number of closure days for all waterways across the United States; rather, it reflects only the annual number of closure days in the Great Lakes region, although the Coast Guard breaks ice in many East Coast ports and waterways. According to Coast Guard officials, the measure focuses on the Great Lakes region because it is a large commerce hub where the icebreaking season tends to be longer and where ice has a greater impact on maritime transportation. While this limitation is noted in accompanying text in some documents, the description of the limitation is inconsistent across department and agency publications. The DHS fiscal year 2005 Performance and Accountability Report notes that the measure is made up of nine critical waterways within the region, but the DHS fiscal year 2007 budget request reports that it consists of seven critical waterways, while the Coast Guard's fiscal year 2007 Budget-in-Brief does not mention the number of waterways included in the measure. In addition, Coast Guard program officials said that the measure reflects closures in only one critical waterway—the St. Mary's River. Coast Guard program officials at agency headquarters said that they are in the early stages of developing a new primary performance measure that will incorporate domestic icebreaking activities in areas beyond the Great Lakes. Until a better measure is developed, however, the description of the current measure can confuse users and might cause them to think performance was better or worse than it actually was. Search and rescue. While the primary performance measure for the search and rescue program, "percent of mariners in imminent danger who are rescued," reflects the program's priority of saving lives, it excludes those incidents in which 11 or more lives were saved or lost. According to Coast Guard officials, an agency analysis in fiscal year 2005 showed that 98 percent of search and rescue cases involved 10 or fewer people who were saved or lost. Coast Guard officials added that large cases involving 11 or more people are data anomalies, and that by excluding these cases the agency is better able to assess the program's performance on a year-to-year basis. While we understand the Coast Guard's desire to assess program performance on a year-to-year basis without skewing the data, in some instances this type of exclusion may represent a significant level of activity that is not factored into the measure. For example, during Hurricane Katrina, the Coast Guard rescued more than 33,500 people.
While including such large incidents in the performance measure would skew annual performance results, it is important for the Coast Guard to acknowledge these incidents, either through a footnote or accompanying text in department and agency publications. Failing to define the measure clearly and to acknowledge such incidents may cause internal managers and external stakeholders to think performance was better or worse than it actually was. Living marine resources. Similar to the ice operations primary measure, the living marine resources primary performance measure, "percent of fishermen in compliance with regulations," is not consistently and clearly defined in all department and agency publications. Like other law enforcement agencies, the Coast Guard enforces federal regulations not by checking fishing vessels at random but by targeting those entities most likely to be in violation of fishery regulations, such as vessels operating in areas that are closed to fishing. Because the Coast Guard targets vessels, the primary measure does not reflect the compliance rate of all fishermen in the areas patrolled by the Coast Guard, as the description might suggest, but rather an observed compliance rate, that is, the compliance rate of only those fishing vessels boarded by Coast Guard personnel. The description of this performance measure is inconsistent across department and agency publications. For example, the DHS fiscal year 2005 Performance and Accountability Report and the Coast Guard's Budget-in-Brief describe this measure as an observed compliance rate, but the DHS fiscal year 2007 budget request does not clarify that the measure represents an observed compliance rate rather than the compliance rate of all fishermen in the areas patrolled by the Coast Guard. A measure that is not consistently and clearly stated may affect the validity of managers' and stakeholders' assessments of program performance, possibly leading to misinterpretation of results. While the Coast Guard has controls in place to ensure the timeliness, completeness, accuracy, and consistency of the internal data it creates—that is, original data that Coast Guard personnel collect and enter into its data systems—the agency does not have controls in place to verify or review the completeness and accuracy of data obtained from all of the external sources it uses in calculating some of the primary performance measures. The internal data used to calculate the six primary performance measures are generally reliable, in that the Coast Guard has processes in place to ensure the data's timeliness, completeness, accuracy, and consistency. These controls include data fields, such as pick lists and drop-down lists, that standardize data entry; mandatory data fields to ensure all required data are entered; access controls that allow only authorized users to enter and edit data; requirements for entering data in a timely manner; and multiple levels of review across the agency. To ensure data consistency across the Coast Guard, each of the six non-homeland security programs has published definitions or criteria to define the data used for the primary measures. However, the Coast Guard acknowledges that in some instances these criteria may be open to subjective interpretation, as with the search and rescue program.
For example, when entering data to record the events of a search and rescue incident, rescuers must identify the outcome of the event by listing whether lives were "lost," "saved," or "assisted." While program criteria define a life that is lost, saved, or assisted, some incidents still leave room for subjective interpretation. Through reviews at the sector, district, and headquarters levels, the Coast Guard attempts to remedy any inconsistencies arising from interpretations of these criteria. While the Coast Guard uses internal data for all six of its non-homeland security primary performance measures, it also uses external data to calculate the primary performance measures for two programs—marine safety and marine environmental protection (see table 3). The Coast Guard's procedures for reviewing external data are inconsistent across these two programs. For example, while the Coast Guard has developed better processes and controls for the external data used in the marine safety program's primary performance measure—such as using a news clipping service that gathers media articles on recreational boating accidents and fatalities and using a database that gathers recreational boating injury data from hospitals—the agency does not have processes to test the reliability of the external data used in the marine environmental protection program's primary performance measure. The extent to which controls are used to verify external data for the marine safety and marine environmental protection primary measures is described below. Marine safety. To calculate the marine safety program's primary performance measure, "5-year average annual number of deaths and injuries of recreational boaters, mariners, and passengers," the Coast Guard uses internal data on deaths and injuries of mariners and passengers, as well as external data on recreational boating deaths and injuries from the Boating Accident Reporting Database (BARD), a Coast Guard-managed database that relies on data collected and entered by the states. In 2000, the Department of Transportation Office of Inspector General reported that recreational boating fatality data collected from the states consistently understated the number of fatalities, in part because a precise definition of a recreational boating fatality did not exist. To improve the reliability and consistency of the data, the Coast Guard created a more precise definition and clarified reporting criteria by providing each state with a data dictionary that describes the definitions for all required data fields. In addition, to improve the timeliness of incident reporting, the Coast Guard created a Web-based version of BARD for electronic submission of recreational boating accident data. According to Coast Guard officials, this system allows Coast Guard staff to verify, validate, and corroborate data with each state for accuracy and completeness prior to inclusion in the measure. According to Coast Guard officials, a recent agency analysis showed that these efforts have reduced the error rate from an average of about 6 percent to about 1 percent annually. Despite these improvements, however, the Coast Guard acknowledges that some incidents may never be reported, some may be inaccurately reported, and some duplicates may be included. Coast Guard officials told us that the agency continues to work to reduce these errors by developing additional steps to validate data.
These recent steps include using a news clipping service that gathers all media articles concerning recreational boating accidents and fatalities and using a database that gathers recreational boating injury data from hospitals. Marine environmental protection. In contrast, the Coast Guard does not have processes to validate the reliability of the external data used in the marine environmental protection program's primary performance measure, "5-year average annual number of oil spills greater than 100 gallons and chemical discharges per 100 million tons shipped." Each year the Coast Guard uses internal data on oil spills and chemical discharges, as well as external data from the Corps on the amount of oil and chemicals shipped annually in the United States, to calculate this measure. However, the Coast Guard does not review the Corps' data for completeness or accuracy, nor does it review the data reliability procedures the Corps uses to test the data for completeness or accuracy. Coast Guard officials said that they did not take these steps because they had assumed the Corps performed its own internal assessments, but they were unaware of what those assessments were or whether the Corps actually performed them. While, according to a Corps official, the Corps does have some controls in place, a Coast Guard official agreed that the agency would benefit from having, at a minimum, some familiarity with the internal controls used by the Corps. More than a third (9 of the 23) of the secondary performance measures we assessed are generally sound—that is, they are clearly stated and described; cover key activities of the program; and are objective, measurable, and quantifiable (see table 4). However, as described below, weaknesses exist for the other 14 of these 23 measures. More specifically, for the 14 secondary measures, we found that (1) the Coast Guard does not have measurable targets to assess whether program and agency goals and objectives are being achieved for 12 measures, (2) the Coast Guard does not have agencywide criteria or guidance to accurately reflect program results and ensure objectivity for 1 measure, and (3) the Coast Guard does not clearly state or describe the data or events included in 1 measure. Because of these weaknesses, the Coast Guard cannot provide assurance that these performance measures will not lead to decisions or judgments based on inaccurate, incomplete, or misreported information. More detail on all of the secondary measures we assessed is in appendix II. Measures without measurable targets. Twelve secondary measures—11 living marine resources measures and 1 marine environmental protection measure—do not have annual targets for assessing whether program and agency goals and objectives are being achieved. According to Coast Guard officials, these measures do not have targets because the focus of the program is on the primary performance measures, not on the inputs and outputs reflected in these secondary measures. However, without quantifiable, numeric targets, it is difficult for the Coast Guard to know the extent to which program and agency goals and objectives are being achieved. Measure without criteria or guidance to accurately reflect program results and ensure objectivity. One of the search and rescue program's secondary performance measures that we analyzed, "percent of property saved," does not have criteria or guidance for agency personnel to objectively and consistently determine the value of saved property.
Despite this lack of criteria on how to consistently and objectively determine property values, data from this measure are reported in both the Coast Guard's annual Budget-in-Brief and the DHS fiscal year Performance and Accountability Report. Coast Guard officials said it would be difficult to develop such criteria because of the large number of boats and vessels and their varying values. Officials added that Coast Guard personnel generally do not have access to, and do not follow up to obtain, insurance or damage estimates for saved property. In addition, we found that Coast Guard units do not record property values consistently across the agency. For example, some units do not record property values at all, other units record property values only when the actual value can be determined, and still other units estimate property values using a $1,000-per-foot-of-vessel-length rule of thumb (under which, for instance, a saved 30-foot vessel would be recorded at $30,000 regardless of its actual value). Without criteria or guidance for determining property values, the Coast Guard cannot provide assurance that agency personnel are making these determinations consistently and objectively across the agency, or that the measure accurately reflects program results. Measure not completely clear. Like the primary performance measure for the search and rescue program, one of the program's secondary measures we analyzed, "percent of lives saved after Coast Guard notification," reflects the program's priority of saving lives but excludes those incidents in which 11 or more lives were saved or lost in a single case. As with the primary measure, including such large incidents in performance measures would skew annual performance results, and thus it may be appropriate to exclude them. However, it is important for the Coast Guard to acknowledge, either through a footnote or accompanying text, the exclusion of these incidents—such as during Hurricane Katrina, in which the agency rescued more than 33,500 people—because otherwise, performance results could be misinterpreted or misleading to users. While the primary measures for the Coast Guard's six non-homeland security programs are generally sound and use reliable data, challenges exist in using the primary measures to assess the link between resources expended and results achieved. Ideally, a performance measure not only tells decision makers what a program is accomplishing but also gives them a way to affect those results through the decisions they make about resources—for example, by providing additional resources with a degree of confidence that doing so will translate into better results. Even sound performance measures, however, may have limits to how much they can explain about the relationship between resources expended and results achieved. For the Coast Guard, these limits involve (1) the difficulty of fully reflecting an entire program, such as ice operations or marine environmental protection, in a single performance measure and (2) the difficulty of accounting for the many factors, other than resources, that can affect program results. Recognizing these limitations, and responding to recommendations we have made in past reports, Coast Guard officials have been working on a wide range of initiatives they believe will help in understanding the effects of these other factors and in deciding where resources can best be spent.
According to Coast Guard officials, although the agency has been working on some of these initiatives for several years, the extent and complexity of the effort, together with the challenge of integrating a multitude of initiatives into a data-driven, comprehensive strategy, requires additional time to complete. At this time, the Coast Guard does not expect many of the initiatives to be implemented until 2010. Until these initiatives are developed and operational, it is not possible to fully assess the overall success the agency is likely to have in establishing clear explanations for how its resources and results are linked. Performance measures are one important tool for communicating what a program has accomplished and for providing information for budget decisions. It is desirable for these measures to be as effective as possible in helping to explain the relationship between resources expended and results achieved, because agencies that understand this linkage are better positioned to allocate and manage their resources effectively. The Coast Guard follows DHS guidance in reporting a single measure per program, and doing so is consistent with our prior work, which found that agencies successful in measuring performance and meeting GPRA's goal-setting and performance measurement requirements limited their measures to core program activities essential for producing data for decision making, rather than covering all program activities. Each of the Coast Guard's primary measures for its six non-homeland security programs meets our criterion of covering a key activity. None of them, however, is comprehensive enough to capture all of the activities performed within the program that could affect results. For example, the primary performance measure for the marine environmental protection program relates to preventing oil and chemical spills. This is a key program activity, but under this program the Coast Guard also takes steps to prevent other marine debris and pollutants (such as plastics and garbage), protect against the introduction of invasive aquatic nuisance species, and respond to and mitigate oil and chemical spills that do occur. As such, resources applied to these other activities are not reflected in the program's primary measure, and thus a clear and direct relationship between total program resources and program results is blurred. In some cases, it may be possible to identify or develop a performance measure that fully encapsulates all the activities within a program, but in many cases the range of activities is too broad, and such a measure would be too nebulous to be of real use. Coast Guard officials told us that developing primary measures that incorporate all of the diverse activities within some programs, as well as reflect the total resources used within the program, would be difficult, and that such measures would likely be too broad to provide any value for assessing overall program performance. As such, officials added, performance measures provide a better assessment of program performance and resource use when all of a program's measures—both primary and secondary—are viewed together as a suite of measures. A second challenge in establishing a clearer relationship between resources expended and results achieved is that many other factors can affect performance and blur that relationship.
Some of these factors can be external to an agency—and perhaps outside an agency's ability to influence. At the time of our 2004 report, Coast Guard officials also pointed to such external factors to explain performance results that did not improve despite resource increases. Because of the potentially large number of external factors, and their sometimes unpredictable or often unknown effect on performance, it may be difficult to account for how they—and not the resources expended on the program—affect performance results. Such factors are prevalent in the Coast Guard's non-homeland security programs, according to Coast Guard officials, who cited examples such as the following:

- Changes in fishing policies off the coast of Alaska had an effect on performance results in the search and rescue program. For many years, commercial sablefish and halibut fishermen were allowed to fish only during a 2-week period each year. Given the limited window of opportunity that this system provided, these fishermen had a strong incentive to go out to sea regardless of weather conditions, thereby affecting the number of the Coast Guard's search and rescue cases. In 1994, these regulations were changed; in place of a 2-week fishing season with no limits on the amount of fish any permitted fisherman could harvest, the regulations set a longer season with quotas. This change gave fishermen more flexibility and more opportunity to exercise caution about when they should fish, rather than driving them to go out in adverse weather conditions. Following the change in regulations, Coast Guard statistics show that search and rescue cases in the halibut and sablefish fisheries decreased by more than 50 percent, from 33 in 1994 to 15 in 1995. However, Coast Guard officials said that because of the large number of search and rescue cases in the district during these two years—more than 1,000 annually—this policy change had only a minimal impact on the amount of resources the district used for search and rescue cases.
- Vagaries of weather can also affect a number of non-homeland security missions. Unusually severe weather, such as Hurricane Katrina, can affect the success rates for search and rescue or cause navigational aids to be out of service. Even good weather on a holiday weekend can increase the need for search and rescue operations—and consequently affect performance results—because such weather tends to encourage large numbers of recreational boaters to be out on the water. Harsh winter weather can also affect performance results for the ice operations program.
- Results for the marine environmental protection primary performance measure, "the 5-year average annual number of oil spills greater than 100 gallons and chemical discharges per 100 million tons shipped," can be affected by policies and activities that are not part of the marine environmental protection program. For example, according to Coast Guard officials, a foreign country's decision to institute a more aggressive vessel inspection program could reduce spills caused by accidents in U.S. waters if the inspections uncovered mechanical problems that were corrected before those vessels arrived in the United States.
While foreign inspection activity of this kind is not captured in the primary performance measure, the Coast Guard tracks it through a secondary measure, "the Tokyo and Paris memorandums of understanding port state control reports." This small set of examples demonstrates that, in some situations, factors beyond resources expended may influence performance results. Developing a system or model that could realistically take all of these other factors into account is perhaps impossible, and it would be a mistake to view this second challenge as a need to do so. Rather, the challenge is to develop enough sophistication about each program's context so that the Coast Guard can more systematically consider such factors, and then explain the influence of these factors on resource decisions and performance results. The Coast Guard is actively seeking to address challenges such as those discussed above through a set of efforts, some of which have been under way for several years. In 2004, we reported that several initiatives had already begun, and we recommended that the Coast Guard ensure that its strategic planning process and associated documents include a strategy for identifying intervening factors that may affect performance and systematically assess the relationship among these factors, resources expended, and results achieved. Shortly thereafter, the Coast Guard chartered a working group to investigate its more than 50 then-ongoing initiatives, make recommendations on their value, contribution, and practicality, and influence agency decisions on the integration, investment, and institutionalization of these initiatives. The working group's product was a "road map" that clearly defined executable segments, sequencing, and priorities. These results were then documented in a January 2005 Coast Guard internal report that summarized these priorities. Agency documents indicate that the Coast Guard later narrowed the original 50 initiatives to the 25 considered most critical and immediate by evaluating and categorizing all 50 based on their ability to contribute to the agency's missions. These 25 initiatives, listed along with their status in appendix III, involve a broad range of activities that fall into seven main areas, as follows:

Measurement. Five initiatives are intended to improve the agency's data collection, including efforts to quantify input, output, and performance to enhance analysis and fact-based decision making.

Analysis. Eight initiatives are intended to transform data into information and knowledge to answer questions and enhance decision making on issues such as performance, program management, cause-and-effect relationships, and costs.

Knowledge management. Three ongoing initiatives are intended to capture, evaluate, and share employee knowledge, experiences, ideas, and skills.

Alignment. Three initiatives are intended to improve the consistency and alignment of agency planning, resource decisions, and analysis across all Coast Guard programs.

Access. Two initiatives relate to making data, information, and knowledge transparent and available to employees.

Policy and doctrine. Three initiatives are intended to develop new and maintain current Coast Guard management policies.

Communication and outreach. One initiative is intended to assist and guide program managers and staff to understand and align all aspects of the Coast Guard's overall management strategy.
We found that one of the initiatives the working group included among the most critical and immediate relates, in part, to the first challenge we discussed—that is, developing new measures and improving the breadth of existing measures to better manage Coast Guard programs and achieve agency goals. Coast Guard efforts have been ongoing in this regard, and our current work has identified several performance measures that were recently improved, and others that are currently under development. For example, to provide a more comprehensive measure of search and rescue program performance, the Coast Guard is improving its ability to track lives-unaccounted-for—that is, those persons who remain missing at the end of a search and rescue response. According to Coast Guard officials, the agency anticipates eventually being able to include data on lives-unaccounted-for in the primary performance measure. Also, the Coast Guard began including data on the number of recreational boating injuries, along with the data on mariner and passenger deaths and injuries and recreational boater deaths, which can help provide a more comprehensive primary measure for the marine safety program. In addition, OMB guidance recently began requiring efficiency measures as part of performance management, and in response, the Coast Guard has started developing such measures. The Coast Guard is also developing a variety of performance measures to capture agency performance related to other activities, such as the prevention of invasive aquatic nuisance species (marine environmental protection), maritime mobility (aids to navigation), and domestic and polar icebreaking (ice operations). Many of the Coast Guard's other ongoing initiatives are aimed at the second challenge—that is, developing a better understanding of the various factors that affect the relationship between resources and results. This is a substantial undertaking, and in 2005, upon the recommendation of the working group, the Coast Guard created an office to conduct and coordinate these efforts. This office has taken the lead in developing, aligning, implementing, and managing all of the initiatives. Together, the activities cover such steps as (1) improving measurement, with comprehensive data on activities, resources, and performance; (2) improving agency analysis and understanding of cause-and-effect relationships, such as the relationship between external factors and agency performance; and (3) providing better planning and decision making across the agency. Coast Guard officials expect that once these initiatives are completed, the Coast Guard will have a more systematic approach to link resources to results. The Coast Guard has already been at this effort for several years but does not anticipate implementation of many of these initiatives until at least fiscal year 2010. The amount of time that has elapsed since our 2004 report may raise some concerns about whether progress is being made. However, as described in the examples below, many of these are complex, data-driven initiatives that make up a larger comprehensive strategy to better link resources to results, and as such, we think the lengthy time frame reflects the complexity of the task. According to Coast Guard officials, the agency is proceeding carefully and is still learning about how these initiatives can best be developed and implemented.
Three key efforts help show the extent of, and the interrelationships among, the various components of this work:

Standardized reporting. The Coast Guard is currently developing an activities dictionary to standardize the names and definitions for all Coast Guard activities across the agency. According to Coast Guard officials, this activities dictionary is a critical step in continuing to develop, implement, and integrate these initiatives. Officials added that standardizing the names and definitions of all Coast Guard activities will create more consistent data collection throughout the agency, which is important because these data will be used to support many other initiatives.

Measurement of readiness. Another initiative, the Readiness Management System, is a tool being developed and implemented to track the agency's readiness capabilities by providing up-to-date information on resource levels at each Coast Guard unit as well as the certification and skills of all Coast Guard uniformed personnel. This information can directly affect outcomes and performance measures by providing unit commanders with information to reconfigure resources for a broad range of missions. Tracking this information, for example, should allow a unit's commanding officer to determine what resources and personnel skills are needed to accomplish the unit's key activities or to take on new programs or activities. Coast Guard officials told us that the Readiness Management System is in the early stages of being implemented across the agency.

Framework for analyzing risk, readiness, and performance. According to Coast Guard officials, the information from the Readiness Management System will be integrated with another initiative currently under development, the Uniform Performance Logic Model. This initiative is intended to illustrate the causal relationships among risk, readiness management, and agency performance. Coast Guard officials said that by accounting for these many factors, the model will help decision makers understand why events and outcomes occur, and how these events and outcomes are related to resources. For example, the model will provide the Coast Guard with an analysis tool to assist management with decisions regarding the allocation of resources.

The Coast Guard currently anticipates that many of the 25 initiatives will initially be implemented by fiscal year 2010 and expects further refinements to extend beyond this time frame. While the Coast Guard appears to be moving in the right direction and has neared completion of some initiatives, until all of the agency's efforts are complete, it remains too soon to determine how effective it will be at clearly linking resources to performance results. It is important for the Coast Guard to have sound performance measures that are clearly stated and described; cover key program activities; are objective, measurable, and quantifiable—including having annual targets; and use reliable data. This type of information would help Coast Guard management and stakeholders, such as Congress, make decisions about how to fund and improve program performance. We found that the Coast Guard's non-homeland security performance measures satisfy many of these criteria and use data that are generally reliable. The weaknesses and limitations we did find do not mean that the measures are not useful but rather represent opportunities for improvement.
However, if these weaknesses are not addressed—that is, if measures are not clearly stated and well-defined, do not have measurable performance targets, lack criteria to objectively and consistently report data, or lack processes to ensure external data are reliable—the information reported through these measures could be misinterpreted, misleading, or inaccurate. For example, without either processes in place to review the reliability of external data used in performance measures or a familiarity with the controls used by external parties to verify and validate these data, the Coast Guard cannot ensure the completeness or accuracy of all of its performance results. While the Coast Guard's measures are generally sound, even sound performance measures have limits as to how much they can explain about the relationship between resources expended and results achieved. The Coast Guard continues to work to overcome these limitations by developing a number of different initiatives, including but not limited to developing and refining the agency's performance measures. Although the agency appears to be moving in the right direction, until all of the Coast Guard's efforts are complete, we will be unable to determine how effective these initiatives are at linking resources to results. In the interim, an additional step the Coast Guard can take to further demonstrate the relationship between resources and results is to provide additional information or measures in some of its annual publications—aside from the one primary measure used in department publications—where doing so would provide greater context or perspective. This could be done in venues such as the Coast Guard's annual Budget-in-Brief or any program-specific publications, where reporting some secondary measures or additional data could help more fully articulate to stakeholders and decision makers the relationship between resources expended and results achieved. Reporting supplemental information on such things as the percentage of aids to navigation available and in need of maintenance, the annual number of search and rescue cases, and icebreaking activities beyond the Great Lakes region would provide additional information on the annual levels of activity that constitute the aids to navigation, search and rescue, and ice operations programs: information that external decision makers, in particular, might find helpful. Reporting these measures would give Congress additional information on activities that may require more or less funding while the Coast Guard continues work on its many ongoing initiatives aimed at better linking performance results with resources expended.
To improve the quality of program performance reporting and to more efficiently and effectively assess progress toward achieving the goals or objectives stated in agency plans, we recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to:

Refine certain Coast Guard primary and secondary performance measures by (1) further clarifying the ice operations primary measure by clearly and consistently describing the geographic area and number of waterways included in the measure; the living marine resources primary measure by clearly and consistently reporting the scope of the measure; and the search and rescue primary measure and the search and rescue "percent of lives saved after Coast Guard notification" secondary measure by reporting those incidents or data that are not included in the measures; (2) developing measurable performance targets to facilitate assessments of whether program and agency goals and objectives are being achieved for the 11 living marine resources secondary measures and the 1 marine environmental protection secondary measure, "Tokyo and Paris memorandums of understanding port state control reports," that lack annual targets; and (3) establishing agencywide criteria or guidance to help ensure the objectivity and consistency of the search and rescue program's "percent of property saved" secondary performance measure.

Develop and implement a policy to review external data provided by third parties that are used in calculating performance measures to, at a minimum, be familiar with the internal controls external parties use to determine the reliability of their data.

Report additional information—besides the one primary measure—in appropriate agency publications or documents where doing so would help provide greater context or perspective on the relationship between resources expended and program results achieved.

We provided a draft of this report to the Department of Homeland Security, including the Coast Guard, for their review and comment. The Department of Homeland Security and the Coast Guard generally agreed with the findings and recommendations of the draft and provided technical comments, which we incorporated to ensure the accuracy of our report. The Department of Homeland Security's written comments are reprinted in appendix IV. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Secretary of Homeland Security; the Commandant of the Coast Guard; and the Director, Office of Management and Budget; and make copies available to other interested parties who request them. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected] or (202) 512-9610. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. For our first objective—whether the primary performance measures for the Coast Guard's six non-homeland security programs are sound, and whether the data used to calculate them are reliable—we used previously established GAO criteria to determine the soundness of the primary performance measures.
Specifically, we used our judgment to assess whether the measures are (1) clearly stated and described; (2) cover a key program activity and represent mission goals and priorities; (3) objective, that is, not open to bias or subjective interpretation; (4) measurable, that is, represent observable events; and (5) quantifiable, that is, are countable events or outcomes. A measure should be clearly stated and described so that it is consistent with the methodology used to calculate it and can be understood by stakeholders both internally and externally. Measures should also cover key program activities and represent program and agency goals and priorities to help identify those activities that contribute to the goals and priorities. To the greatest extent possible, measures should be objective, that is, reasonably free of bias or manipulation that would distort an accurate assessment of performance. When appropriate, measures should be measurable and quantifiable, including having annual targets, to facilitate future assessments of whether goals or objectives were achieved, because comparisons can be easily made between projected performance and actual results. In addition, to further assess the soundness of the primary performance measures, we interviewed program officials from each non-homeland security program and reviewed planning and performance documentation from each program office at the headquarters, district, and sector levels. Program officials we spoke with included headquarters officials responsible for developing and implementing performance measures in each program, as well as officials at the district and sector levels responsible for collecting and entering performance data. We reviewed documentation on Coast Guard policies and manuals for performance measures, Coast Guard annual performance plans and reports, commandant instructions, prior GAO reports, Office of Management and Budget Program Assessment Rating Tool reviews for each program, and Department of Homeland Security annual reports. To determine the reliability of data used in the primary measures, we assessed whether processes and controls were in place to ensure that the data used in the measures are timely, complete, accurate, and consistent, and appear reasonable. We reviewed legislative requirements for data reliability in both the Government Performance and Results Act of 1993 and the Reports Consolidation Act of 2000 and reviewed Coast Guard standards and procedures for collecting performance data and calculating results. In addition, we interviewed agency officials at Coast Guard headquarters, as well as at the district and sector levels, regarding standardized agencywide data collection, entry, verification, and reporting policies, and inquired whether and how these procedures differed across programs and at each level of the organization. We observed data entry for the Marine Information for Safety and Law Enforcement database at Coast Guard district and sector offices in Boston, Massachusetts; Miami, Florida; and Seattle, Washington; a district office in Cleveland, Ohio; as well as at an air station in Miami, Florida, and a marine safety office in Cleveland, Ohio, to check for inconsistencies and discrepancies in how data are collected and maintained throughout the agency. We selected these field locations because of the number and types of non-homeland security programs that are performed at these locations.
We also spoke with information technology officials responsible for maintaining the Marine Information for Safety and Law Enforcement database. For our second objective—whether selected secondary performance measures for four of the Coast Guard's non-homeland security programs are sound—we selected measures in addition to the primary performance measures for the aids to navigation, living marine resources, marine environmental protection, and search and rescue programs. We selected these programs because they had the largest budget increases between the fiscal year 2005 budget and the Coast Guard's fiscal year 2006 budget request, and because they are of particular interest in light of events surrounding Hurricane Katrina. In addition, we did not assess any of the secondary measures that were in development at the time of our report. For these four programs, we assessed the soundness of only those other performance measures that Coast Guard officials said were high-level, strategic measures used for performance budgeting, budget projections, management decisions, and external reporting. The 23 secondary measures we assessed for these four programs represent more than half of the 39 high-level, strategic secondary measures used to manage the six non-homeland security programs. To assess the soundness of the selected 23 secondary measures, we used the same GAO criteria and followed the same steps that we used to determine the soundness of the primary performance measures. For our third objective—the challenges, if any, that are present in trying to use these measures to link resources expended to results achieved—we interviewed Coast Guard budget officials at agency headquarters to discuss how performance measures are used in resource and budget allocation decision-making processes. We reviewed previous GAO reports on performance measures, performance reporting, and the link between the Coast Guard's resources expended and results achieved. We also interviewed program officials at Coast Guard headquarters about ongoing initiatives the agency is developing and implementing to link resources expended to results achieved. We conducted our work from July 2005 to August 2006 in accordance with generally accepted government auditing standards. Appendix II provides our findings on the soundness of the high-level, strategic secondary measures we assessed (see table 5), as well as a list of those high-level, strategic secondary measures we did not assess (see table 6). Because of the large number of secondary measures for the Coast Guard's six non-homeland security programs, we assessed the soundness of secondary measures for the aids to navigation, living marine resources, marine environmental protection, and search and rescue programs, and we did not assess the soundness of secondary measures for the ice operations and marine safety programs. Appendix III provides a list of the Coast Guard's ongoing initiatives to improve the agency's planning, resource management, and decision support systems to more closely align performance with resources. (See table 7.) In addition to the individual named above, Billy Commons, Christine Davis, Michele Fejfar, Dawn Hoff, Allen Lomax, Josh Margraf, Dominic Nadarski, Jason Schwartz, and Stan Stenersen made key contributions to this report.

Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain. GAO-05-161. Washington, D.C.: January 31, 2005.
Coast Guard: Relationship between Resources Used and Results Achieved Needs to Be Clearer. GAO-04-432. Washington, D.C.: March 22, 2004.

Coast Guard: Comprehensive Blueprint Needed to Balance and Monitor Resource Use and Measure Performance for All Missions. GAO-03-544T. Washington, D.C.: March 12, 2003.

Performance Reporting: Few Agencies Reported on the Completeness and Reliability of Performance Data. GAO-02-372. Washington, D.C.: April 26, 2002.

Coast Guard: Budget and Management Challenges for 2003 and Beyond. GAO-02-538T. Washington, D.C.: March 19, 2002.

Coast Guard: Update on Marine Information for Safety and Law Enforcement System. GAO-02-11. Washington, D.C.: October 17, 2001.

Tax Administration: IRS Needs to Further Refine Its Tax Filing Performance Measures. GAO-03-143. Washington, D.C.: November 22, 2002.

The Results Act: An Evaluator's Guide to Assessing Agency Performance Plans. GAO/GGD-10.1.20. Washington, D.C.: April 1998.

Agencies' Annual Performance Plans under the Results Act: An Assessment Guide to Facilitate Congressional Decision Making. GAO/GGD/AIMD-10.1.18. Washington, D.C.: February 1998.

Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.
Using performance measures, the Coast Guard explains how well its programs are performing. To do so, it reports one "primary" measure for each program (such as percent of mariners rescued) and maintains data on other, "secondary" measures (such as percent of property saved). Concerns have been raised about whether measures for non-homeland security programs accurately reflect performance because results did not rise or fall as resources were added or reduced. For the six non-homeland security programs, GAO used established criteria to assess the soundness of the primary measures--that is, whether measures cover key activities; are clearly stated; and are objective, measurable, and quantifiable--and the reliability of data used to calculate them. GAO also used these criteria to assess the soundness of 23 selected secondary measures. Finally, through interviews and report review, GAO assessed challenges in using measures to link resources to results. While some opportunities for improvement exist, the primary measures for the Coast Guard's six non-homeland security programs are generally sound, and the data used to calculate them are generally reliable. All six measures cover key program activities and are objective, measurable, and quantifiable, but three are not completely clear--that is, they do not consistently provide clear and specific descriptions of the data, events, or geographic areas they include. Also, the processes used to enter and review the Coast Guard's own internal data are likely to produce reliable data; however, neither the Department of Homeland Security (DHS) nor the Coast Guard has policies or procedures for reviewing or verifying data from external sources, such as other federal agencies. Currently, the review processes vary from source to source, and for the primary measure covering marine environmental protection (which concerns oil and chemical spills), the processes are insufficient. Of the 23 secondary performance measures GAO assessed, 9 are generally sound, and weaknesses exist in the remaining 14. These weaknesses include (1) a lack of measurable performance targets, (2) a lack of agencywide criteria or guidance to ensure objectivity, and (3) unclear descriptions of the measures. Two main challenges exist with using primary measures to link resources to results. The first challenge is comprehensiveness--that is, although each primary measure captures a major segment of program activity, no one measure captures all program activities and thereby accounts for all program resources. The second challenge involves external factors, some of which are outside the Coast Guard's control, that affect performance. For example, weather conditions can affect the amount of ice that must be cleared, the number of aids to navigation that need repair, or the number of mariners that must be rescued. As a result, linking resources and results is difficult, and although the Coast Guard has a range of ongoing initiatives to do so, it is still too early to assess the agency's ability to successfully provide this link.
FAA’s roughly 6,100 technicians are part of the agency’s Air Traffic Organization (ATO) and are located organizationally within ATO’s Technical Operations Services. (See app. II for ATO’s organization.) Physically, the technicians are located throughout the country at approximately 600 air traffic control facilities, which they are responsible for maintaining, repairing, and certifying, together with the systems and equipment the facilities contain. Currently, FAA operates nearly 60,000 pieces of legacy equipment and has begun to deploy NextGen equipment. Technicians maintain the equipment, certify that it is working properly by conducting periodic performance checks, and repair malfunctioning equipment and return it to service. They conduct maintenance under various approaches depending on the equipment. Those approaches include (1) periodic maintenance (which includes periodic equipment inspections, performance checks, and routine maintenance), (2) condition- based maintenance (which includes proactive maintenance tasks to predict or prevent equipment failures), and (3) run-to-fault maintenance (which means maintenance is performed after the equipment stops functioning—an approach that, according to FAA, is normally applied when other types of maintenance actions will not reduce the probability of failure or extend the lifetime of the equipment). Since 2007, FAA has used reliability-centered maintenance to determine the most appropriate approach and timing for conducting maintenance activities for each type of equipment. Reliability-centered maintenance requires that data on the function and performance of specific equipment be collected and analyzed, including data on the causes and consequences of failure, in order to determine the maintenance approach needed to keep the equipment functioning effectively and prevent future failures. For example, performance data can be analyzed to determine whether a particular component wears out with age or fails randomly; this information is then used to decide the maintenance approach most appropriate for that item. (Fig. 1 shows a technician upgrading lighting on an approach lighting system.) As mentioned previously, FAA’s technicians are responsible for minimizing the frequency, duration, and impact of equipment outages. Over the last 10 years, the frequency and duration of unscheduled outages has generally increased. (See figs. 2 and 3.) Age and the resulting deteriorating condition of equipment and facilities are contributing to the increase in outages and repair time. According to a senior FAA official, the number of outages decreased around 2008 because of changes in reporting practices. FAA is still determining technician responsibilities and maintenance requirements under NextGen. A senior FAA official noted that the agency plans to look at near-term system deployments and new system requirements to see what maintenance requirements are planned for new systems in the short term. The initial systems critical to implementing NextGen—En Route Automation Modernization (ERAM) and Automatic Dependent Surveillance Broadcast (ADS-B)—are currently being deployed, and FAA expects that several other systems will come online over the next several years. (See table 1 for FAA’s schedule for deploying NextGen systems.) Initially, FAA planned to decommission legacy equipment as it deployed related NextGen equipment, but it has since decided to retain much of the legacy equipment, according to a senior ATO manager. 
With much of the legacy equipment remaining in service alongside incoming NextGen systems, the technicians' workload will increase in the near term. Over the last 11 years, staffing levels for technicians reached a high of 6,721 in fiscal year 2001 and, with some fluctuation, dropped to a low of 6,086 at the end of fiscal year 2008—a decline of about 9 percent. At the end of fiscal year 2009, the number of technicians increased to 6,147, slightly more than the minimum staffing level established in FAA's contract with the Professional Airways Systems Specialists (PASS) union. (See table 2.) Over the same period, the number of pieces of equipment increased from 40,360 in 1999 to 63,846 in 2009, while the number of air traffic control facilities decreased from 651 in 1999 to 581 in 2009. The number of technicians declined slightly during fiscal years 2006 through 2008 because separations exceeded hiring, as shown in table 3. During the past 11 years, the number of technician separations—primarily because of retirements—has averaged about 280 per year. The number peaked at 366 separations in fiscal year 2006 and then decreased to 211 separations in fiscal year 2009—a decline of 42 percent. (See table 4.) The number of technicians eligible to retire each year over this period has ranged from about 800 to about 1,000. The largest numbers of technicians retired in 2005 and 2006—about one-third of those eligible each year. (See table 5.) The relatively low number of technicians retiring in 2009 may be due to the downturn in the economy. ATO's Technical Operations Training and Development Group (Technical Operations Training) is responsible for training technicians. (See app. II for ATO's organizational chart.) Through the Technical Operations Training and Personnel Certification Program, FAA grants certification to technicians who have attained a professional level and are responsible for the operation and performance of air traffic control facilities and equipment. The certification program consists of five types of training: (1) resident training taught in a classroom environment by an instructor; (2) distance learning, such as correspondence study or computer-based instruction; (3) refresher training, which can be provided through resident or distance learning courses, for technicians who hold a certification; (4) on-the-job training (OJT), providing direct experience in the work environment where the employee is required to perform his or her duties; and (5) enhanced hands-on training (EHOT) and demonstration of proficiency (DoP) training. To obtain certification, technicians must satisfactorily complete their training—including theory-of-operations training, OJT in the workplace, or EHOT and DoP—at the training location or pass a performance examination in the workplace. Technicians must also receive an endorsement, first by a manager and then by a second-level manager, that the preceding actions have been properly completed. At the beginning of the year, Technical Operations Training works with FAA's human resource personnel to obtain an estimate of new hires' training needs. Technicians are earmarked for a piece of equipment at a particular facility when they are hired; over the course of their careers, they may be trained on many pieces of equipment. All new hires must complete one equipment course in their first year, and that training is targeted to the needs of the facility to which they are assigned. Technicians need to pass two types of equipment courses to reach the full-performance level.
New hires are at their facility for 30 days for familiarization and then go to the FAA Academy for theory-of-operations training and one equipment course. Afterward, the technician's manager determines the additional equipment on which the technician needs training. Training to work on legacy equipment is provided at the academy, where technicians reside during the training. When FAA acquires new air traffic control equipment, it follows an established process for training technicians. Vendor courses are the primary source of training for NextGen systems coming into the FAA inventory. Figure 4 shows how FAA plans and funds technician training. FAA's workforce planning for technicians partially or mostly incorporates key practices of leading organizations, but no practices are fully incorporated, and FAA has no comprehensive, written strategy to guide its efforts. To the extent that the agency does not incorporate leading practices, it may be limited in its ability to plan effectively for the right number of technicians with the right skill sets, both now and in the near term. Table 6 presents our analysis of the extent to which FAA has incorporated key practices of leading organizations in its workforce planning. FAA is partially following a leading practice for workforce planning in the area of determining current and future critical skills and competencies. (See table 7.) To establish and maintain an inventory of employee skills and competencies, FAA assesses technicians' skills and competencies at hiring and then biennially. Newly hired and on-board technicians complete competency-based technical training on legacy systems and equipment to establish a baseline level of technical proficiency on these systems and equipment. Additionally, since August 2007, FAA has assessed its technicians' proficiency every 2 years as part of an Aviation Safety Oversight Credentialing Program to ensure that their skills are current and they remain competent to perform work on the equipment. FAA's initial and biennial skills assessments evaluate technicians' readiness to meet the agency's current maintenance needs, but FAA has not determined whether its technician workforce has the skills and competencies needed to achieve future programmatic results. Such a determination will be critical as the transformation to NextGen proceeds and the agency faces organizational as well as technological changes. FAA's strategic plan for NextGen—the NextGen Implementation Plan—describes the technology changes planned through 2018 but does not mention workforce planning—including planning for critical skills and competencies—for technicians. FAA officials stated that the agency has started to determine its maintenance requirements for NextGen equipment. This determination will affect the skills and competencies that technicians will need under NextGen. As part of this effort, FAA's NextGen Integration and Implementation Office is establishing a workgroup with members representing relevant FAA divisions and technician subject matter experts. According to FAA officials, this workgroup will look at changes in FAA's maintenance philosophy and NextGen equipment acquisitions and needs, both leading up to and during implementation. The officials said FAA will consider these factors as it develops NextGen planning documents that outline, among other things, changes needed in the technician workforce, including changes in skills, competencies, and training.
In addition, Technical Operations Training officials have dedicated a staff member to compiling technician job descriptions, tasks, and training courses. According to the officials, FAA will use this information to develop a skills and competency model for training purposes (modeling efforts are discussed later in this report). FAA's workforce planning efforts partially address leading practices in developing strategies to close the gap between needed and actual skills and competencies. Although FAA has reasonable strategies in place to allocate staffing annually, it does not have a staffing model and has not developed succession plans to prepare for impending retirements. (See table 8.) FAA has developed annual technician hiring and staff planning strategies that are derived from the budget—a human capital strategy that takes into consideration the resources that can reasonably be expected to be available, following a key practice of leading organizations. The staffing process, which is discussed in more detail later in this report, begins with a budgetary dollar amount that is used to determine how many new full-time technician positions can be filled. Top management, with input from front-line managers in the form of requests based on their facility needs, then distributes these positions across locations. According to FAA officials, these recommendations generally take into account FAA's equipment inventory and restoration requirements and the varying levels of trained, certified, and experienced technicians at FAA facilities, although top management considers these factors in an ad hoc manner. Foremost among the succession planning challenges ATO faces is the impending retirement of portions of the technician workforce. We updated FAA's 2008 projections with the most current federal personnel data from the Office of Personnel Management's Central Personnel Data File (CPDF) and found that 23 percent of the technicians on staff at the end of fiscal year 2009 would be eligible for retirement in 2012. Moreover, if the 2009 staffing level remained constant, 31 percent would be eligible for retirement in 2015 and over 50 percent in 2020. (See fig. 5.) From 2005 through 2009, FAA averaged 236 actual technician retirements annually, or 27 percent of those eligible. If actual retirements, estimated for existing staff, continued at that rate, FAA could face over 500 retirements in fiscal year 2015 and about 900 retirements in fiscal year 2020. As discussed previously, not all technicians who are eligible to retire will do so, and as seen in figure 5, the gap between the number of technicians who are eligible to retire and those projected to actually retire will continue to expand at least through 2020. FAA does not have succession plans for technicians—that is, FAA lacks a pipeline to develop new technicians to respond to the operational impact of retirements, attrition, and the implementation of midterm NextGen capabilities. Officials noted that new technicians are brought in only as others retire or leave the agency; however, this strategy does not factor in how long it takes new technicians to become fully certified and acquire skills and abilities on a par with those of the retiring technicians. A pipeline approach to workforce planning, which would create a steady flow of trained technicians with some on-the-job experience to replace experienced technicians as they retire, would help to alleviate the pressures resulting from FAA's current approach.
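The arithmetic behind these projections can be reconstructed roughly as follows. The sketch assumes a constant fiscal year 2009 staffing level and applies the historical rate of actual retirements (about 27 percent of those eligible) to the eligibility shares cited above; the 55 percent figure for 2020 is our illustrative reading of "over 50 percent," so the outputs approximate, rather than reproduce, FAA's model.

```python
# Back-of-the-envelope reconstruction of the projection logic above.
# Eligibility shares come from the report; the 2020 share (55%) is an
# illustrative reading of "over 50 percent."

TECHNICIANS_FY2009 = 6_147   # staffing level at end of fiscal year 2009
ACTUAL_RETIRE_RATE = 0.27    # FY2005-2009 average share of eligible
                             # technicians who actually retired

eligibility_share = {2012: 0.23, 2015: 0.31, 2020: 0.55}

for year, share in sorted(eligibility_share.items()):
    eligible = TECHNICIANS_FY2009 * share
    retirements = eligible * ACTUAL_RETIRE_RATE
    print(f"FY{year}: ~{eligible:,.0f} eligible, "
          f"~{retirements:,.0f} projected retirements")
```

Run as written, the sketch yields roughly 515 projected retirements for fiscal year 2015 and roughly 910 for fiscal year 2020, consistent with the "over 500" and "about 900" figures cited above.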
In all of the focus groups we conducted, participants raised concerns about this lack of a pipeline, noting that when an experienced technician trained to work on multiple systems is replaced by a new technician trainee, the new technician cannot fully replace the original employee for years, placing a burden on other technicians at the facility as well as on the training program. FAA officials acknowledged that it can take 2 to 3 years for new technicians to attain the skills and abilities of more experienced technicians. According to FAA's analysis, 686 full-performance-level technicians with multiple certifications will be eligible to retire by the end of fiscal year 2011. (These retirements will also have an impact on training, as discussed later in this report.) The expected increase in retirements could also affect FAA's implementation of midterm NextGen capabilities, scheduled for completion by 2018. Currently, FAA plans to implement ERAM by 2011 and the initial segments of several other systems—including System Wide Information Management (SWIM) and Data Communications (DataComm)—by fiscal year 2016, as well as continue to operate its legacy equipment. With both legacy and new systems to maintain, technicians could have more varied, if not more, responsibilities and therefore need a wider range of skills, further highlighting the importance of succession planning. FAA needs to continue to plan for these issues so that it can minimize the operational impact of projected retirements. FAA plans to rely on hiring and training to address gaps between the skills and competencies that its technicians currently have and those that they will need in the future. Senior FAA officials recognize that, as NextGen technologies are introduced, technicians will need very broad, and perhaps difficult-to-find, skill sets that will allow them to maintain both old and new air traffic control technologies. According to ATO's strategic human capital planning document, technicians will continue to need the majority of their current skills to maintain legacy systems, and they will need to enhance many of these skills to maintain new computer networks and automated software tools. ATO's planning document identifies timely new-hire selections and technical training as integral to maintaining and acquiring the correct knowledge and skill base for FAA's technician workforce. Moreover, the document states, technicians will need a full set of technical, business, and leadership skills to be successful in a rapidly changing environment. As the rate of technological change increases, it will be an ongoing challenge for ATO to acquire and maintain a technically current workforce able to integrate new technologies and respond effectively to changes in technology, as well as maintain legacy technologies. Additionally, to meet this challenge, FAA will have to address new and expanded training requirements and deliver that training in a cost-efficient and timely manner. FAA does not have a staffing model for technicians, and senior officials told us that FAA is currently not sure how many technicians are and will be needed to maintain the national airspace system. This uncertainty limits FAA's ability to plan strategically for the technician workforce. Senior FAA officials confirmed that there is no staffing standard for technicians. They noted that previous efforts to develop a staffing model for technicians were not completed because they involved too many variables and that FAA has not yet identified staffing requirements for the technician workforce.
Those officials pointed to the technicians' union contract as the primary factor affecting FAA's annual technician staff planning process. They explained that the 6,100 staffing minimum established by the PASS contract was negotiated and is not based on equipment inventory or maintenance requirements. They further noted that this contractual staffing minimum has deterred FAA from implementing staffing requirements for the technician workforce. The 2000 contract has not been renegotiated since it expired in 2005; however, as of April 2010, the parties were reportedly discussing a return to the negotiating table. Although the negotiated minimum staffing level may preclude changes below that level, it does not prevent FAA from examining the numbers of staff and the skills required for its technician workforce now and in the future. According to the Vice President of Technical Operations Services, FAA may require more technicians in the future to adequately maintain both legacy and NextGen systems. Conversely, FAA may require fewer technicians because of the digital nature of the new equipment and different maintenance approaches, such as reliability-centered maintenance. FAA recently hired staff to begin collaboratively developing an accurate, requirements-based predictive staffing model. A senior FAA official suggested that FAA will look at various NextGen planning documents to determine new maintenance requirements. FAA also plans to look at near-term system deployments and new system requirements to determine the short-term maintenance requirements for NextGen systems. Senior FAA officials said that FAA will not be taking as much equipment out of the national airspace system as previously thought; systems that were originally planned to be decommissioned are still in service and are expected to remain in service indefinitely. These officials further noted that the maintenance process requires administrative and business management personnel as well as technicians, and the staffing model will therefore identify and incorporate nontechnical as well as technical positions. The FAA Reauthorization Bill of 2009 contains a provision that would require the National Academy of Sciences to study the assumptions and methods FAA uses to estimate staffing needs for its technicians to ensure proper maintenance and certification of the national airspace system. If enacted, this provision could help address FAA's staffing approach. According to our analysis, FAA has mostly developed the capacity needed to support its technician workforce planning strategies. (See table 9.) FAA has taken steps to educate managers and employees on the availability and use of hiring flexibilities. For example, FAA provides managers with guidance on special appointing authorities, such as "on-the-spot" hiring and recruitment and retention incentives. Moreover, FAA's guidance for using specific hiring flexibilities provides clear and transparent rules to help ensure that managers and supervisors make fair and effective use of the flexibilities, further addressing this leading practice. FAA has streamlined its process for hiring technicians, further building the capacity to support its workforce planning strategies. For example, FAA uses a Web-based automated rating and ranking system for screening applicants and making candidate selections for technician vacancies.
According to FAA, it has created efficiencies in the hiring process for technicians by centralizing this function in Oklahoma City, Oklahoma, much as it has done to streamline its hiring of air traffic controllers. Additionally, according to FAA, it has the ability to expand the use of temporary pre-employment clearance processing centers to include technicians. These centers provide a centralized interview site and "one-stop" service for potential new hires, and their use can significantly shorten the hiring process, which can take up to 6 months, thus allowing FAA to get qualified applicants into academy training sooner. Managers at one location we visited nevertheless stated that, although management tries to hire and train new technicians as quickly as possible, the process takes time and is still too slow. For the transition to NextGen, FAA has acknowledged that a new generation of personnel selection procedures may be needed. According to FAA, the next generation of selection procedures should be developed in parallel with the operational evolution of the national airspace system. Identifying those future requirements will be part of the agency's overall strategic workforce planning effort, requiring the continued development and validation of a methodology for identifying gaps between current and future knowledge, skills, and abilities, and staffing profiles in safety-critical occupations. FAA has partially implemented initiatives—such as developing strategic human capital goals and analyzing attrition—to monitor and evaluate its progress in workforce planning for technicians, but it does not have measures to evaluate the contribution that its technician human capital strategies have made toward achieving programmatic results. (See table 10.) FAA has put monitoring and evaluation initiatives in place to assess progress toward its human capital goals for technicians, such as its hiring, training, and retention goals. These initiatives are consistent with the practices of leading organizations and provide information for oversight by identifying performance shortfalls and options for corrective action. For example, FAA has a strategic initiative with activity targets and milestones in its human resource business plan to improve its external recruiting for several occupations, including technicians, and it has met these targets and milestones. FAA, as discussed earlier, also has completed an attrition analysis of its technical operations workforce, which includes technicians—an important step in identifying and addressing staffing goals. FAA plans to use this analysis to understand the unique characteristics of employee subgroups, including technicians, in an effort to better forecast specific staffing turnover and anticipate needs for new hires. However, FAA needs to better link its human capital strategies and programmatic results to evaluate the contribution that technician human capital strategies have made to program results. As noted above, the agency has just begun to identify—and has no strategy to help ensure its technicians will have—the skills and competencies needed to maintain NextGen systems; linking FAA's human capital strategies for the technician workforce to that workforce's responsibilities in the transition to NextGen will be critical as the transformation proceeds.
For example, a workforce plan can include measures that indicate whether the agency executed its hiring, training, or retention strategies as intended and achieved the goals for these strategies, and how these initiatives changed the workforce's skills and competencies. It can also include additional measures that address whether the agency achieved its program goals and the link between human capital and program results. Without periodic measurement of the extent to which human capital activities contributed to achieving programmatic goals, FAA lacks information for identifying performance shortfalls and appropriate corrective actions for effective oversight. FAA involves top management, but minimally involves technicians, in developing, communicating, and implementing workforce planning strategies. (See table 11.) Consistent with a key leading practice, top management at FAA sets the overall direction and goals of workforce planning. More specifically, top FAA management, including resource management groups and service area directors, conducts FAA's annual technician staff planning process, as discussed previously in this report. The resource management groups—ad hoc panels of district managers and representatives from administrative services and business services—make recommendations several times annually on the distribution of personnel and funding. The director of each of three service areas nationwide takes the recommendation made by that service area's resource management group and, in conjunction with the area's first-line managers, makes staffing allocation decisions. While FAA has involved top management in developing and implementing workforce strategies, it has not involved technicians, notwithstanding a key leading practice calling for the involvement of employees and other stakeholders. The Vice President of Technical Operations Services, who is responsible for technician workforce planning, told us that technicians have not been included in any technician workforce planning efforts. The president of PASS and participants in all 12 focus groups we held also said that technicians had not been involved in workforce planning activities. By not involving employees in strategic workforce planning efforts, FAA may miss opportunities to develop new synergies and ways to streamline processes and improve human capital strategies. FAA does not have a workforce planning communication strategy, a key practice designed to create shared expectations, promote transparency, and report progress. FAA has a strategic workforce plan for ATO, but it does not have one specifically for the technician workforce, although ATO has designated the technician workforce as mission-critical. In contrast, FAA does have a strategic workforce plan for its nearly 16,000 air traffic controllers, another mission-critical workforce within ATO and the only group of FAA employees larger than the technicians. Previous workforce planning documents for the technicians—including the National Airspace System Maintenance Workforce Plan, issued in July 2008—either primarily emphasized training or were never implemented. According to FAA officials, Technical Operations is in the process of collaboratively developing an accurate predictive staffing model, a draft of which will be completed in about another year.
Without a final and public technician workforce plan, FAA's approach to communicating about technician workforce planning has limited potential to create shared expectations, promote transparency, and report progress. The technicians we spoke with described what they perceived as a lack of management communication and support in the area of planning. They raised concerns about how FAA plans for and communicates staffing and planning decisions. Low morale, stemming from such concerns over management support and planning, could adversely affect FAA's hiring and retention of technicians in the future. We have reported previously that FAA's consistent ranking near the bottom in published lists of best places to work in the federal government (viewed as an indicator of employee morale) could pose challenges in recruiting, motivating, and retaining employees to replace those retiring and to meet current and future mission requirements. The PASS union president also expressed concerns about FAA's communication of information on policy changes and new technologies throughout the agency. He stated that technical bulletins come from other organizations but not from Technical Operations and that FAA does not coordinate among the lines of business, resulting in a "stovepipe effect." He suggested that FAA dedicate a person as a conduit for communication to help ensure that information gets passed along, which would help improve morale. In January 2010, PASS and Technical Operations management formed the Joint Leadership Team in a joint effort to rebuild their relationship and improve communication and collaboration. As part of this effort, through a contractor, PASS and Technical Operations management have conducted focus groups with employees, including management, to identify areas of concern that might affect employee morale. PASS and Technical Operations management plan to survey field technicians in the next few months to help identify opportunities to collaboratively address issues. One way to address leading organizations' key practices in the area of stakeholder involvement is to develop comprehensive workforce planning strategy documents, such as a workforce plan or policy statement, that reflect the human capital needs of an organization, any new initiatives or refinements to existing human capital approaches, and data on the organization's workforce profile. Without a written workforce planning strategy, a staffing model, and a more strategic approach that addresses succession planning issues such as developing new technicians as experienced technicians leave, FAA lacks a fully considered analysis of the appropriate number and composition of its technician workforce, and it may not be able to meet future maintenance demands. Moreover, as FAA transitions from legacy to NextGen systems, it risks having too many technicians with legacy skills and not enough with NextGen skills. As this transition occurs, strategic plans for identifying and responding to changes in needed competencies and potential gaps in knowledge and skills will be critical to ensure that FAA acquires or develops the needed human capital resources and makes full and efficient use of them. FAA at least partially follows key practices of leading organizations in its training and development for technicians. Table 12 presents our analysis of the extent to which FAA has incorporated these key practices.
FAA has partially implemented initiatives—such as establishing annual training goals and incorporating employees' developmental goals—to plan for strategic training and development of its technicians. (See table 13.) FAA is taking action to ensure that its training goals are consistent with its overall mission, goals, and culture in that it plans to train annually at least the minimum number of technicians that it believes it needs to maintain air traffic management facilities; however, FAA has not identified future training needs beyond the annual cycle and has only just begun to determine the critical skills and competencies that it will need to maintain NextGen systems.

In previous work, we have found that accountability mechanisms, such as an active training oversight committee and effective performance management systems, can help to ensure that sufficient attention is paid to planning for training and development needs and that those planning efforts are consistent with an agency's mission, goals, and culture. Line managers and supervisors can ensure that employees' training goals are consistent with the agency's overall mission and goals by keeping this alignment in mind as they work with employees to set training goals and approve employees' training requests.

For approximately the last 5 years, FAA has maintained a Technical Training Advisory Council, which includes training program support staff, a supervisory committee consisting of technician line managers, and representatives of three ATO service centers. The council meets in person four times a year and has a monthly teleconference to provide training feedback to ATO Technical Training and review the agency's training needs and goals. However, the president of PASS told us the union had not been approached in recent years to provide input into training planning and is not represented on the council. FAA training officials told us, and the PASS president confirmed, that the union provided a technician to work in Technical Operations Training to assist with training coordination between the two organizations through December 2009.

As discussed earlier in this report, FAA has begun to determine the critical skills and competencies that it will need to maintain NextGen systems; however, FAA officials stated that the agency has never previously had a robust competency model for technician training. With the transition to NextGen, technicians' training requirements—and thus critical skills and competencies—will increase, since technicians will have to learn how to maintain the new systems while remaining proficient in maintaining the legacy systems that FAA plans to continue operating indefinitely. The recent assignment of a staff member dedicated to compile technician job descriptions, tasks, and training courses for Technical Operations Training supports this effort to develop a skills and competency model.

Technical Operations Training does not have a formal process to identify future needed skills and competencies beyond those that will be required to maintain new systems that are turned over in the near term from system program offices, and it lacks a strategic training plan or other document that presents a business case for proposed investments in training and development. When assessing investment opportunities for its training plan, an agency should consider the competing demands it faces, the resources available to it, and how those demands can best be met with available resources.
Because FAA has not developed a longer-term strategic plan to prepare for impending retirements and determine how many technicians it will need to replace those who retire, the agency cannot determine how many technicians it will need to train in the future and what certifications will need to be replaced. For example, as mentioned earlier in this report, 686 full-performance-level technicians with multiple certifications will be eligible to retire by the end of fiscal year 2011, and those who do retire will be replaced by new technicians who might not acquire those skills and abilities for 2 to 3 years.

While FAA has a well-established process to identify current training and development needs annually and to prioritize training funds annually, its ability to plan longer-term training and funding is limited by Technical Operations Training's dependence on receiving timely and accurate planning information on FAA systems from the agency's individual program offices and NextGen office. Technical Operations Training officials told us they identify future training needs through coordination with the system program offices and independently monitor the status of new systems coming into the FAA inventory by reviewing the agency's Capital Investment Plan (CIP) to see when Technical Operations Training should start planning for training. However, opportunities for coordination with the program offices have decreased with recent organizational changes, according to FAA Academy officials. Formerly, the program offices initially coordinated the contracts for new systems and equipment, and the academy could work with the program offices to develop training while the contracts were being negotiated. Now, however, responsibilities for the contracts have been consolidated within FAA's acquisition management offices, and there is less coordination between the academy and the program offices. Academy officials recognize that training development and funding for that training must await equipment development, but they stated that coordination during the contracting and development process would be extremely beneficial.

To incorporate employees' developmental goals into the planning process, employees develop their course requests annually, in conjunction with their managers. The agency uses individual development plans to identify specific developmental needs and areas for further enrichment for each employee. Technical Operations Training officials stated that they do not solicit additional input for training planning and development from technicians themselves because the line managers are the best source of information on training needs for their facilities.

Technicians in our focus groups told us they have had some difficulty obtaining the training they need for several reasons. First, they said, some courses on legacy systems needed for advancement have not been available at the academy. Technical Operations Training officials acknowledged that some academy courses on legacy systems were prematurely canceled because their subject matter was incorporated in course offerings for new systems, and then these new courses were postponed because of delays in rolling out the new systems. For example, plans for deploying ERAM led Technical Operations Training to cancel the training on two legacy systems that ERAM incorporates—En Route Communications Gateway and the Display System Replacement—but then ERAM's deployment was delayed, and no courses were available on the two legacy systems.
Second, technicians said, the recent declines in technician staffing and a reduction in periodic maintenance under the reliability-centered maintenance approach have limited their ability to become familiar with new systems and acquire timely on-the-job training, as well as maintain proficiency in areas where they have already received training or gained experience. Finally, technicians told us, they often did not receive approval to attend the courses they had requested as a priority to meet their developmental needs. FAA officials stated that training requests are filled according to a facility's priority, which is determined through a number of factors, such as the minimum number of trained people it takes to maintain the facility, the size of the airport where the equipment is maintained, or the amount of equipment at a facility that is operationally essential to maintain air traffic control. When a facility receives approval for a technician to attend a course and the technician then cannot attend, Technical Operations Training prioritizes the remaining requests to determine which technician from which facility should go instead. FAA officials estimate that, on average, 98 percent of operationally essential training requests in recent years have been met. For example, in fiscal year 2009, there were 5,100 requests for training, and 5,100 slots were provided. However, academy officials estimated that 50 to 70 percent of the courses are not filled to capacity. As mentioned below, technicians are not able to attend all training classes they receive approval to attend because of workforce staffing issues at their facility.

According to our analysis, FAA has mostly developed the capacity to identify design and development initiatives to improve individual and agency performance. (See table 14.) FAA offers a mix of in-residence, centralized training at the academy and external, decentralized training at various locations provided by vendors whose equipment FAA has purchased. Training at the academy focuses on legacy equipment, while vendor courses are the primary source of training for next-generation systems coming into the FAA inventory. FAA is limited in its choice of training delivery mechanisms because of the unique and complex nature of air traffic control system components. For example, the unique configurations of and modifications to FAA generators make it difficult to replicate their features and teach technicians how to maintain them at a field office or vendor location rather than at the academy, according to academy officials. (Fig. 6 illustrates the variety of generators available in the training classroom at the FAA Academy.)

Overall, technicians in our focus groups maintained that, compared with the training offered at the academy, vendor training was less informative and more conceptual, offered less hands-on and problem-solving instruction, and was limited by proprietary considerations that restricted students' access to some information. Additionally, they said, vendors could not teach FAA-specific safety issues or explain how their systems interacted with other components of air traffic management systems. No vendor courses have been approved to replace academy training for legacy courses, although Technical Operations Training is studying the feasibility of having vendors provide certain courses that are currently offered through the academy and are filled to capacity.
For example, evaluations are under way to determine if the engine generator courses can be taught by approved colleges and universities. In deciding how to provide these courses, FAA is considering capacity, quality, and cost criteria. Specifically, FAA is assessing (1) whether the academy courses have a sufficient number of seats to fulfill the training requests; (2) whether the replacement courses meet FAA's standards for training in the applicable subject areas; and (3) how the costs of academy training would compare with the costs of tuition, a per diem allowance, and travel for training at a local junior college. Technical Operations Training officials told us they will need to make a business case to ATO management that there will be cost savings from college training as well as demonstrate that the technicians' training needs can be met with that approach. FAA is also comparing the merits of different training delivery mechanisms, such as computer-based simulation training, but had adopted no such mechanisms as of April 2010.

Some technicians told us that emerging FAA maintenance policies limit hands-on interaction with systems and that the combination of these policies and modifications to equipment over time makes the technicians feel they are no longer qualified to work on certain systems. For example, some technicians stated that because they have been away from training for so long, they are unable to apply their now-dated knowledge and skill when doing their work. Others stated that preventive maintenance checks served as critical refresher training and familiarization tools, and they raised concerns about the effects on their proficiency of less frequent preventive maintenance checks resulting from the change in maintenance philosophy. Technicians suggested that different methods, such as the use of simulator training or the addition of detailed visuals or photographs in training and system manuals, would greatly aid their job knowledge in lieu of the reduced hands-on training. Technical Operations Training officials said they were aware of this issue and intend to evaluate additional methods for technicians to maintain proficiency, including the use of online videos for specific pieces of equipment.

FAA has partially implemented practices—such as adjusting work schedules so that employees can participate in developmental activities and taking actions to foster an environment conducive to effective training and development—when implementing training and development for technicians. (See table 15.) FAA provides information on training opportunities to technicians but does not communicate the importance of training and development and its expectations for technicians in those areas. FAA publicizes training information through its comprehensive Web site, known as the FAA Information Superhighway for Training (FIST). FIST contains training and certification program information from Technical Operations Training, including policies and procedures, forms, course descriptions, and examinations. However, FAA does not use established mechanisms or written plans to communicate either the importance of training and development for technicians or its expectations for technician training and development programs to achieve results.
As previously noted, FAA does not have a strategic training plan for technicians, and the agency has not included any expectations for, or discussion of, technician training and development needs in its planning document for NextGen, the March 2010 NextGen Implementation Plan.

Primarily because of technicians' unique training requirements, FAA's options for paying for employee training and development are largely limited to academy-provided and vendor-provided training. In addition, technicians' workloads limit FAA's ability to adjust their schedules for training. Technicians told us their high workload and a lack of staff to cover the work in their absence impede their ability to take time from their positions to obtain training. Technical Operations Training officials confirmed that three to four times a week, on average, technicians who have requested and received approval to attend training have not been able to do so because of staffing issues at their facilities. In an effort to enhance training, increase technician proficiency, and avoid burdening technicians in the field while other technicians are in training, officials told us that they have been working to shorten the training time for certain technician courses by adjusting training methods and enhancing demonstrations of proficiency at the academy. In the case of one course, these efforts reduced the average time for certification from 240 days in fiscal year 2005 to 59 days in June 2007.

FAA does not consistently foster an environment conducive to Technical Operations Training's efforts to train and develop employees so that they can participate fully and apply new knowledge and skills when doing their work. For example, training is not always timed to coincide with the introduction of new systems. Technicians told us that they received training on ERAM—a foundation system for NextGen and one of the most recent additions to the technician curriculum—over 2 years ago, but the system has yet to come on line. FAA training officials confirmed that because of delays in the implementation of ERAM, some technicians were trained months and even years ago and have not touched the equipment since. As a result, Technical Operations Training is concerned about technicians' proficiency and is evaluating the need to retrain some staff on ERAM.

FAA has partially implemented practices—such as using some types of performance data to assess the results achieved and incorporating certain feedback perspectives—for evaluating its training and development of technicians. (See table 16.) Technical Operations Training has a formal evaluation program in place and amends the training or makes recommendations based on trends observed in student evaluations. For example, student critiques of ERAM training revealed that students had problems making the connection between academy equipment used in ERAM training and the actual equipment installed at their facilities. As a result, training officials recently identified a need to have training systems installed at the academy that would replicate the fielded systems whenever possible. Technical Operations Training also completed an audit on all technicians who had been trained in a course designated as a prerequisite to the current ERAM course.
Training officials concluded that by the time ERAM was delivered and ready for commissioning, up to 40 percent of the technicians who had completed the prerequisite training could have left the technician workforce and thus a new developmental course would be needed.

FAA partially uses quantitative or qualitative measures to assess technician training results by using end-of-course evaluations and follow-on evaluations, as discussed below. Successful organizations typically develop and implement human capital approaches based on a thorough assessment of their specific needs and capabilities. To assess the results achieved through training and development, agencies can rely on hard (quantitative) data, such as indicators of productivity or output, quality, costs, and time, or soft (qualitative) data, such as feedback on how well a training program met employees' expectations. While technicians provide feedback after completing a course, as discussed below, the additional use of quantitative data could help strengthen the linkages between training and development programs and improved performance.

FAA evaluates the effectiveness of its training efforts and incorporates formal evaluation feedback into the implementation of its training efforts, but it does not solicit or incorporate feedback from personnel other than line managers into its planning and design of technician training. Technicians are required to complete an evaluation for any course they attend before they can graduate from and become certified in that course. Additional evaluations go out 3 to 6 months after graduation to both the technicians and their supervisors for additional feedback. Technical Operations Training officials stated that the line managers who oversee the technicians, not the technicians themselves, are the critical training customers because the line managers are the best source of information on training needs for their facilities. FAA officials meet with these line managers to obtain training feedback through an FAA council that meets four times a year. However, to the extent possible, agencies need to ensure that they incorporate a wide variety of stakeholder perspectives in assessing the impact of training on employee and agency performance, including the receptiveness to and use of results from employees' feedback on developmental needs. Senior FAA training officials recognized that they needed to develop additional measures to address supervisory feedback and opinions on training and stated that FAA will develop such measures in the future.

In the past few years, the academy has provided hundreds of training classes to thousands of FAA technicians. These courses are taught both by academy staff and by contractors hired to assist with instruction. The costs that FAA identified for academy training include those for instructor services and those for student travel to and from the academy in Oklahoma City. As shown in table 17, the number of technicians who received training each year from fiscal year 2006 through fiscal year 2009 fluctuated, while the number of instructors providing that training declined slightly. Despite the small decline in the number of instructors, the cost for instructor services rose slightly over the 4 years, likely because of increases in the cost of salaries, benefits, and contractor fees. Overall, the data indicate that the instructor-based cost for academy training has remained fairly stable over the 4-year period, a result consistent with the relatively stable need for instructors.
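The per-student and percent-change comparisons in this section rest on simple arithmetic: dividing an annual cost total by the number of students and comparing the first and last fiscal years of the period. The sketch below is a minimal illustration in Python; the variable names and dollar figures are hypothetical values of our own, chosen only to mirror the direction and rough magnitude of the changes reported in tables 17 and 18, not FAA's actual cost data.

```python
# Minimal sketch of the per-student cost arithmetic behind the training cost
# trends discussed in this section. All figures are hypothetical, not FAA data.

def percent_change(old: float, new: float) -> float:
    """Percent change from an earlier value to a later one."""
    return (new - old) / old * 100.0

# Hypothetical annual totals for the first and last fiscal years of a period.
travel_cost_fy06, travel_cost_fy09 = 10_000_000, 13_400_000  # total student travel dollars
students_fy06, students_fy09 = 5_000, 5_700                  # student trips taken

# Per-student trip cost is total travel cost divided by the number of trips.
per_student_fy06 = travel_cost_fy06 / students_fy06
per_student_fy09 = travel_cost_fy09 / students_fy09

print(f"Total travel cost change: {percent_change(travel_cost_fy06, travel_cost_fy09):.0f}%")
print(f"Per-student cost change:  {percent_change(per_student_fy06, per_student_fy09):.0f}%")
```

With these illustrative inputs, total travel costs rise 34 percent while the per-student trip cost rises only about 18 percent, because the growth in the number of student trips absorbs part of the increase in total costs; the actual figures discussed below show the same pattern.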
Table 18 shows travel costs during the same 4 years for the students who attended academy courses. The total annual travel costs went up 34 percent from fiscal year 2006 through fiscal year 2009. During that period, the number of students attending training also rose, so the per-student trip cost increased less steeply, by 18 percent over the 4 years. In addition, according to Bureau of Labor Statistics data, airfares rose about 16 percent during that period, while hotel fees slightly decreased. Thus, it appears that the increases in the number of students and in airfares likely drove the increases in travel costs.

According to data provided by FAA, costs for vendors to provide training for technicians on new equipment have risen very quickly in the past few years. This increase has been associated with the rollout of new equipment related to the implementation of NextGen, which has created new training needs for technicians. For example, as shown in table 19, vendors began to offer courses for Digital Audio Legal Recorder (DALR) in 2007 and for ERAM in 2008. During the period of our review, the total number of vendor courses rose from fewer than 100 to over 200. Accordingly, training costs for vendor training have also grown substantially in the past few years. With other NextGen systems poised to go online in the near future, these costs may continue to rise as technicians require further training on other new equipment.

An FAA employee identified by the agency as a subject matter expert told us the agency's cost accounting system is unable to accumulate costs for travel to vendor training courses and report trends in those costs because the funds for that travel are derived from multiple sources—including the system program office, a centralized training fund, and in some cases the local facility.

FAA is subject to various laws and standards that have an effect on its development and use of cost information, including standards reported in the Statement of Federal Financial Accounting Standards (SFFAS) No. 4, Managerial Cost Accounting Standards and Concepts. While SFFAS No. 4 does not specify the programs, services, or activities that federal entities should determine costs for, such as travel for vendor-provided training, the standards focus on developing information to help management and Congress understand the costs of operations and make informed decisions. The standards also provide that often a combination of a cost accounting system and cost finding techniques should be used to provide the cost information that is needed to address specific issues that arise.

The lack of cost data available from FAA's cost accounting system or through cost analysis techniques to summarize travel to vendor training courses limits FAA's ability to manage the costs of such travel and evaluate all aspects of technician training costs and benefits. FAA could help provide information that addresses congressional concerns about the cost of in-house and vendor-provided training and of the travel related to those training activities by modifying its cost accounting system or cost finding techniques.
Technicians possess unique skills and are critical to the safety and efficiency of the nation's air transportation system, as well as the successful implementation of NextGen. FAA is not fully incorporating key leading practices, such as determining the critical skills and competencies that will be needed to achieve current and future results, in its strategic workforce planning for technicians. FAA does not have a comprehensive, written technician workforce strategy to help it identify and focus on the long-term technician human capital issues with the greatest potential to affect mission results. The lack of a written strategy limits transparency, and thus the ability to evaluate and measure performance, in FAA's workforce planning approach. Such a strategy would include, among other things, approaches to (1) identify the skills and competencies technicians need to address both current and future needs and (2) anticipate attrition and hire technicians with the requisite skills and abilities in time to accomplish agency missions, down to the facility level.

FAA's practice of hiring replacements for technicians only after a vacancy occurs leaves the agency vulnerable to skills imbalances, with inexperienced, newly certified technicians replacing seasoned veterans. While the contractual staffing minimum has deterred FAA from developing staffing requirements for the technician workforce, it does not prevent FAA from incorporating leading practices to provide a strategic focus for technician workforce planning. Not having such strategies raises the risk of adverse effects on the safety and efficiency of the nation's air transportation system.

Furthermore, the training that technicians receive could lack prioritization because FAA has not developed a strategic training plan. Such a plan would need to be aligned with a written technician workforce planning strategy and should incorporate key leading practices in training and development. Without adequate planning, agencies cannot establish priorities or determine the best ways to leverage investments to improve performance. Additionally, including input into planning for any future NextGen systems training from a wide variety of employees—such as FAA's NextGen Integration and Implementation Office, ATO's Technical Operations Training and Development Group, technician supervisors, technical experts, and technicians—could help FAA develop integrated ways to address specific performance gaps or incorporate necessary enhancements in the technician training curriculum. Such an inclusive approach could create opportunities to develop solutions that FAA might otherwise miss. Finally, the lack of cost data to summarize travel to vendor training courses does not allow FAA to fully develop information about the cost of in-house and vendor-provided training and of the travel related to those training activities and therefore limits FAA's ability to manage travel costs and evaluate all aspects of technician training costs and benefits.

To ensure that FAA can hire and retain the technician staff it needs to install, maintain, repair, and certify equipment and facilities in the national airspace system, in the current and NextGen environments, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following four actions:

1. develop and implement a comprehensive, written workforce strategy or policy for the technician workforce that incorporates the key leading practices in strategic workforce planning that FAA has not fully incorporated, such as determining the critical skills and competencies that will be needed to achieve current and future results;
2. develop and implement a strategic training plan that is aligned with a written technician workforce strategy and incorporates key leading practices in training and development that FAA has not fully incorporated, such as determining how training and development efforts are expected to contribute to improved performance and results;

3. improve planning for any future NextGen systems training by including input from FAA's NextGen Integration and Implementation Office, ATO's Technical Operations Training and Development Group, technician supervisors, technical experts, and technicians to develop an integrated way to address specific performance gaps or incorporate necessary enhancements in the technician training curriculum; and

4. consider modifying FAA's cost accounting system or cost analysis techniques to develop information about the cost of in-house and vendor-provided training and of the travel related to those training activities to assist Congress in understanding the costs of operations and making informed decisions.

We provided the Department of Transportation with a draft of this report for its review and comment. The department provided technical corrections, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and the Administrator of the Federal Aviation Administration. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix III.

This report addresses the Federal Aviation Administration's (FAA) processes for carrying out workforce planning and training for the agency's technician workforce. It describes the processes and discusses the extent to which FAA's efforts incorporate key leading practices in workforce planning and training and how the costs of technician training, including travel costs, have changed in recent years. Specifically, we addressed the following questions: (1) To what extent does FAA incorporate key practices of leading organizations in its workforce planning for technicians? (2) How does FAA's technician training compare with key practices of leading organizations? (3) How have the costs of technician training, including travel costs, changed in recent years?

To describe the composition of FAA's technician workforce, we obtained information on its nature and scope, including job descriptions and job series information; the current, historical, and projected population of technicians; hiring trends; the current and projected numbers of technicians eligible to retire; the number of technicians who retire when eligible; and data on the geographic locations of work stations. We summarized FAA and federal personnel data from the Office of Personnel Management's Central Personnel Data File (CPDF) on the technician workforce and developed trends in staffing and attrition for fiscal years 1999 through 2009, as well as retirement projections through fiscal year 2020. We assessed the reliability of the CPDF data by reviewing related documentation and determined that those data were of sufficient quality to be used for the purposes of this report.
We focused on the technicians in the 2101 job series because (1) according to FAA data about the technician workforce, the majority of technicians are in the 2101 job series and (2) the FAA reauthorization bill refers to systems specialists, the employees included in the 2101 job series.

To determine the extent to which FAA has incorporated key practices of leading organizations in its workforce planning and training for technicians, we sought to compare FAA's efforts with those of leading organizations. We selected key leading practices in these areas by reviewing, in conjunction with subject matter experts, our past work to identify those most applicable. To determine how FAA's technician-specific workforce planning and training components and practices compare with those of leading organizations, we reviewed FAA documents and regulations that detailed FAA policies and practices in the functional areas of workforce planning and training. We discussed the structure and processes of FAA's workforce planning and training for technicians with FAA officials responsible for implementing those human capital procedures within the Air Traffic Organization (ATO) line of business, where the technicians are located. We interviewed FAA officials at FAA headquarters in Washington, D.C., and at FAA's Training Academy in Oklahoma City, Oklahoma. Additionally, we obtained the perspectives of the bargaining unit that represents FAA technicians on FAA's workforce planning and training for technicians through semistructured interviews with representatives of the Professional Airways Safety Specialists (PASS), the employee union representing technicians.

We assessed the extent to which FAA followed each practice by applying the following scale: "Fully" indicated that, in our judgment, all or virtually all aspects of the practice were followed; "mostly" indicated that more than half but less than all or virtually all were followed; "partially" indicated that less than half but more than a few were followed; and "minimally" indicated that few or no aspects of the practice were followed. We conducted our comparison of FAA's practices with leading practices at a high level: More detailed comparisons could disclose specific leading practices that FAA is not following, beyond those discussed in this report. We did not assess the effectiveness of FAA's workforce planning, because factors other than FAA's human capital system may also affect FAA's performance.

To balance the views of FAA management and obtain perspectives of the technician workforce on FAA's workforce planning and training efforts, we conducted 12 focus group meetings with 101 FAA technicians and 12 academy managers at 11 locations. These meetings involved structured small-group discussions designed to gain more in-depth information about specific issues that cannot easily be obtained from single or serial interviews. Consistent with typical focus group methodologies, our design included multiple groups with varying characteristics but some similarity in experience and responsibility. Most groups involved 7 to 10 participants. Discussions were structured, guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences. Our overall objective in using a focus group approach was to obtain the views, insights, and feelings of FAA technicians on issues related to their workload, staffing, and training.
We conducted 12 separate focus group sessions—11 with FAA technicians, including a range of (1) technical specialties (Communications, Automation, Navigation, Environmental, and Surveillance/Radar), (2) experience (less senior and more senior staff), and (3) operating environments (e.g., air route traffic control center, terminal radar approach control, air traffic control tower, or general national airspace system (GNAS)). By including GNAS participants in the focus groups, we ensured that the perspectives of technicians who perform their duties at geographically distant, isolated, or smaller facilities were included. One additional focus group was held with academy managers from all areas of technician instruction. Table 20 identifies the specialties included in the focus groups at each location. We traveled to FAA facilities in Baltimore, Chicago, Dallas, Los Angeles, Miami, and Oklahoma City to conduct the focus groups.

We developed a guide to assist the moderator in leading the discussions. The guide helped the moderator address several topics related to workforce planning (staffing levels, workload issues, the Next Generation Air Transportation System, contract personnel, reliability-centered maintenance) and training (quality and quantity of training, FAA-provided and vendor-provided training). We assured participants of the anonymity of their responses, promising that their names would not be directly linked to their responses.

Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the focus group participants' attitudes on specific topics and to offer insights into their concerns about and support for an issue. The projectability of the information produced by our focus groups is limited for several reasons. First, the information includes only the responses of FAA technicians from the 11 selected groups. Second, while the composition of the groups was designed to ensure a range of specialties, experience, and operational environments, the groups were not randomly sampled. Third, participants were asked questions about their specific experiences with workload, staffing, and training. Other FAA technicians who did not participate in our focus groups may have different experiences. Because of these limitations, we did not rely entirely on focus groups, but rather used several different methodologies to corroborate and support our conclusions.

To determine how training funds, including travel funds, have changed in recent years, we obtained quantitative cost data (including travel costs) from ATO and FAA Academy officials from fiscal year 2005 through May 2010 and compared these data for FAA-provided and vendor-provided training. We also conducted semistructured interviews with FAA management about technician training costs. We analyzed student travel costs for academy training obtained from FAA's DELPHI system and data on the personnel compensation and benefits of academy instructors from the Federal Personnel and Payroll System.
However, because FAA's cost accounting system is not sufficient to provide costs for vendor training and travel-related activities, we analyzed data provided from FAA's Electronic Learning Management System (ELMS) to summarize the cost of vendor technician training. We presented the data provided by FAA even though the data are unaudited at the level of detail needed for the findings presented in table 19. As a result, this report includes a recommendation that FAA consider modifying its cost accounting system or cost analysis techniques to develop information about the cost of in-house and vendor-provided training, and of the travel related to those training activities. We assessed the reliability of the data we obtained electronically by reviewing relevant documentation and internal controls and by interviewing agency officials, and we determined that those data were of sufficient quality to be used for the purposes of this report.

To develop information on the occurrence and duration of scheduled and unscheduled outages, we obtained operational performance data from FAA for fiscal years 2000 through 2009. FAA outage data are collected in accordance with the reporting guidance contained in FAA Order 6040.15E, National Airspace Performance Reporting System, and are currently entered and stored in the Maintenance Management System. These data are validated and fed into the National Airspace System Performance Analysis System (NASPAS). NASPAS may be used for facility or service performance trend analysis. NASPAS is capable of extracting user-defined outage parameters, performing calculations, and generating graphics for report writing. To understand how such outages affect the national airspace system's efficiency, safety, and costs; industry; and the flying public, we conducted structured interviews with FAA and PASS officials. We assessed the reliability of the outage data by reviewing relevant documentation and interviewing agency officials, and determined that those data were of sufficient quality to be used for the purposes of this report.

We conducted this performance audit from May 2009 to October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Teresa Spisak, Assistant Director; Jessica A. Evans; Maren McAvoy; Taylor Reeves; Amy Abramowitz; Emily Biskup; Melinda Cordero; Peter DelToro; Bess Eisenstadt; Brandon Haller; Rich Hung; Bert Japikse; Steven Lozano; Colleen Phillips; Andrew Stavisky; and John Warner made significant contributions to this report.
Since 2006, air traffic control (ATC) equipment outages and failures at Federal Aviation Administration (FAA) facilities have caused hundreds of flight delays and raised questions about FAA's maintenance capabilities. About 6,100 technicians maintain FAA's current (legacy) facilities and equipment and will be responsible for the Next Generation (NextGen) technologies planned for the next 15 years. Safe and efficient air travel will therefore partly depend on FAA's having technicians with the right skills now and in the future.

As requested, GAO reviewed how (1) FAA incorporates key practices of leading organizations in its workforce planning for technicians, (2) FAA's technician training compares with key practices of leading organizations, and (3) the costs of technician training, including travel costs, have changed in recent years. GAO analyzed FAA workforce and training data, compared FAA planning and training practices with criteria identified in prior GAO work, and conducted focus group interviews with FAA technicians and FAA Training Academy instructors.

FAA has followed some key practices of leading organizations in its strategic workforce planning for technicians but lacks a comprehensive, written strategy to guide its efforts. GAO assessed whether FAA followed those practices fully, mostly, or partially, or did not follow them. For example, FAA partially follows one practice—determining critical skills and competencies—because it assesses those skills and competencies its technicians now have to maintain legacy systems, but has just begun to identify those they will need to maintain NextGen systems. FAA also partially develops strategies to close the gap between the technician workforce it needs and the one that it has: It determines staffing needs annually, but lacks a longer-term strategy to address the hundreds of technician retirements projected through 2020. Without a comprehensive, written technician workforce planning strategy, FAA does not have a transparent road map to acquire and retain the right number of technicians with the right skills at the right time. FAA mostly follows other leading workforce planning practices, although it only partially involves key stakeholders—managers, but not technicians—in workforce planning and may thus be missing opportunities for improvement.

FAA at least partially follows key practices of leading organizations in its strategic training and development for technicians, but it lacks a strategic training plan, and workload issues limit its ability to fully incorporate key leading practices. With the transition to NextGen, technicians will need to be trained both to maintain new systems and to remain proficient in maintaining the legacy systems that FAA plans to continue operating. FAA has partially implemented a strategic approach to planning for training in that it has established annual training goals and incorporated employees' developmental goals in its planning processes. As noted, however, it has just begun to identify the skills and competencies technicians will need to maintain NextGen systems. FAA mostly follows other key practices for design and development, such as developing a mix of in-house and vendor training. FAA is studying the feasibility of having vendors provide certain courses that are currently offered through the FAA Training Academy and are filled to capacity.
FAA partially follows leading practices for implementing training and development, but workload demands often limit technicians' opportunities to attend training. FAA also partially follows leading practices for demonstrating how training and development efforts contribute to improved performance and results. For example, FAA identifies annual training goals, but does not link them to specific performance goals. As a result, it is limited in its ability to assess the effectiveness of its investments in training.

Recent compensation costs for instructors at the FAA Training Academy have been roughly stable, while those for student travel to and from the academy and for training courses provided by vendors, exclusive of travel costs, have risen. The higher student travel costs reflect increases in airfares, and vendor training costs have grown as FAA has rolled out more courses for new equipment in preparation for the deployment of NextGen systems.

Among other things, FAA should develop a written technician workforce planning strategy that identifies needed skills and staffing, and a strategic training plan showing how training efforts contribute to performance goals. The Department of Transportation provided technical corrections.
The U.S. government has imposed numerous sanctions targeting Iran since 1987, in part to deter Iran from supporting terrorism and developing its nuclear program. U.S. laws and executive orders have established a U.S. trade and investment ban targeting Iran, have been used to impose sanctions against foreign entities that support Iranian terrorist organizations or proliferation activities, and have imposed financial sanctions targeting Iran. According to a Treasury official, the U.S. trade and investment ban was aimed at making it more difficult for Iran to procure U.S. goods, services, and technology, including those that could be used for terrorism or proliferation. In 1987, the United States enacted a ban on imports of Iranian goods and services, and in 1995, executive orders banned specified U.S. exports and investment in Iran. These prohibitions apply to U.S. persons, including U.S. companies and their foreign branches.

In 1996, Congress enacted the Iran Sanctions Act of 1996 (ISA), which authorized the imposition of sanctions on foreign firms that make certain investments in Iran's energy sector. In ISA, Congress declared that it is the policy of the United States to deny Iran the ability to support acts of international terrorism and to fund the development and acquisition of weapons of mass destruction and the means to deliver them by limiting the development of Iran's ability to explore for, extract, refine, or transport by pipeline its petroleum resources.

The UN and EU, as well as other countries, have also imposed sanctions to pressure Iran to suspend the development of its nuclear program and end its support for terrorism. In 2002, the International Atomic Energy Agency (IAEA) confirmed allegations that Iran was building facilities that could produce fissile material for the development of a nuclear weapon. After Iran failed to suspend its uranium enrichment program in 2006 pursuant to UN Security Council (UNSC) resolution 1696, the UNSC adopted resolutions that imposed several sanctions targeting Iran between 2006 and 2010. Following a UNSC determination that Iran had not suspended the development of its nuclear program, the UNSC adopted additional resolutions that imposed sanctions on Iran, including, among others,

• a proliferation-sensitive nuclear and ballistic missile programs-related embargo;

• a ban on the export or procurement of any arms and related material from Iran and a ban on the supply of seven categories, as specified, of conventional weapons and related material to Iran; and

• a travel ban and an assets freeze on designated persons and entities.

The assets freeze also applies to any individuals or entities acting on behalf of, or at the direction of, the designated persons and entities, and to entities owned or controlled by the designated persons or entities.

In addition to the UN, the EU has expressed serious and deepening concerns over the Iranian nuclear program, and has imposed sanctions targeting Iran since 2007. Recent sanctions that the EU enacted in 2012 imposed, among other things, restrictive measures on the energy sector, including a phased embargo of Iranian crude oil imports into the EU and financial sanctions against the Central Bank of Iran.
Specifically, recalling the potential connection between Iran's revenues derived from its energy sector and the funding of its proliferation-sensitive nuclear activities as underlined in UNSCR 1929, the sanctions prohibited the import, purchase, and transport of Iranian crude oil and petroleum products by member states. In addition, the EU has enacted targeted financial measures to freeze the assets of persons and entities associated with Iran's nuclear activities. The Council of the European Union decided on March 15, 2012, to prohibit the provision of specialized financial messaging services to certain persons and entities that are designated by the UN or EU, or have engaged in, supported, or been associated with Iran's proliferation-sensitive nuclear activities or the development of nuclear weapon delivery systems. In response to the council's decision, on May 17, 2012, the Belgium-based Society for Worldwide Interbank Financial Telecommunication (SWIFT) announced it would end all transactions with Iranian banks that had been designated by the EU. Figure 1 identifies selected U.S. and international actions targeting Iran.

U.S. law allows the export of certain agricultural goods, medicine, and medical devices to Iran under certain conditions. The Trade Sanctions Reform and Export Enhancement Act of 2000 (TSRA) required the President to terminate any unilateral agricultural or medical sanction. In addition, some of the laws and executive orders authorizing U.S. sanctions targeting Iran include language that allows for certain exceptions to the sanctions, such as for agricultural goods or medicine. For the purposes of this report, we refer to agricultural goods, medicine, and medical devices that are authorized for export to Iran as "humanitarian goods." Treasury's Office of Foreign Assets Control (OFAC) issues licenses that authorize the export and reexport of humanitarian goods pursuant to TSRA. OFAC indicated that it provides exporters with an efficient and expedited process to export humanitarian goods.

Legislation enacted by Congress and a number of executive orders issued since 2010 have established additional U.S. financial sanctions targeting Iran. According to Treasury, recent U.S. financial sanctions targeting Iran are authorized by, and outlined in, four laws and a number of executive orders. The discussion below provides examples of some of the financial sanctions authorized by these laws and executive orders from 2010 through 2012. According to an Under Secretary of the Treasury, the new legislation that Congress has enacted has increased financial and economic pressure on Iran.

In 2010, Congress passed the Comprehensive Iran Sanctions, Accountability, and Divestment Act of 2010 (CISADA) to amend the Iran Sanctions Act of 1996 and to enhance U.S. diplomatic efforts with respect to Iran by expanding economic sanctions targeting Iran. According to an Under Secretary of the Treasury, "CISADA set a new precedent" because "…[i]t gave the Secretary of the Treasury the authority for the first time to require U.S. banks to terminate correspondent banking relationships with foreign banks that knowingly engaged in significant transactions with designated Iranian banks."
Among other actions, section 104(c) of CISADA required the Secretary of the Treasury to prescribe regulations to prohibit or impose strict conditions on the opening or maintaining in the United States of a correspondent account or a payable-through account by a foreign financial institution found to have knowingly engaged in certain activities or facilitating a significant transaction by entities such as Iran's Islamic Revolutionary Guard Corps (IRGC). Furthermore, section 104(d) of CISADA required Treasury to "prescribe regulations to prohibit any person owned or controlled by a domestic financial institution from knowingly engaging in…transactions with or benefitting the [IRGC]," its agents, or its affiliates whose property or interests in property are blocked pursuant to the International Emergency Economic Powers Act (IEEPA). This provision in CISADA also extends certain monetary penalties under IEEPA (50 U.S.C. § 1705(b)) to domestic financial institutions if a person owned or controlled by the domestic financial institution violates the regulations and if the domestic financial institution knew, or should have known, about the violation.

In 2011, Congress enacted the National Defense Authorization Act for Fiscal Year 2012 (NDAA). The act required the President to block the property and interests in property, which is subject to U.S. jurisdiction, of all Iranian financial institutions, including the Central Bank of Iran. In addition, the act required the President to prohibit the opening, and prohibit or impose strict conditions on the maintenance, of a correspondent or payable-through account in the United States by a foreign financial institution found to have knowingly conducted or facilitated any significant financial transaction with the Central Bank of Iran or another designated Iranian financial institution. This sanction applies to foreign central banks only insofar as the transactions are related to the sale or purchase of petroleum or petroleum products to or from Iran. Moreover, the sanction applies to transactions related to the purchase of petroleum or petroleum products from Iran only if the President has determined that there is a sufficient supply of petroleum or petroleum products from countries other than Iran. Even then, the financial sanctions will not apply if the President determines that the country with primary jurisdiction over the foreign financial institution has significantly reduced its volume of crude oil purchases from Iran in a specific period. The President delegated the authority to determine whether a country has significantly reduced the volume of Iranian crude oil purchases in a specific period to the Secretary of State, in consultation with the Secretary of the Treasury, the Secretary of Energy, and the Director of National Intelligence.
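Because the NDAA provision layers conditions and exceptions, its application to a single transaction can be hard to follow in prose. The sketch below is our own schematic restatement in Python, not legal guidance or an official decision rule: the function and parameter names are ours, and it simplifies many statutory details, such as what counts as a "significant" transaction.

```python
# Schematic restatement (our simplification, not legal guidance) of the NDAA
# correspondent-account sanction logic described above, for one transaction
# by a foreign financial institution (FFI).

def ndaa_sanction_applies(
    significant_transaction_with_cbi: bool,        # knowingly conducted or facilitated
    ffi_is_foreign_central_bank: bool,
    petroleum_related: bool,                       # sale/purchase of petroleum (products)
    is_purchase_from_iran: bool,
    sufficient_non_iranian_supply: bool,           # Presidential determination
    country_significantly_reduced_purchases: bool, # determination for the FFI's country
) -> bool:
    if not significant_transaction_with_cbi:
        return False
    # Foreign central banks are covered only for petroleum-related transactions.
    if ffi_is_foreign_central_bank and not petroleum_related:
        return False
    if is_purchase_from_iran:
        # Purchases from Iran are covered only after a sufficient-supply finding.
        if not sufficient_non_iranian_supply:
            return False
        # Exception: the FFI's home country significantly reduced its Iranian
        # crude oil purchases in the specified period.
        if country_significantly_reduced_purchases:
            return False
    return True
```

The nesting makes clear why the significant-reduction determinations delegated to the Secretary of State matter: they are the final gate that can switch off an otherwise applicable sanction on petroleum purchases from Iran.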
In 2012, Congress passed the Iran Threat Reduction and Syria Human Rights Act of 2012 (TRA) to strengthen Iran sanctions laws for the purpose of compelling Iran to abandon its pursuit of nuclear weapons and other threatening activities and for other purposes. TRA expanded sanctions in a number of areas, including sanctions relating to Iran's energy sector. For example, the TRA amends CISADA by requiring the Secretary of the Treasury to revise the regulations prescribed under CISADA section 104(c) to apply, to the same extent that they apply to a foreign financial institution found to knowingly engage in an activity described in CISADA section 104(c)(2), to a foreign financial institution that the Secretary of the Treasury finds (1) knowingly facilitates, or participates or assists in, an activity described in section 104(c)(2) of CISADA; (2) attempts or conspires to facilitate or participate in such an activity; or (3) is owned or controlled by a foreign financial institution that the Secretary finds knowingly engages in such an activity.

Moreover, section 312 of the TRA also amended CISADA to require Treasury to determine whether the National Iranian Oil Company or the National Iranian Tanker Company is an agent or affiliate of the IRGC. On September 24, 2012, Treasury made a determination that the National Iranian Oil Company is an agent or affiliate of the IRGC. Although the National Iranian Oil Company was already subject to sanctions under Executive Order 13599 (see below), according to Treasury, the determination that the National Iranian Oil Company is an agent or affiliate of the IRGC carries additional consequences. According to Treasury, as a result of the TRA section 312 determination, the National Iranian Oil Company is now an agent or affiliate of the IRGC, as described by CISADA section 104(c), whose property or interests in property are blocked pursuant to IEEPA. Furthermore, foreign financial institutions determined to have knowingly facilitated a significant transaction for the National Iranian Oil Company could have prohibitions or the imposition of strict conditions placed on their opening or maintenance of correspondent or payable-through accounts in the United States.

IEEPA granted the President a number of authorities, including the blocking of a foreign country's or foreign national's property, to respond to any unusual and extraordinary threat to the national security, foreign policy, or economy of the United States. Administrations have invoked authority provided by IEEPA, as well as other authorities, to issue executive orders that provide for sanctions targeting Iran. The executive orders have imposed a number of sanctions, including a comprehensive trade and investment ban on Iran, and have been used to freeze the assets of parties designated for their engagement in proliferation or terrorism-related activities involving Iran. Recently, the Obama administration has issued the following executive orders to take additional steps to increase the sanctions on financial transactions relating to Iran:

• Executive Order 13599 (February 5, 2012). This executive order blocked the property, and interests in property, of the government of Iran, and any Iranian financial institutions, including the Central Bank of Iran, that are in the United States. According to the executive order, this was done "in light of the deceptive practices of the Central Bank of Iran and other Iranian banks to conceal transactions of sanctioned parties, the deficiencies in Iran's anti-money laundering regime and the weaknesses in its implementation, and the continuing and unacceptable risk posed to the international financial system by Iran's activities." As a result of this blocking, no property of the government of Iran that is under the jurisdiction of the United States can be transferred, paid, exported, withdrawn, or otherwise dealt in.
• Executive Order 13608 (May 1, 2012). This executive order authorized sanctions on a foreign person who has been determined to have facilitated deceptive transactions for or on behalf of any person subject to U.S. sanctions concerning Iran or Syria. The order defined “deceptive transaction” as any transaction where the identity of any person subject to United States sanctions concerning Iran or Syria is withheld or obscured from other participants in the transaction or any relevant regulatory authorities. Pursuant to the executive order, Treasury may prohibit all transactions or dealings, whether direct or indirect, involving a foreign person that it has determined to have facilitated deceptive transactions for, or on behalf of, any person subject to the requisite U.S. sanctions. According to Treasury, “[W]ith this new authority, Treasury now has the capability to publicly identify foreign individuals and entities that have engaged in these evasive and deceptive activities, and generally bar access to the U.S. financial and commercial systems.”
• Executive Order 13622 (July 30, 2012). This executive order authorized three new sanctions to be implemented by Treasury. First, the executive order authorized new sanctions on foreign financial institutions determined to have knowingly conducted or facilitated specified significant financial transactions with the National Iranian Oil Company or Naftiran Intertrade Company. Second, the executive order authorized sanctions against foreign financial institutions found to have knowingly conducted or facilitated significant transactions for the purchase or acquisition of petroleum, petroleum products, or petrochemical products from Iran. Under the executive order, foreign financial institutions that engage in the two aforementioned activities could be prohibited from opening or maintaining correspondent or payable-through accounts in the United States. Third, the executive order authorized Treasury to block the property of any person determined to have materially assisted, sponsored, or provided financial, material, or technological support for, or goods or services in support of, (1) the National Iranian Oil Company, the Naftiran Intertrade Company, or the Central Bank of Iran or (2) the purchase or acquisition of U.S. bank notes or precious metals by the government of Iran. According to the executive order, these actions were taken “in light of the government of Iran’s use of revenues from petroleum, petroleum products, and petrochemicals for illicit purposes; Iran’s continued attempts to evade international sanctions through deceptive practices; and the unacceptable risk posed to the international financial system by Iran’s activities.”
• Executive Order 13628 (October 9, 2012). This executive order, among other things, blocked a person’s property and interests in property in the United States or under the possession or control of a U.S. person once Treasury, in consultation with State, determines that the person has engaged in certain specified conduct. For example, the executive order blocked the property of a person determined to have knowingly transferred or facilitated the transfer of goods or technologies to Iran or any Iranian entity for use by the government of Iran to commit serious human rights abuses against the people of Iran. The executive order also prohibited any entity that is owned or controlled by a U.S.
person and established outside the United States from knowingly engaging in any transaction with the Iranian government if that transaction would be prohibited under specified executive orders if it were engaged in by a U.S. person or in the United States. U.S. government agencies and regulators administer and enforce U.S. financial sanctions targeting Iran with banks’ assistance. Treasury has primary responsibility for administering financial sanctions. State administers some investment and trade sanctions, principally energy sanctions, targeting Iran. Banks play an important role in the sanctions process by blocking transactions that are required to be blocked by U.S. law and reporting apparent violations to Treasury. The federal and state banking regulators ensure effective compliance with these sanctions programs by the banks that they regulate. Treasury and other U.S. agencies have enforced sanctions through a variety of actions, including issuing enforcement actions against entities that violate the sanctions. Specifically, since 2005, Treasury and Justice, in coordination with state and federal regulators, have taken actions against banks, assessing large financial settlements for systematic and willful violations of sanctions laws, including violations of Iran financial sanctions regulations. Table 1 lists the various U.S. entities involved in the administration and enforcement of U.S. financial sanctions targeting Iran, along with their respective roles and responsibilities. Treasury has primary responsibility for administering the finance-related provisions of recent U.S. sanctions authorities by developing regulations, conducting outreach to domestic and foreign financial regulators and financial institutions, and identifying apparent sanctions violations. Treasury also assesses the effects of financial sanctions on the Iranian economy. Regulations. OFAC developed and issued the Iranian Financial Sanctions Regulations to administer the financial sanctions enacted in July 2010 pursuant to CISADA. Treasury has amended the Iranian Financial Sanctions Regulations to implement additional legislation, such as section 1245 of the NDAA. While drafting regulations, and before publishing them, OFAC solicited input on the proposed regulations from other Treasury officials and from State. All U.S. persons must comply with the OFAC regulations, including all U.S. citizens, all persons and entities within the United States, and all U.S.-incorporated entities and their foreign branches. Outreach. According to Treasury, since 2010, Treasury officials have conducted outreach to more than 145 foreign financial institutions in more than 60 countries, as well as to foreign governments, regulators, and other trade groups and associations. U.S. consulate staff in Dubai informed us that Treasury officials made several trips to the United Arab Emirates to conduct outreach with financial institutions. Financial officials we met with in Dubai confirmed that Treasury had provided them with information on the new sanctions regulations under CISADA. According to Treasury officials, Treasury conducted this outreach to raise awareness of U.S. financial sanctions. Identification of violations. According to Treasury, OFAC continually compiles evidence and reviews information regarding potential sanctions violations from a variety of sources, including intelligence and public sources.
Treasury officials stated that OFAC identifies potential violations through a variety of means, including financial irregularities in bank reports, referrals from federal bank regulators, and self-disclosures of potential violations by banks. According to Treasury officials, when OFAC designates an entity because of its engagement in sanctionable activity, OFAC declassifies and uses a portion of the evidence in order to make the designation public. Assessments. Treasury regularly assesses the administration of sanctions and their impact on Iran. According to Treasury officials, Treasury gathers various sources of information to monitor and assess the impact of U.S. sanctions targeting Iran. Treasury officials indicated that they rely on Iranian press reports, input from banks and other financial institutions, Iranian economic indicators, and intelligence information, among other sources. According to U.S. consulate officers in Dubai, they monitor Iranian events and the Iranian economy, collecting information on trade, real estate, gold, and the volume of transactions in exchange houses in Iran. Treasury develops classified quarterly reports on the impacts of sanctions on Iran’s economy, trade, and other sectors. State is responsible for administering the significant reduction exception set forth in section 1245 of the NDAA. As described above, the act requires the President to prohibit the opening, and prohibit or impose strict conditions on the maintenance, of a correspondent or payable-through account in the United States by a foreign financial institution found to have knowingly conducted or facilitated any significant financial transaction with the Central Bank of Iran or another designated Iranian financial institution, subject to the petroleum-related conditions described earlier. However, the financial sanctions will not apply if the President determines that the country with primary jurisdiction over the foreign financial institution has significantly reduced its volume of crude oil purchases from Iran in a specific period; the President delegated this determination to the Secretary of State, in consultation with the Secretary of the Treasury, the Secretary of Energy, and the Director of National Intelligence. The Secretary of State’s determinations are based on an assessment of each country’s efforts to reduce the volume of crude oil imported from Iran. According to State, the Secretary of State considers various factors, including the quantity and percentage of the reduction in purchases of Iranian crude oil over the relevant period; termination of contracts for future delivery of Iranian crude oil; and other actions that demonstrate a commitment to substantially decrease such purchases.
On the basis of the assessment led by State, the Secretary of State granted exceptions to 20 countries, including China, Japan, the Republic of Korea, and India, for “significantly” reducing their volume of crude oil purchases from Iran since the enactment of the NDAA. Banks play an important role in the sanctions process by blocking property or interests in property that are required to be blocked under U.S. law and by reporting apparent violations to Treasury. Iran sanctions regulations generally require banks to block transactions that (1) are by, or on behalf of, a blocked individual or entity; (2) are to, or go through, a blocked entity; or (3) are in connection with a transaction in which a blocked individual or entity has an interest. Banks holding, receiving, or blocking transfers of blocked property must report to OFAC within 10 days of the property becoming blocked. Banks must place the assets or funds in a segregated interest-bearing account. In addition, banks may report apparent violations to Treasury. Treasury officials stated that once a bank discloses an apparent sanctions violation to Treasury, the bank often engages in a thorough review of its own past conduct and provides information to OFAC. According to OFAC officials, the bank generally presents an overview of its transactions and the context in which they occurred, and OFAC provides direction on where additional review by the bank is needed. After the disclosure, OFAC asks the bank to identify other recipients of the transaction information. After its review, OFAC determines whether to pursue enforcement. The civil penalty for violating the Iran financial sanctions regulations may be as much as $250,000 per violation or twice the amount of the transaction, whichever is greater. Designating entities. As part of its enforcement efforts, Treasury has used a range of actions to enforce sanctions targeting Iran, including designating entities for engaging in sanctionable activity related to Iran, imposing sanctions on financial institutions, and issuing enforcement actions against financial entities. For example, according to Treasury, OFAC publishes a list of individuals and entities that have been designated for engaging in certain conduct, as well as a list of individuals and entities owned or controlled by, or acting for or on behalf of, those previously listed individuals and entities. OFAC also identifies individuals and entities that are officials of, are owned or controlled by, or act on behalf of certain countries. OFAC blocks the assets of these entities and individuals and generally prohibits U.S. persons from dealing with them. According to Treasury, as of January 2013, OFAC had designated more than 360 individuals and entities, including banks, energy companies, and businesses, linked to Iran’s weapons-of-mass-destruction program and support for terrorism under various Iran-related executive orders. These designations included actions taken under Treasury’s executive order authorities related to the proliferation of weapons of mass destruction or their delivery systems and to international terrorism. Imposing sanctions. In July 2012, Treasury imposed sanctions under CISADA on two foreign financial institutions, the Bank of Kunlun (China) and Elaf Islamic Bank (Iraq), for knowingly facilitating significant transactions and providing significant financial services for designated Iranian banks.
According to Treasury documents, the action against the two banks effectively barred them from directly accessing the U.S. financial system. In addition, financial institutions may not open correspondent or payable-through accounts for Bank of Kunlun or Elaf Islamic Bank in the United States, and any financial institutions that held such accounts were required to close them within 10 days of the imposition of the sanction. Applying enforcement actions. OFAC has also issued enforcement actions against banks for violations or apparent violations of Iran sanctions regulations. From 2005 through 2012, OFAC imposed 45 civil penalties against banks for facilitating transactions in apparent violation of Iran sanctions regulations. The penalty and settlement amounts for apparent violations varied significantly. For example, in May 2006 OFAC announced a settlement with a bank for $3,352 in connection with an unauthorized funds transfer involving Iran. In June 2012, OFAC announced a $619 million settlement with ING Bank N.V. to address, in part, apparent violations of the Iranian Transactions Regulations, among other sanctions programs, over a number of years and involving a total of $1.6 billion in transactions. All enforcement actions published to date involve violations of Iran sanctions regulations enacted before 2007. Federal and state banking regulators have imposed enforcement actions concurrently, or in close coordination, with OFAC in cases of significant failures to comply with OFAC regulations. For example, in 2005 the Federal Reserve, FinCEN, the New York State Banking Department, the Illinois Department of Financial and Professional Regulation, and OFAC announced the assessment of penalties against the Dutch bank ABN AMRO based, in part, on OFAC violations. The agencies jointly assessed $75 million in penalties against the bank on the basis of findings that it participated in transactions that violated U.S. sanctions laws, as well as findings of the bank’s failures related to U.S. anti-money laundering laws and regulations and other banking laws. In a recent case, federal and state banking regulators did not impose enforcement actions at the same time. In August 2012, the New York State Department of Financial Services announced that Standard Chartered Bank had agreed to a settlement of $340 million and the implementation of remedial actions in connection with the omission of Iranian customer information from U.S. dollar payment messages sent to U.S. financial institutions with respect to 59,000 transactions that totaled approximately $250 billion. The regulator determined that the bank’s policies and procedures during the relevant period prevented examiners from performing complete safety and soundness examinations and from identifying suspicious patterns of activity that could, among other things, allow regulators to assist law enforcement authorities. In December 2012, OFAC announced a settlement with Standard Chartered for $132 million for apparent violations of U.S. sanctions laws and regulations. In a separate action, also in December 2012, the Federal Reserve imposed a $100 million civil money penalty against the bank and its New York branch, a portion of which related to unsafe and unsound banking practices associated with insufficient oversight of its compliance program for U.S. sanctions. From 2009 to 2012, Justice, through its Criminal Division, National Security Division, and U.S.
Attorney’s Offices, pursued criminal investigations against seven banks for potential violations of sanctions laws that involved transactions with Iran. All seven cases involved banks’ potential violations of IEEPA, under which it is a crime to violate, or attempt to violate, regulations issued under the statute. Criminal investigations against banks for sanctions violations were resolved through settlements that involved monetary forfeitures and deferred prosecution agreements (see table 2). Senior law enforcement officials cited threats to both national security and the integrity of the U.S. financial system posed by the banks’ misconduct. Furthermore, in each investigation, the bank systematically removed or obscured payment data that would have revealed the involvement of sanctioned countries and entities, including Iran. For example, in 2009, Credit Suisse AG agreed to a one-count filing in federal court that charged the bank with violating IEEPA. Justice determined that from 1995 through 2006, Credit Suisse AG in European locations deliberately removed material information, such as customer names, bank names, and addresses, from payment messages so that the wire transfers would pass undetected through filters at U.S. banks. Credit Suisse AG also gave its Iranian clients a pamphlet with detailed payment instructions on how to avoid triggering OFAC filters at U.S. banks. The scheme allowed U.S.-sanctioned countries and entities to move hundreds of millions of dollars through the U.S. financial system. In another investigation, Justice indicated that from the early 1990s until 2007, ING Bank N.V. violated U.S. law by illegally moving more than $2 billion through the U.S. financial system, via more than 20,000 transactions, on behalf of entities subject to U.S. economic sanctions, including Cuba and Iran. According to Justice, bank staff intentionally manipulated financial and trade transactions to remove references to Iran and other sanctioned countries to avoid detection by software filters used by unaffiliated banks in the United States. Similarly, in December 2012, both HSBC Holdings, PLC and HSBC Bank USA N.A. entered into a deferred prosecution agreement with Justice for violations of IEEPA and the Trading With the Enemy Act in connection with Iran and other sanctioned countries. Court documents indicated that from the mid-1990s through September 2006, HSBC Holdings, PLC allowed approximately $660 million in OFAC-prohibited transactions to be processed through U.S. financial institutions, including HSBC Bank USA N.A. According to an official from the Federal Reserve, HSBC Holdings, PLC permitted subsidiaries in Europe and the Middle East to follow instructions from sanctioned countries, including Iran, to omit and otherwise obscure their names from U.S. dollar payment messages sent to HSBC Bank USA N.A. and other financial institutions located in the United States. According to a senior Justice official, prosecutors sought to obtain appropriate dispositions of cases against banks for criminal violations of financial sanctions laws. Federal guidelines regarding prosecution of business organizations direct prosecutors to consider factors in addition to those normally considered in prosecuting individuals. These factors include the business’s timely and voluntary disclosure of the wrongdoing and its willingness to cooperate in the investigation.
In announcing the deferred prosecution agreements, Justice officials cited the banks’ remedial actions, willingness to accept responsibility, and significant cooperation during the investigations. The combination of the various U.S. and international trade, investment, and financial sanctions has adversely affected the Iranian economy and its future outlook. Our analysis indicates that the Iranian economy has consistently underperformed comparable peer countries across key economic indicators since the enactment of U.S. and international sanctions between 2010 and 2012. Furthermore, professional and International Monetary Fund (IMF) forecasters revised their projections of the Iranian economy after the enactment of sanctions to reflect deterioration in its expected performance. U.S. and EU exports of humanitarian goods to Iran increased in the first 10 months of 2012 compared with the same period in 2011, according to our analysis of trade data. According to open source reports, the government of Iran is attempting to adapt to the sanctions through various means, including using alternative payment mechanisms such as barter agreements, but thus far these agreements have not fully offset Iran’s reduced oil exports. U.S. and international sanctions have adversely affected the Iranian economy. Experts and U.S. officials have indicated that the sanctions have created a number of difficulties for the Iranian economy and that the financial sanctions have limited Iran’s ability to conduct trade and financial transactions. Following the enactment of sanctions beginning in 2010, Iran’s oil production, oil export revenue, and gross domestic product (GDP) have declined relative to comparable countries, and inflation has increased. Moreover, IMF and professional forecasters have downgraded their projections of Iranian economic performance to reflect a deterioration of the Iranian economy, specifically with regard to GDP, inflation, and unemployment, since the enactment of recent sanctions. U.S. and international sanctions have created a number of difficulties for the Iranian economy. Some experts stated that the deterioration in Iran’s recent economic performance resulted from a combination of sanctions, including U.S. and international sanctions, and economic mismanagement by the government of Iran. The recent sanctions are likely to have reduced Iran’s ability to ship and sell oil, an important component of the economy and historically a key source of foreign currency earnings and government revenue. U.S. financial sanctions have made receiving payment for oil and other exports more difficult. U.S. officials and representatives from financial institutions said that U.S. financial sanctions have increasingly denied Iran access to U.S. and international financial institutions, limiting its ability to finance trade and conduct other financial transactions, and increasing transaction costs. For example, according to officials from some international financial institutions, many foreign banks are unwilling to process transactions for Iranian businesses and citizens even when it is not clear that these transactions would trigger sanctions. In addition, as already noted, in 2012 Iranian banks designated by the EU were cut off from the largest financial messaging service, SWIFT, which processed more than 2 million financial messages for 29 Iranian financial institutions in 2011. To help isolate economic changes that are unique to Iran, we identified a set of comparable countries (peers) to serve as benchmarks for Iranian economic performance.
We identified 23 peers that were either countries in the same region as Iran or countries with a similar share of oil in their exports. We used this combined peer group to assess the performance of Iran’s oil market, GDP, and inflation. Oil production. Iranian oil production sharply diverged from peer oil production beginning in 2011 (see fig. 2). Iranian oil production has fallen by more than 16 percent since July 2010, while production by peers concurrently increased by roughly 4 percent, according to our analysis of data from the Energy Information Administration. However, significant deterioration in oil production and exports did not occur until 2012. According to our econometric analysis, oil production dropped by a statistically significant 26 percent more than expected (on an annualized basis) in 2012. Several aspects of the sanctions have reduced Iran’s ability to produce oil. U.S. officials and independent experts stated that U.S. and international sanctions have limited foreign investment in Iran’s oil and gas sectors. Furthermore, EU sanctions, including an embargo on Iranian oil imports as well as prohibitions on insurance for shipping of Iranian oil and petrochemicals, were adopted in January 2012. According to State, 20 countries reduced their volume of crude oil purchases from Iran after the passage of the NDAA. Revenue from oil exports. Since 2010, Iranian oil export revenue has declined while peers’ revenue has increased. According to our analysis of IMF data, Iranian oil export revenue is estimated to have declined by approximately 18 percent between 2010 and 2012, while peers’ combined oil export revenues are estimated to have increased by more than 50 percent over the same time period (see fig. 3). This reflects a large estimated decrease in Iran’s oil export revenue in 2012 relative to peers. According to open source reports, the International Energy Agency stated that Iranian oil exports declined from about 2.5 million barrels per day in 2011 to about 1.3 million barrels per day in late 2012. Declining export revenue is principally driven by lower estimates of oil exports, but lower prices may also be a factor. According to one expert we spoke with, Iran may be offering as much as a 10 percent discount from its official selling price to some customers. Revenue from oil exports is an important component of government revenue in Iran, and the IMF estimates that in 2012 Iran ran its largest budget deficit since 1998, at almost 3 percent of GDP. GDP. GDP, an aggregate measure of an economy’s production of goods and services, has increased less in Iran relative to peers since 2010 (see fig. 4). Because official estimates of GDP have not been available since 2010, we averaged estimates from the IMF and two private economic information services. The resulting consensus estimates indicate that the Iranian economy grew by 1.9 percent in 2011 and shrank by 1.4 percent in 2012. In contrast, Iran’s median peer economy grew by 4.2 percent in both 2011 and 2012. Inflation. Annual inflation in Iran, which has historically been higher and more volatile than inflation in peer countries, increased from almost 8 percent in 2010 to 27 percent in late 2012, while median peer inflation remained lower, between 4 and 6 percent (see fig. 5). According to our econometric analysis, inflation increased by a statistically significant 12.6 percentage points more than expected (on an annualized basis) in 2012.
As recently as 2010, Iran had reduced inflation to below 10 percent, down from nearly 30 percent in 2008. Higher inflation may also have been driven in part by higher transaction costs resulting from U.S. financial sanctions that made processing payments for imports more costly. One measure of the Iranian rial-dollar market exchange rate depreciated almost 70 percent from July 2010 to October 2012. The depreciating exchange rate increased the price of certain imported goods, which also likely contributed to the increase in inflation. In December 2010, the government of Iran introduced a reform of energy subsidies that increased energy prices and hence also had an impact on inflation. One expert has suggested that excessive money growth by the Central Bank of Iran also contributed to higher inflation. Three forecasters (IHS Global Insight, the IMF, and the Economist Intelligence Unit) have downgraded their forecasts of the Iranian economy to reflect a deterioration in Iran’s expected economic performance after the enactment of recent U.S. and international sanctions. We compared forecasts made before and after the latest round of sanctions and found that the updated forecasts predicted poorer performance on key macroeconomic indicators, such as Iranian GDP, inflation, and unemployment, between 2012 and 2016. For example, according to IHS Global Insight, Iran will continue to face declining oil output, plunging exports, surging prices, and a sharply weaker currency after 2012. Real GDP. Before the enactment of recent sanctions from July 2010 through 2012, the three forecasters predicted that between 2012 and 2016 the Iranian economy would grow, on average, by about 3.2 to 4.3 percent per year. However, in their updates, published in October and November 2012, all three forecasters predicted that Iran’s GDP would change, on average, by -0.5 to 0.8 percent annually over the same period (see fig. 6). According to IHS Global Insight, the U.S. and EU sanctions that target Iranian oil exports and the Central Bank of Iran are harsher and more punitive than previously enacted sanctions and will likely push the Iranian economy into recession. In particular, after updating its forecast in August 2012, IHS Global Insight expected the Iranian economy to contract by 2.0 percent in 2012 and by 1.3 percent in 2013. According to the IMF’s Regional Economic Outlook for the Middle East and Central Asia of November 2012, Iran’s oil production has declined owing to tightened U.S. sanctions and the EU oil embargo, lowering the country’s growth outlook. All three forecasters predicted that Iran’s crude production and exports would continue their downward trend as a result of the sanctions and that Iran would be heavily reliant on its Asian and Middle Eastern trading partners to purchase the crude oil available for export. Furthermore, the IMF’s Regional Economic Outlook projected that Iran’s gross official reserves would decline from $101.5 billion in 2011 to $89.2 billion in 2012 and $84.6 billion in 2013. Based on the IMF’s projections of Iran’s annual imports of goods and services in 2012 and 2013, the anticipated reserves would be less than the value of Iran’s annual imports. Although the forecasters projected that the negative trend in real GDP would likely reverse in or after 2013, the Economist Intelligence Unit, for example, did not take into account any future changes in current sanctions or the possible enactment of new sanctions. Inflation.
The forecasters revised their projected inflation rates for Iran to reflect a predicted future economic environment that is worse than originally projected (see fig. 7). Before recent sanctions were enacted, the average annual inflation rate predicted by IHS Global Insight, the IMF, and the Economist Intelligence Unit ranged between 10.0 and 16.3 percent for the period from 2012 to 2016. However, the revised forecasts predicted that inflation would average 19.0 to 21.0 percent for the same period. According to the three forecasters, the near-term inflation outlook for Iran has deteriorated in light of subsidy cuts, the collapsing value of the Iranian rial, and additional EU and U.S. sanctions. For example, according to the Economist Intelligence Unit, inflation will remain high, driven by the removal of subsidies and by sanctions, which are leading to a dramatic weakening of the unofficial value of the rial and surging prices for imports. Since Iran is a major consumer of refined petroleum, a domestic production shortage means that the country needs to import refined petroleum to meet demand, exacerbating its vulnerability to import price inflation. Furthermore, the Economist Intelligence Unit anticipated that in the face of declining government revenue, there is a risk that the authorities will print money to fund spending, which could feed an inflationary spiral. IHS Global Insight projected higher inflation over the next 5 years to reflect the move to further reduce, and ultimately eliminate, potentially costly government subsidies on food, utilities, education, and other goods and services. Unemployment. In addition to expecting the economy to shrink in the near term, the forecasters also revised their projections of the employment outlook for Iran (see fig. 8). Before the enactment of the recent U.S. and international sanctions, the three forecasters projected that from 2012 through 2016 the unemployment rate would average between 14.6 and 15.2 percent. After the enactment of recent U.S. and international sanctions from 2010 through 2012, the forecasters predicted a higher average unemployment rate for 2012 through 2016, ranging from 15.0 to 16.6 percent. All three forecasters anticipated a sustained high unemployment rate of 15 percent or higher. For example, the IMF forecast predicted that unemployment would increase to almost 19 percent by 2016. Our analysis indicates that EU and U.S. exports of humanitarian goods to Iran increased by about 35 percent, from $1.671 billion in the first 10 months of 2011 to $2.258 billion in the same period of 2012 (see table 3). The increase is largely due to U.S. exports of wheat and EU exports of wheat and barley. EU exports of medicine and medical devices remained relatively stable, but U.S. exports of those goods declined by approximately 11 percent over the same period. However, the United States has not been a major supplier of humanitarian goods to Iran; U.S. exports of humanitarian goods to Iran are about 10 percent of EU humanitarian exports to Iran. Since the enactment of recent U.S. and international sanctions from 2010 through 2012, the annualized growth rate of EU exports of humanitarian goods to Iran between 2010 and 2012 nearly tripled, to 18.5 percent, from the historical average of 6.6 percent in 2004 through 2009 (see fig. 9).
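The annualized growth rates cited here and below can be checked with the standard compound growth formula; the following is a minimal worked sketch, assuming simple compounding over a window of $T$ years with starting and ending export values $V_0$ and $V_T$ (the report does not state its exact computation):

\[
g = \left(\frac{V_T}{V_0}\right)^{1/T} - 1
\]

For the 10-month comparison above, with $T = 1$, $V_0 = 1.671$, and $V_T = 2.258$ (billions of dollars), $g = (2.258/1.671) - 1 \approx 0.351$, consistent with the roughly 35 percent increase reported.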
In addition, from 2010 through the third quarter of 2012, EU exports of medicine and medical devices grew at an annualized rate of about 11.2 percent, compared with about 0.6 percent from 2004 through 2009. Moreover, EU exports of agricultural goods grew at an annualized rate of 31 percent from 2010 through the third quarter of 2012. Similarly, U.S. humanitarian exports to Iran have increased at an annualized rate of about 10 percent since 2010 (see fig. 10). EU and U.S. agricultural exports increased in the second half of 2008, owing to increased wheat exports that assisted Iran in coping with a drought that had affected its agricultural sector. Official UN and open source reports have raised concerns regarding the availability of humanitarian goods in Iran as a result of the U.S. and international sanctions. According to a 2012 UN report, the sanctions targeting Iran have had significant impacts on the general population, including causing a shortage of necessary items, such as medicines. The UN also reported that some nongovernmental organizations operating in Iran have reported that people do not have access to life-saving medicines. In addition, a report published by the Wilson Center in February 2013 stated that sanctions are “causing disruptions in the supply of medicine and medical equipment in Iran.” Foreign financial and business officials in Dubai informed us in September 2012 that sanctions may have adversely affected some Iranian citizens and businesses. Some of these officials stated that sanctions may have limited the export of some humanitarian goods, such as food and medicine, to Iran. For example, one business official indicated that the recent financial sanctions had significantly limited his ability to export food to Iran because foreign banks were unwilling to process transactions for Iranian businesses. Some open source reports have noted that economic mismanagement and insufficient funding for medicines by the Iranian government have exacerbated the shortage of medicines in Iran. According to open sources, the government of Iran has made efforts to adapt to U.S. and international sanctions in a number of ways, including using alternative payment mechanisms such as barter agreements and changing its trading partners. Open sources report that Iran is selling oil at a discount to a number of customers and is accepting other countries’ currencies as payment, which may limit its ability to use the revenue for anything other than purchasing products in those countries. For example, open sources reported that Iran has entered into barter agreements with countries including India, exchanging oil for food, medicine, and commercial products in lieu of using traditional payment methods. According to an international energy market expert, while the barter arrangements allow Iran to continue selling oil to other countries without accessing international financial institutions, such arrangements may also limit Iran’s ability to receive the full market value of its oil. Furthermore, as the EU and some countries, such as South Korea and Japan, have significantly reduced their purchases of Iranian oil in response to EU and U.S. sanctions, open source reports indicate that Iran has attempted to reach agreements under which India, Pakistan, and other countries would purchase Iranian oil. However, these recent agreements have thus far not fully offset the reduced exports to the EU and others.
According to open source reports, the International Energy Agency stated that Iranian oil exports declined from about 2.5 million barrels per day in 2011 to about 1.3 million barrels per day in late 2012. Although Iranian oil exports have declined, trade data from certain countries show that their exports to Iran have increased relative to levels before 2010. In 2008 and 2009, before the enactment of recent U.S. and international sanctions in 2010 through 2012, these countries’ average aggregate quarterly exports to Iran were about $15.5 billion. During the first half of 2012, quarterly exports to Iran from the same countries were $20.4 billion, despite the recent U.S. and international financial sanctions targeting Iran. Table 4 shows that the share of EU exports to Iran has decreased while the shares of Turkish and United Arab Emirati exports have markedly increased. The U.S. share of Iran’s imports has remained at 1 percent or less. We provided a draft of our report to Treasury, State, Justice, the Board of Governors of the Federal Reserve, the Office of the Comptroller of the Currency, and the International Monetary Fund for their review and comment. The agencies and organizations did not provide official comments on the report. Treasury, the Board of Governors of the Federal Reserve, the Office of the Comptroller of the Currency, and the International Monetary Fund provided technical comments on the draft, which we incorporated in the report, as appropriate. We are sending copies of this report to interested congressional committees, the secretaries and agency heads of the departments addressed in this report, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To describe recent laws and executive orders that have added to the Department of the Treasury’s (Treasury) authority to implement financial sanctions targeting Iran, we reviewed the public laws and executive orders that define these sanctions, as well as the regulations developed to administer them. We spoke with Treasury officials to identify laws enacted and executive orders issued from 2010 through 2012 that added to Treasury’s authority to administer and enforce financial sanctions targeting Iran. Treasury officials identified four primary laws and four executive orders that authorized the financial sanctions targeting Iran. We focused primarily on those financial sanctions targeting Iran that are defined in laws, regulations, or executive orders and that either (1) block the property of designated entities or (2) target a financial transaction as an action that can result in the prohibition of the opening, or the prohibition or imposition of strict conditions on the maintenance, of a correspondent or payable-through account in the United States by a foreign financial institution. We discussed the sanctions with officials from Treasury and the Department of State (State), and we reviewed official statements and press releases on the content and purpose of the sanctions. We also reviewed selected financial sanctions targeting Iran enacted by the United Nations (UN) and European Union (EU). To describe U.S. efforts to administer U.S.
financial sanctions targeting Iran, we reviewed Treasury regulations and guidance establishing the process for administering the sanctions. We reviewed the Iranian Financial Sanctions Regulations, the Iranian Transactions and Sanctions Regulations, and additional sanctions guidance and documents developed and published by Treasury. We spoke with Treasury officials to discuss the agency’s administration of financial sanctions through various activities, including its development of regulations, outreach to banks and financial institutions, review of financial transactions, identification of potential violations, and assessment of the impact of financial sanctions. We also interviewed State officials regarding the department’s process for granting exceptions under section 1245 of the NDAA. To describe the efforts of the U.S. government and banks to ensure compliance with the financial sanctions targeting Iran, we reviewed the Bank Secrecy Act, as amended, and the examination procedures used by the regulators to assess banks’ compliance with Bank Secrecy Act and Office of Foreign Assets Control (OFAC)-related requirements, which include guidance on the establishment and maintenance of an effective OFAC compliance program. We also reviewed available data from the regulators on the numbers of Bank Secrecy Act examinations conducted during fiscal years 2010–2012, which generally included reviews of banks’ OFAC compliance programs. We interviewed officials from the Board of Governors of the Federal Reserve System and the Office of the Comptroller of the Currency to discuss the bank examination process regarding OFAC compliance programs. We also spoke with representatives from the American Bankers Association and the Institute of International Bankers to discuss the role that banks play in the administration of financial sanctions and the programs that banks establish to comply with OFAC reporting guidelines. To describe U.S. efforts to enforce financial sanctions targeting Iran, we interviewed officials from Treasury, State, the Department of Justice (Justice), and federal banking regulatory agencies to identify the methods and activities that the agencies used for enforcement. We reviewed the Specially Designated Nationals list, which Treasury publishes, to determine the number of entities that Treasury designated under U.S. financial sanctions targeting Iran. We reviewed OFAC guidance on the enforcement of financial sanctions. We also reviewed documents on the federal banking regulators’ enforcement actions against banks involving OFAC compliance issues. We additionally reviewed court documents and press releases regarding enforcement actions taken by Justice in response to banks’ criminal violations of financial sanctions. To assess Iranian economic performance, we identified a group of peer economies, which helped us to isolate economic changes that are unique to Iran, though not necessarily to identify the impact of sanctions. The peer group we identified includes countries in the International Monetary Fund’s (IMF) Middle East and North Africa region, neighboring countries not included in that region, and oil export-dependent countries outside the region.
The peer group comprises Algeria, Angola, Armenia, Azerbaijan, Bahrain, Djibouti, Egypt, Equatorial Guinea, Gabon, Jordan, Kuwait, Mauritania, Morocco, Oman, Panama, Republic of Congo, Qatar, Saudi Arabia, Tunisia, Turkey, Turkmenistan, United Arab Emirates, and Venezuela. The group excludes Afghanistan, Chad, Iraq, Lebanon, Libya, Nigeria, Pakistan, Sudan, Syria, and Yemen, countries that were rated very high on the Fund for Peace Failed States Index or very low on the Institute for Economics and Peace Global Peace Index in 2011 or 2012. We assessed the performance of the Iranian oil market (oil production and oil export revenue), gross domestic product, and consumer price inflation against that of the peer group, using data from IMF databases (World Economic Outlook and International Financial Statistics), the Energy Information Administration (International Energy Statistics database), IHS Global Insight, and the Economist Intelligence Unit. We assessed the reliability of these data and found that they were sufficiently reliable for identifying peers for the Iranian economy and assessing Iran’s economic performance. For example, we corroborated data from multiple sources and spoke with cognizant officials and experts to confirm the reliability of the data. Because of concerns about Iranian economic data, we relied on third-party data and estimates to a large extent, and considered the published views of the IMF on Iranian inflation data, whose original source was the Central Bank of Iran. If, as some suggest, Iranian official statistics underestimate inflation, our results with respect to inflation are conservative. In addition to conducting simple peer comparisons, we conducted a more rigorous econometric analysis that controlled for historical trends in Iranian oil production as well as contemporaneous changes in peers’ oil production. We interpreted the results of our analysis in light of expert views, contemporaneous events including U.S. and EU sanctions, and certain domestic policies in Iran. In most instances we did not attempt to isolate the impact of U.S. financial sanctions. The contemporaneous implementation of many sanctions, including U.S., UN, and EU financial and non-financial sanctions from 2010 through 2012, would make attributing certain outcomes to any particular sanction very difficult. For a complete description of our peer group selection and econometric analysis, see appendix III. To assess the impact of the sanctions on the projected future performance of the Iranian economy, we reviewed the forecasts that three sources (the IMF’s World Economic Outlook, IHS Global Insight, and the Economist Intelligence Unit) developed to predict the performance of Iran’s economy from 2012 through 2016. We reviewed the forecasts that each source developed before the enactment of the most recent U.S. and international sanctions, and we compared the results with forecasts published in October and November 2012 to identify changes in the predicted performance of the Iranian economy. To compile the original forecasts, we used IHS Global Insight data for June 2010 and the IMF World Economic Outlook estimates for April 2010, with the exception of the predicted unemployment rate, which came from the September 2011 World Economic Outlook database. We also averaged two forecasts developed by the Economist Intelligence Unit, from March 2010 and October 2010, to establish a baseline forecast of the performance of Iran’s economy before the enactment of the recent sanctions.
For the updated forecasts, we used the November 2012 IHS Global Insight data, the October 2012 IMF World Economic Outlook database, and the November 2012 Economist Intelligence Unit forecasts. To identify the efforts of the government of Iran to adapt to the U.S. and international sanctions, we reviewed U.S. government statements regarding the impact of sanctions on Iran in publicly available testimonies, speeches, and other remarks made by U.S. officials from State, Treasury, and the White House. We reviewed these statements for the U.S. government’s position on the impact of sanctions on Iran, factors that might lessen that impact, the influence of international sanctions, and ways in which Iran was adapting to the sanctions. We interviewed U.S. government officials, as well as academic and independent experts, regarding the extent to which sanctions targeting Iran have affected the Iranian economy, the Iranian government, and business with Iran. In addition, we reviewed open source and media reports regarding the effect of U.S. and international sanctions on Iran. To review the impact of sanctions targeting Iran on the availability of humanitarian goods to Iran, we reviewed official UN and open source reports about access to such goods in Iran. In addition, since the United Arab Emirates is one of Iran’s largest trading partners, we met with several business officials in Dubai, United Arab Emirates, to discuss the effect that sanctions have had on business with Iran and the resulting impact on Iranian citizens and the availability of humanitarian goods. To analyze the export of humanitarian goods to Iran, we analyzed U.S. and EU trade data between January 2004 and October 2012. For the purposes of this report, we defined “humanitarian goods” as those goods authorized for export by the Iranian Transactions Regulations as of October 2011. The regulations defined agricultural goods to include items that are intended to be consumed by and provide nutrition to humans or animals in Iran, including vitamins and minerals, bottled drinking water, and seeds that germinate into items that are intended to be consumed by and provide nutrition to humans or animals in Iran. Agricultural goods did not include alcoholic beverages, cigarettes, gum, or fertilizer. Medicine and medical devices consisted of medical supplies, equipment, instruments, and ambulances, as well as medicines, which include prescription and over-the-counter medicines for humans and animals. We used a U.S. Census-defined concordance between the North American Industry Classification System used by the United States and the Harmonized Commodity Description and Coding System used by the European Union. We performed our selection of humanitarian goods at the two-, four-, and five-digit levels of the harmonized system codes, as appropriate (see the illustrative sketch below). For the trend analysis since January 2004, we also performed a sensitivity check by using the definition of authorized agricultural exports to Iran stated in the Export Administration Regulations as of July 2001. These regulations included tobacco and tobacco products, beer, wine, and spirits, livestock, fertilizer, and reproductive materials in the list of authorized agricultural exports. We found that those categories of products did not have a significant impact on our analysis, and we decided to use a consistent definition for our short-term 10-month comparison between 2011 and 2012 exports, as well as our longer-term trend analysis.
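The following is a hypothetical sketch, not GAO's actual code, of how a prefix-based selection of goods at mixed Harmonized System (HS) code levels can work. The prefixes shown are illustrative examples only (HS chapter 10 covers cereals, heading 3004 covers packaged medicaments, and heading 9018 covers medical instruments); the report's full selection list is not reproduced here.

```python
# Hypothetical mapping from HS code prefixes to humanitarian categories.
HUMANITARIAN_PREFIXES = {
    "10": "agricultural",      # chapter 10: cereals (e.g., wheat, barley)
    "3004": "medicine",        # heading 3004: medicaments in measured doses
    "9018": "medical device",  # heading 9018: medical/surgical instruments
}

def classify(hs_code: str) -> str | None:
    """Return the humanitarian category for an HS code, matching at the
    five-, four-, or two-digit level (most specific first), or None if
    the code is not in the selection."""
    for length in (5, 4, 2):
        category = HUMANITARIAN_PREFIXES.get(hs_code[:length])
        if category is not None:
            return category
    return None

print(classify("300490"))  # "medicine"
print(classify("100199"))  # "agricultural"
print(classify("271019"))  # None: refined petroleum is not selected
```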
In addition, the narrower scope of the authorized agricultural exports as stated in the updated regulations provided a more precise definition of humanitarian goods. To ensure that we did not overlook any authorized agricultural commodities or medicine and medical devices exported by the United States to Iran, we also reviewed OFAC data on export licenses issued to U.S. businesses that allowed the export of these goods to Iran between 2009 and 2012. We conducted this performance audit from February 2012 to February 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In this appendix we describe the process we used to identify peers for the Iranian economy, the econometric approach we used to determine the magnitude and statistical significance of recent changes in several economic indicators for Iran, and the results of this analysis. To help understand economic changes occurring uniquely in Iran, we identified a set of peer countries to approximate a control group. We identified (1) regional peers and (2) oil-exporting peers, and then pooled the two groups to form a single peer group. To identify regional peers, we chose countries in the International Monetary Fund’s (IMF) Middle East and North Africa peer group and other countries that bordered Iran but were not in the group. To identify countries whose dependence on oil exports is similar to Iran’s, we calculated Iran’s oil exports as a percentage of goods exports (roughly 86 percent), and then considered any country to be an oil-exporting peer if its oil exports were more than 75 percent of goods exports. To remove certain countries that experienced significant instability associated with civil conflict or political violence (e.g., certain countries associated with the “Arab Spring”), we excluded countries from the peer group if they exceeded certain thresholds on the Fund for Peace Failed States Index or the Institute for Economics and Peace Global Peace Index in 2011 or 2012. We then combined into a single peer group the countries that we had identified with the two methodologies (see table 5). We estimated several panel data difference-in-difference models on the growth rates of two macroeconomic indicators: oil production and consumer prices. While the dependent variable varies, the independent variables are the same across models: an intercept and month and country fixed effects. We assumed a robust covariance structure that allows for heteroskedasticity (volatility could vary over time or across countries) and for serial correlation of the errors within a country. In addition, we estimated two variations based on different “sanctions dummies” for Iran that correspond to two key financial sanctions laws: the Comprehensive Iran Sanctions, Accountability, and Divestment Act of 2010 (CISADA), passed in July 2010, and the National Defense Authorization Act for Fiscal Year 2012 (NDAA), passed in December 2011. As a result, the post-sanction dummies equal 1 for observations on Iran from August 2010 onward in the case of CISADA and from January 2012 onward in the case of NDAA.
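The following is a minimal sketch of this estimation, not GAO's actual code, assuming a hypothetical monthly panel file (panel.csv, with country, month, and growth-rate columns) and the linearmodels Python package:

```python
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("panel.csv", parse_dates=["month"])

# Sanctions dummy: 1 for Iran from August 2010 onward (post-CISADA);
# for the NDAA variation, use January 2012 as the start instead.
df["post_cisada"] = (
    (df["country"] == "Iran") & (df["month"] >= "2010-08-01")
).astype(int)

# Country and month fixed effects absorb level differences across
# countries and common shocks in each month; the coefficient on the
# dummy is then the difference-in-difference estimate for Iran.
panel = df.set_index(["country", "month"])
model = PanelOLS(
    panel["oil_growth"],     # dependent variable: a growth rate
    panel[["post_cisada"]],  # sanctions dummy
    entity_effects=True,     # country fixed effects
    time_effects=True,       # month fixed effects
)

# Clustering by country permits heteroskedasticity and serial
# correlation of the errors within a country, as described above.
results = model.fit(cov_type="clustered", cluster_entity=True)
print(results.params["post_cisada"], results.pvalues["post_cisada"])
```

The same structure applies to the consumer price models by swapping in the inflation growth rate as the dependent variable.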
For the NDAA dummy in particular, we recognize that European Union (EU) sanctions related to insurance and an EU oil embargo are contemporaneous with NDAA financial sanctions. Furthermore, we recognize that this approach does not identify the impact of sanctions. We do not control for other macroeconomic or idiosyncratic (time-country specific) factors. We also recognize that we lack detailed institutional knowledge of idiosyncratic factors across all of the countries in our sample. However, we argue that other factors we might attempt to control for are likely to be endogenous to the sanctions. For example, one would typically include the growth of the money supply and the output gap in a regression designed to explain inflation. However, both of these factors could be influenced by the sanctions or by policy responses to the sanctions; therefore, by including them we could underestimate the role of sanctions. We estimated all models with data from February 2000 to the most recently available month at the time of the analysis (June 2012 or July 2012). Changes in the Iranian economic indicators we analyzed were consistently statistically significant during the time period associated with recent U.S. financial sanctions, and the measured effects (coefficients) were of magnitudes that were economically meaningful. The size of the effect is larger in the post-NDAA time period (which also includes EU sanctions related to oil and insurance) than in the post-CISADA period. Although this is not necessarily a measure of the impact of U.S. and international sanctions, it does indicate that the recent deterioration in the Iranian economy is larger than what one would expect relative to the historical trends and volatility of Iran and its peers. The increase in the inflation rate is statistically significant and large, indicating that inflation is significantly higher than one would expect during the post-CISADA and post-NDAA time periods (see table 6). The effect in 2012 (the post-NDAA period) is slightly larger: 12.6 percentage points versus 10.2 percentage points in the post-CISADA period. The energy subsidy reform initiated in December 2010, which raised energy prices, is likely to have contributed to higher inflation during this time period. U.S. and international sanctions may have contributed to higher transaction costs, higher import prices, and a lower exchange rate, all of which could increase inflation. The decline in oil production is also statistically significant and large, indicating that oil production fell significantly more than one would expect during the post-CISADA and post-NDAA time periods (see table 7). The effect in 2012 (the post-NDAA period) is much larger: 26 percentage points versus 9 percentage points in the post-CISADA period. U.S. and international sanctions, such as the EU embargo on oil from Iran, may have made it more difficult to attract investment in Iran’s oil sector, more difficult to sell oil on international markets, and more difficult to receive payment for oil Iran was able to sell, all of which could decrease oil production. We estimated several additional models to assess the robustness of our results. In one instance, we allowed the Iran dummy variables representing the post-CISADA and post-NDAA time periods to vary over time, beginning in January 2010. The coefficients on the dummy variables were larger and more likely to be statistically significant during the post-CISADA and, especially, post-NDAA time periods.
We also estimated models with alternative error structures that allow for more general heteroskedasticity or for contemporaneous correlation across countries, respectively, and our results were substantively unchanged. Iran's initial efforts to develop nuclear energy technology began in the 1950s with assistance from the United States through President Eisenhower's Atoms for Peace program. Iran's nuclear energy program accelerated during the mid-1970s through the efforts of Shah Mohammad Reza Pahlavi. However, not much was publicly known of the extent of Iran's nuclear capability until 2002, when the International Atomic Energy Agency (IAEA) was informed of a previously undeclared nuclear enrichment plant in Natanz and a heavy water plant in Arak. Subsequent IAEA inspections revealed that Iran had already made significant progress toward mastering the technology needed to make enriched uranium, a material that can be used to fuel nuclear weapons. IAEA inspectors reported that they were unable to conclude that Iran's program was exclusively peaceful. Under the terms of the Paris Agreement, negotiated in 2004, Iran voluntarily suspended its uranium enrichment program. In August 2005, coinciding with President Ahmadinejad's assumption of power, Iran resumed its enrichment program. In response, IAEA reported these actions to the UN Security Council (UNSC). This resulted in UNSC Resolution 1696, adopted under Article 40 of the UN Charter, which demanded that Iran suspend its uranium enrichment and reprocessing activities. The resolution requested that the IAEA complete a report by August 31, 2006, on whether Iran had suspended its enrichment activities. The August IAEA report concluded that Iran had not suspended its enrichment activities and had not addressed the outstanding verification issues—a conclusion that IAEA reasserted in May 2007. In its follow-up inspection, IAEA reported that Iran had neither suspended its enrichment activities nor provided the necessary transparency to remove uncertainties associated with some of its activities. Iran continued to defy the UNSC resolutions and was sanctioned by a series of additional UNSC resolutions between 2006 and 2010 that, among other things, prohibited the sale of technology that could contribute to Iran's enrichment activities and froze the financial assets of entities involved in the Iranian nuclear industry. Beginning in 2006, six countries formed a group, the "Permanent Five Plus 1," to negotiate with Iran through a series of discussions. The group has met with Iran on several occasions but, to date, has not achieved any breakthroughs or reached agreement with Iran. A November 2011 IAEA report cited credible information indicating that Iran had carried out activities relevant to the development of a nuclear explosive device and was continuing to expand its inventory of enriched uranium, raising serious concerns. Most recently, the November 2012 IAEA report stated that Iran had installed additional centrifuges and had continued to enrich uranium. In addition, the report reiterated IAEA's inability to reach agreement with Iran on a "structured approach" to resolving outstanding questions regarding the potential military dimensions of Iran's program that were cited in the November 2011 report. In addition to the contact named above, Pierre Toureille (Assistant Director), Tetsuo Miyabara (Assistant Director), John F. Miller, Eddie Uyekawa, Emily Biskup, Grace P.
Lui, Michael Hoffman, Tonita Gillich, Gergana Danailova-Trainor, Jennifer Young, Debbie Chung, and Bruce Kutnick made key contributions to this report. Additional technical assistance was provided by Joanna Berry, Gezahegne Bekele, Etana Finkler, Martin De Alteriis, Fang He, Reid Lowe, Elisabeth Helmer, Emily Gupta, Roberto Pinero, Courtney LaFountain, and Heather Latta.
Since 1987, the United States has implemented a broad range of sanctions targeting Iran to deter it from developing its nuclear program, supporting terrorism, and continuing its human rights abuses. Since 2010, Congress has enacted additional financial sanctions that generally restrict Iranian access to the U.S. financial system. In addition, the United Nations and the European Union have adopted several sanctions to compel Iran to suspend its nuclear program. However, concerns have been raised in Congress and by the United Nations about the impact of these sanctions, including the effect of recent financial sanctions on exports of humanitarian goods to Iran. The export of certain humanitarian goods to Iran is allowed by U.S. law under specified conditions. In this report, GAO (1) describes recent laws and executive orders that have added to Treasury's authority to implement financial sanctions targeting Iran, (2) describes U.S. efforts to administer and enforce the financial sanctions, and (3) analyzes evidence of the effect that recent U.S. and international sanctions have had on the Iranian economy. GAO reviewed U.S. public laws, executive orders, and agency guidance; met with U.S. agency officials; and analyzed trade and economic data from the International Monetary Fund, European Union, and others, as well as forecasts of Iran's future economic performance. Since 2010, congressional legislation, such as the Comprehensive Iran Sanctions, Accountability, and Divestment Act of 2010 (CISADA), as well as a number of executive orders, have established additional U.S. financial sanctions targeting Iran. For example, CISADA authorized the imposition of sanctions on foreign financial institutions that facilitated certain activities or financial transactions by entities including Iran's Islamic Revolutionary Guard Corps. According to an Under Secretary of the Treasury, CISADA "set a new precedent," because "[i]t gave the Secretary of the Treasury the authority for the first time to require U.S. banks to terminate correspondent banking relationships with foreign banks that knowingly engaged in significant transactions with designated Iranian banks." The Department of the Treasury (Treasury), along with other U.S. government agencies, administers and enforces U.S. financial sanctions targeting Iran. Treasury administers the sanctions by developing regulations, conducting outreach to domestic financial regulators and foreign banks, identifying apparent sanctions violations, and assessing the effects of the sanctions. State administers some investment and trade sanctions, principally energy sanctions, targeting Iran. U.S. agencies and federal and state banking regulators have taken a range of actions to ensure compliance with financial sanctions. Specifically, in recent years, Treasury and the Department of Justice (Justice) have taken actions against banks for systematic and willful violations of sanctions laws, including violations of U.S. financial sanctions regulations targeting Iran. For example, in 2012, Justice announced that both HSBC Holdings, PLC and HSBC Bank USA NA had agreed to forfeit $1.256 billion to the United States in connection with violations of sanctions targeting Iran, among other countries. The combination of U.S. and international sanctions has adversely affected the Iranian economy and its future outlook.
According to GAO's analysis, the Iranian economy has consistently underperformed the economies of comparable peer countries across a number of key economic indicators since 2010, when recent sanctions were enacted. In contrast to its peers, Iran's oil production, oil export revenues, and economic growth estimates have fallen, and its inflation has increased. For example, Iran's oil export revenues fell by 18 percent from 2010 to 2012, while its peers' oil export revenues increased by 50 percent. In addition, professional and International Monetary Fund forecasts of the Iranian economy were downgraded to reflect deterioration in Iran's expected economic performance after the implementation of recent sanctions. Some experts have stated that Iran's recent economic deterioration has resulted from a combination of sanctions and Iranian economic mismanagement. GAO's analysis of European Union and U.S. exports of humanitarian goods to Iran indicates that exports of these goods, such as agricultural products and medicines, increased in the first 10 months of 2012 compared with 2011. UN reports have raised concerns about the availability of such goods in Iran. According to open sources, the government of Iran has tried to adapt to the sanctions through various means, including using alternative payment mechanisms such as barter agreements and changing its trading partners. However, these arrangements have thus far not fully offset the reduced exports of oil to the European Union and others.
From an insurance standpoint, measuring and predicting terrorism risk is challenging. The difficulties of measuring and predicting losses associated with terrorism risks stem from factors including lack of experience with similar attacks, difficulty in predicting terrorists’ intentions, and the potentially catastrophic losses from terrorist attacks. To underwrite insurance—that is, decide whether to offer coverage and at what price—insurers consider both the likelihood of an event (frequency) and the amount of damage it would cause (severity). Although insurers increasingly have used sophisticated modeling tools to assess terrorism risk, from a statistical perspective little data exist on which to base estimates of future losses in terms of frequency or severity, or both. Reinsurers (insurers for insurers) follow an approach similar to that of insurers for pricing risk exposures and charging premiums based on that risk and, therefore, face similar challenges in pricing terrorism risks. Congress passed TRIA in 2002 to address some of the challenges the insurance industry and businesses faced after the September 11 attacks, when coverage for terrorism risk generally became unavailable. The goals of TRIA are to (1) protect consumers by addressing market disruptions and ensuring the continued widespread availability and affordability of commercial property/casualty insurance for terrorism risk; and (2) allow for a transitional period for the private markets to stabilize, resume pricing of such insurance, and build capacity to absorb any future losses, while preserving state insurance regulation and consumer protections. As required by TRIA, insurers must make terrorism coverage available to commercial policyholders, although commercial policyholders are not required to buy it. TRIA requires an insurer to make coverage for terrorism losses available that does not differ materially from the terms, amounts, and other coverage limitations applicable to losses arising from events other than acts of terrorism. For example, an insurer offering $100 million in commercial property coverage must offer $100 million in coverage for property damage from a certified terrorist event. As discussed in greater detail later in this report, insurers can charge a separate premium to cover their terrorism risk, although some include the coverage in their base rates for all-risk policies. Neither insurers nor the federal government charges for the government’s coverage of terrorism risk under TRIA, but the government may recoup at least some of its losses following a terrorism event. For eligible lines, TRIA covers insured losses resulting from an act of terrorism, which is defined, in part, as a “violent act or an act that is dangerous” to human life, property, or infrastructure. The act is silent about losses from attacks with nuclear, biological, chemical, or radiological weapons (NBCR). Initial loss sharing. In the event of a certified act of terrorism, TRIA’s loss-sharing structure requires that insurers pay claims on covered terrorism losses and that Treasury reimburse individual insurers for losses that exceed a specified amount. For federal compensation to be paid, aggregate industry insured losses from certified acts must exceed a certain amount (program trigger). For calendar year 2016, this amount was $120 million. 
An individual insurer with terrorism losses in excess of a deductible (20 percent of its previous year's direct earned premiums in TRIA-eligible lines) may make a claim to Treasury for payment of the federal share of compensation for its insured losses. As shown in figure 1, Treasury would reimburse the insurer for a certain percentage of its losses (84 percent for calendar year 2016) above the deductible, and the insurer would be responsible for the remaining portion (16 percent). Annual coverage for losses is limited (capped) so that aggregate industry insured losses in excess of $100 billion are not covered by private insurers or the federal government. Recoupment. The federal share of losses may be recouped after a terrorist event through premium surcharges. As previously discussed, Treasury reimburses an insurer for a certain percentage of its losses above its deductible. If insurers' aggregate losses are below a specified amount, Treasury may be required to recoup federal losses through post-event premium surcharges. Figure 2 shows the TRIA funding mechanism before and after a terrorism event. Specifically, the program includes a provision for mandatory recoupment of at least a portion of the federal share of losses if the aggregate sum of all insurers' deductibles and co-shares is below an amount prescribed by TRIA—known as the industry aggregate retention amount. Under mandatory recoupment, the insurers must impose and remit to Treasury a premium surcharge on all policies in TRIA-eligible lines until total industry payments reach 140 percent of any mandatory recoupment amount. Treasury establishes the amount of the mandatory recoupment surcharge. The collection time frames for mandatory recoupment range from 1 year and 9 months to about 6.5 years, depending on when the terrorism event occurs. When federal assistance exceeds the mandatory recoupment amount, TRIA allows for discretionary recoupment. Under the discretionary recoupment provision, Treasury may recoup additional amounts based on the ultimate cost to taxpayers of no additional recoupment, economic conditions in the marketplace, the affordability of commercial insurance for small and medium-sized businesses, and other factors Treasury considers appropriate. Treasury also sets the surcharge for discretionary recoupment, but the increase to TRIA-eligible premiums must not exceed 3 percent per calendar year. Changes in TRIA reauthorizations. As shown in table 1, the TRIA reauthorizations have changed several loss-sharing provisions of the program. Over time, the reauthorizations have reduced federal responsibility for losses and increased private-sector responsibility for losses. The 2015 reauthorization requires further incremental decreases in the federal share of losses over 5 years. In addition, the 2015 reauthorization requires insurers in the program to submit information to Treasury about the coverage they write for terrorism risk, including the lines of insurance with exposure to such risk, the premiums earned on such coverage, and the participation rate for such coverage. Insurance in the United States is primarily regulated at the state level. State regulators license agents, review insurance products and premium rates, and examine insurers' financial solvency and market conduct. In addition, through the NAIC, state insurance regulators (of the 50 states, the District of Columbia, and the U.S. territories) establish standards and best practices, conduct peer reviews, and coordinate their regulatory oversight.
For issues that involve a national standard or require uniformity among all the states, the NAIC develops and distributes model insurance laws and regulations for consideration among its member states. Generally, state law requires insurers to file rates (and to file insurance forms) with state regulators, who review the rates to ensure they are not excessive, inadequate, or unfairly discriminatory. States vary in the timing and depth of their reviews of insurers' rates and contractual language. Many state laws have filing or review exemptions (or both) that apply to large commercial policyholders. State insurance regulators do not perform rate or form reviews for these entities because it is presumed that large businesses have a better understanding of insurance contracts and pricing than the average personal-lines consumer and, as such, are able to negotiate price and contract terms with insurers effectively. Capital requirements, accounting standards, and other tools help state regulators, insurers, and other entities monitor and mitigate potential risks and assess insurers' financial strength. Risk-based capital requirements: State regulators require insurance companies to maintain specific levels of capital to continue to conduct business. Regulators determine the minimum amount of capital appropriate for an insurer to support its overall business operations, taking into consideration its size and risk profile. Most U.S. jurisdictions have adopted statutes, regulations, or bulletins that are substantially similar to NAIC's Risk-Based Capital for Insurers Model Act, according to NAIC, and also use formulas that NAIC has developed to establish a minimum capital requirement based on the types of risks to which a company is exposed. NAIC has separate models for different lines of insurance. Own risk and solvency assessments: Starting in 2015, state regulators began requiring large- and medium-size U.S. insurance groups to regularly conduct own risk and solvency assessments and submit an annual written report to either the insurer group's lead state or state of domicile, depending on whether the assessment is prepared on a group or legal-entity basis. The assessments are internal processes undertaken by insurers or insurance groups to assess the adequacy of their risk management and current and prospective solvency positions under normal and severe stress scenarios. Enterprise risk-management practices: Insurance companies use these practices to obtain an enterprise-wide view of their risks and help management engage in risk-based decision making. Enterprise risk management generally has two goals: (1) to identify, evaluate, and quantify risks; and (2) to ensure that the organization actively implements risk-treatment strategies and manages appropriate risk levels. Examples of specific enterprise risk-management practices include identifying and categorizing risks, establishing well-defined risk tolerances, assessing risk mitigation with cost-benefit analyses, and conducting stress tests and other risk-modeling analyses. Insurance companies must report much of this information annually in their summary reports for own risk and solvency assessments. Accounting standards and financial reporting: Insurers must report their financial holdings on an individual legal entity basis to the regulator in their state of domicile, using statutory accounting principles (SAP).
According to documentation from NAIC, SAP are designed to assist state insurance departments in the regulation of the solvency of insurance companies. The ultimate objective of solvency regulation is to ensure that policyholder, contract holder, and other legal obligations are met when they come due and that companies maintain capital and surplus at all times and in such forms as required by statute to provide a margin of safety. SAP stress the measurement of an insurer's ability to pay claims in the future. In addition to SAP, insurance groups may report financial holdings information using generally accepted accounting principles (GAAP), which in the United States are promulgated by the Financial Accounting Standards Board (FASB) and are designed to provide decision-useful information to investors and other users of financial reporting. SAP and GAAP recognize certain items differently and therefore may result in different reported capital and net income amounts. Unless otherwise noted, references in this report to accounting for or recording liabilities refer to SAP. Credit ratings: A credit rating is generally intended to measure the likelihood of default for an issue or issuer, such as an insurer. To assign an appropriate rating, credit rating agencies assess the financial strength of insurance companies and their ability to meet ongoing obligations to policyholders by analyzing companies' balance sheets, operating performance, and business profiles. Insurers we interviewed stated that they manage their terrorism exposure using several tools, and all said they generally charge premiums for terrorism risk coverage, although data to accurately price terrorism risk are lacking. Insurers' practices for managing their exposure and pricing terrorism risk coverage are intended to cover their share of losses under TRIA (their deductibles and co-shares). Insurers do not consider in their pricing the potential federal share of losses, which may be recouped after an event. Based on interviews we conducted in our previous work and our work for this report, insurers manage their terrorism exposure by establishing geographic risk limits, considering potential terrorism losses when assessing capital adequacy, and purchasing reinsurance. Location-based risk limits. In our 2008 report, we found that the insurers we interviewed determined the amount of terrorism coverage they would be willing to provide in defined geographic areas, such as financial districts where many large buildings are located or specific parts of cities considered at high risk of attack. We also found that some insurers used models available from risk modeling firms to estimate the severity of potential attack scenarios to determine internal limits on the aggregate coverage they would offer in defined areas (aggregation limits). For example, they would limit their aggregate exposure in 250-foot, 500-foot, or quarter-mile circles around certain landmarks or areas where the insurer had high concentrations of risk. Officials from two insurers we interviewed for this report discussed how they manage their terrorism exposure using aggregation limits. For example, one insurer stated that it manages its exposure based on estimated potential losses at more than 200 identified landmark locations spanning its U.S. exposure base. The insurer told us that it calculates its loss estimates for a conventional terrorism event that causes a building collapse at a single location.
As part of this calculation, it told us that it considers the portion of potential losses that would be covered by the insurer, its reinsurer, and the federal government in setting an internal limit for terrorism exposure in each location. Considering terrorism risk in managing and assessing capital. Most insurers we interviewed also stated they manage their terrorism risk by considering their terrorism risk exposure as part of external requirements or internal assessments related to capital adequacy. Insurers’ capital generally is intended to be available for purposes such as unexpected losses and expanding the business and is not segregated for specific purposes (see fig. 3). Insurers are generally free to manage their capital as long as they satisfy external solvency and liquidity requirements as well as internal assessments. State regulators require insurance companies to maintain specific levels of capital (risk-based capital requirements), but their capital calculations do not specifically address terrorism risk exposure. However, rating agencies may assess insurers’ terrorism risk exposure specifically as an indicator of financial strength. For example, A.M. Best’s ratings evaluation methodology for terrorism risk includes calculating appropriate levels of capital for insurers with material terrorism risk exposure. In their own internal assessments of capital adequacy, insurers may decide to exceed what is required. Most U.S. insurers hold several times more capital than states require. In addition, although not required under NAIC’s risk-based capital calculations, five of the six insurers we interviewed stated that they specifically measure their terrorism risk exposure in determining the appropriate amount of capital to maintain, including three insurers which indicated that they specifically consider their terrorism risk exposure due to rating agency assessments. One insurer explained that its capital calculations include estimates for its potential deductible and copayments under the current TRIA structure. Three insurers we interviewed stated they also consider terrorism risk in their internal enterprise risk-management assessments or their own risk and solvency assessments—two internal processes insurers use to monitor and mitigate potential risks. Because own risk and solvency assessments are a recent requirement from state regulators applicable to large- and medium-size insurers, few insurers and regulators we interviewed had experience with them at the time of our review. For example, one state regulator stated that it had reviewed some insurers’ filings for the assessments, which discussed risk management for multiple terrorism events, while another state regulator had not yet seen terrorism risk addressed specifically in insurers’ filings. Insurers may consider terrorism risk exposure in their assessments of the adequacy of capital, but they do not set aside funds specifically for potential future terrorism losses in their assets or liabilities. Insurers’ assets are available for potential covered losses and generally are not segregated or restricted for limited uses. However, in some circumstances insurers may segregate or restrict their assets for specific purposes such as for collateral. None of the insurers we interviewed indicated doing so for potential terrorism losses. Insurers generally account for actual or expected claims by establishing loss reserves as liabilities on their balance sheet. 
As with all future losses, insurers cannot create loss reserves for potential terrorism losses before an event occurs. Specifically, accounting standards for recording insurance liabilities state that insurers may create a loss reserve only for a covered event that has occurred and for which the cost of the event is estimable. No liability exists without the occurrence of a covered event. Insurance and state regulatory officials we interviewed confirmed that insurers do not include estimated potential future losses for terrorism or other potential catastrophic events in loss reserves. Purchasing reinsurance. Five of the six insurers we interviewed purchase reinsurance to help manage their terrorism exposure. Most of these insurers used a portion of the terrorism premiums they collected to purchase the reinsurance. As we previously reported, primary insurers may purchase reinsurance for potential terrorism losses up to the difference between what they are willing to cover in a terrorism event and the sum of their TRIA deductible and co-share under the program. Four of the six insurers we interviewed said they purchased treaty reinsurance coverage, which usually covers a part or a percentage of a book of an insurer’s business across multiple risks. According to a reinsurance broker we interviewed, the majority of terrorism reinsurance is sold along with other property/casualty coverage. One insurer also stated that on rare occasions it purchased facultative reinsurance (which covers individual policies) for specific risks or unique cases. Finally, another insurer we interviewed stated it purchased stand-alone terrorism risk reinsurance—coverage for terrorism risks only—for all policies with terrorism coverage. Insurers we interviewed acknowledged the difficulty of pricing terrorism risk and attributed it to the lack of data on terrorism risk. We previously found that insurers’ primary concern with respect to covering terrorism risks was limiting the amount of their exposures and that pricing was secondary. For most insurance products, insurers typically estimate the frequency and severity of an insurable risk based on data from past events to help calculate premiums that are commensurate with their risk exposure. However, as previously discussed, terrorism risk insurance is challenging to price because the frequency and severity of terrorism events are difficult to predict and quantify. As we concluded in prior work, because the frequency and severity of terrorism events are difficult to predict, the limits established in TRIA (which cap the potential severity of losses to insurers) make underwriting the risk and determining a price for terrorism coverage easier for insurers. However, one insurer and one reinsurer we interviewed expressed concern that primary insurers do not charge premiums sufficient to reflect their terrorism risk exposure. The charge for terrorism risk coverage generally represents a small percentage of the overall commercial property/casualty premium, if insurers explicitly charge for it at all. According to insurers we interviewed, terrorism risk insurance is generally provided in conjunction with commercial property/casualty policies. For most TRIA-eligible lines, five of the six insurers we interviewed told us that they generally charge a percentage of overall property/casualty premiums for their share of terrorism risk coverage under TRIA. 
A report from Marsh found that from 2012 to 2015 policyholders paid between 3 percent and 5 percent of their total property premium for terrorism risk coverage. According to Treasury's 2016 report, on average, reporting insurers that charged for terrorism risk insurance charged about 2.6 percent of the total policy premium for terrorism risk coverage, and the percentage charged varied from 0.7 percent to 7.1 percent, depending on the line of insurance. According to Treasury's 2016 report, although an explicit premium is charged for terrorism risk coverage in the majority of cases, about 23 percent of reporting insurers did not identify explicit terrorism risk premiums for such coverage. The report stated that insurers may not explicitly charge for terrorism risk coverage for reasons such as lack of any cognizable terrorism risk in certain regions or under certain policies or to ease administrative burden. An official from an insurance broker we interviewed also stated that some insurers do not include an additional charge for terrorism coverage in many of their policies. While one insurer we interviewed charged a flat rate for terrorism risk coverage—that is, a premium rate per dollar of coverage that did not vary with location or other risk factors—four of the six insurers we interviewed considered location a risk factor and thus charged policyholders located in densely populated areas a higher rate for the coverage. For example, one insurer stated it uses a model from the Insurance Services Office with a risk classification system that places urban centers into one of three tiers of risk classes. The insurer then enters its own terrorism risk exposure information into the model to price terrorism risk coverage. According to insurers we interviewed, other risk factors insurers consider to price terrorism risk coverage include building occupancy rates; industry; and proximity to airports, federal buildings, and subways. For example, one insurer stated it created a range for terrorism risk insurance pricing in which risks are slotted into low, medium, and high codes based on industry sectors to determine the starting point for pricing. Once a risk is assigned to a pricing slot, the insurer assesses risk-based factors such as location and occupancy. Depending on the number of high-risk characteristics that apply, the insurer selects a specific price from the range. Terrorism risk pricing for workers' compensation lines of insurance—which cover an employer's liability for medical care and physical rehabilitation of injured workers and help to replace their lost wages—is more standardized than for other TRIA-eligible lines. For workers' compensation lines, insurers in 38 states use rates developed by the National Council on Compensation Insurance (NCCI). According to NCCI, to help set workers' compensation premium rates that include terrorism risk, NCCI uses information from modeling firms. These firms select various scenarios (different weapons, locations, damages, and frequencies), estimate the amount of damage to human life and the amount of losses under each scenario, and assign probabilities to each scenario. For workers' compensation lines of insurance in general, NCCI representatives stated that they typically set rates using actual data on the number, types, and costs of workplace injuries.
However, the paucity of data about actual terrorism events—and the workplace injuries that could result—necessitates the use of modeling techniques using various assumptions to estimate potential losses from terrorism events. According to NCCI officials, NCCI sets one rate for terrorism risk in each of the 38 states it manages (that is, rates do not vary within a state for other factors such as location, company size, or industry). Rates can vary across states because the perceived risk of terrorism is higher in some states than in others. Each state also may impose local surcharges on the NCCI rate for that state. States that do not rely on NCCI either use a state rating agency to set rates for terrorism risk or require employers to obtain workers' compensation insurance from a compulsory state fund. State rating agencies operate similarly to NCCI in setting one rate for terrorism risk insurance for the entire state. Our analyses showed that the federal government initially may sustain a greater share of losses in more catastrophic terrorism events and that in some scenarios recoupment may not be required. TRIA requires Treasury to reimburse affected insurers for a certain percentage of their losses above their individual deductibles, and Treasury may recoup some or all of its losses through post-event premium surcharges on all TRIA-eligible policyholders. The federal share of losses depends on the size of the terrorism event and the aggregate direct earned premiums in TRIA-eligible lines among affected insurers (premium base). In addition, recoupment may be mandatory, discretionary, or a combination, depending on the size of the event and the premium base. Finally, because the 2015 reauthorization incrementally shifts a greater share of losses from the federal government to insurers from 2016 to 2020, the date of the terrorism event also affects how losses would be shared and how the federal share of losses would be apportioned between mandatory and discretionary recoupment provisions. As shown in figure 4 and consistent with the manner in which the program is structured, our analyses showed that the initial government share of losses (the amounts the federal government would reimburse insurers, prior to any recoupment from policyholders) would be greater following events with more total insured losses. Using a number of informed assumptions, we analyzed loss scenarios for hypothetical terrorism events in 2016 with varying amounts of total insured losses ($25 billion, $50 billion, $75 billion, and $100 billion) and varying subsets of affected insurers. An individual insurer's losses are the sum of its deductible and co-share. Under TRIA, an insurer's individual deductible is calculated based on its direct earned premium in TRIA-eligible lines of insurance. For our analyses, we constructed insurers' aggregate deductibles, which are equal to the sum of 20 percent of each affected insurer's previous year's direct earned premiums in TRIA-eligible lines (premium base). We used direct earned premiums of the top 4, top 10, top 20, and all TRIA-eligible insurers as proxies to represent various-sized premium bases. For more information on these scenarios and others, see appendix II. Additionally, our analyses showed that the initial proportion of losses that would be borne by the federal government and insurers depends, in part, on the amount of losses from the terrorism event relative to aggregate deductibles, as the sketch below illustrates.
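The following sketch makes the 2016 loss-sharing arithmetic concrete. It is an illustrative simplification, not the analysis underlying figure 4: the premium bases are hypothetical placeholders rather than actual direct earned premiums, and the program trigger and $100 billion cap are omitted.

```python
# Sketch of the aggregate 2016 loss-sharing arithmetic described above:
# the aggregate deductible is 20% of the affected insurers' premium base,
# and the federal government reimburses 84% of losses above it. The
# premium bases are hypothetical placeholders, not actual direct earned
# premiums; the program trigger and $100 billion cap are omitted.
DEDUCTIBLE_RATE = 0.20
FED_SHARE = 0.84

def shares(total_loss, premium_base):
    """Split an event's insured losses between insurers and the government."""
    aggregate_deductible = DEDUCTIBLE_RATE * premium_base
    excess = max(total_loss - aggregate_deductible, 0.0)
    federal = FED_SHARE * excess
    return total_loss - federal, federal

premium_bases = {"top 4": 40e9, "top 10": 75e9, "all": 200e9}  # hypothetical
for label, base in premium_bases.items():
    for loss in (25e9, 50e9, 75e9, 100e9):
        insurer_share, federal_share = shares(loss, base)
        print(f"{label}, ${loss / 1e9:.0f}B event: "
              f"federal share {federal_share / loss:.0%}")
```

Running the sketch reproduces the qualitative patterns discussed next: the federal proportion rises with the event size and falls as the affected insurers' premium base grows.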
The federal government would not bear the cost of any losses if the total losses were less than insurers' aggregate deductibles. For example, in an event that affected insurers with a premium base equal to that of the top 10 insurers and resulted in $5 billion in total losses, insurers likely would sustain all losses because insurers' losses likely would fall below their respective deductibles. Losses in excess of the insurers' deductibles would be shared between insurers and the federal government. For example, in an event that affected insurers with a premium base equal to that of the top 10 insurers and resulted in $50 billion in total losses, insurers and the federal government would share the losses above the insurers' deductibles. In this example, insurers' deductibles would not cover all losses. As total losses increase above the insurers' deductibles, the government share of losses increases at a higher rate than the insurers' share of losses. Because the federal co-share is much larger (84 percent in 2016) than the insurer co-share (16 percent in 2016), the initial federal share of losses is much higher than insurers' share of losses in events with higher total losses. For example, in an event that affected insurers with a premium base equal to that of the top 10 insurers and resulted in $75 billion or $100 billion in losses, the government share of losses would be much larger than the insurers' share. In addition, our analyses showed that as losses are shared by insurers representing a larger premium base, the government share would decrease. This occurs, in part, because as the aggregate premiums of affected insurers increase, the aggregate insurers' deductible also increases. For example, as illustrated in figure 5, in an event that resulted in $50 billion in losses, the portion of the losses covered by the federal government decreases as the aggregate premium base among the affected insurers increases. The federal share of losses that could be recouped may fall under the mandatory provision, the discretionary provision, or both. As illustrated in figure 6, our analyses showed that the proportion of mandatory and discretionary recoupment amounts depends on the total amount of terrorism losses (event size), the subset of insurers that sustained losses, and whether insurers' losses were more or less than the industry aggregate retention amount. General recoupment scenarios and examples from our analyses are described below. The federal government may be required to recoup its total share of losses. If the total losses from the terrorism event were less than the industry aggregate retention amount, all government losses would have to be recouped under the mandatory recoupment provision. For example, in an event that resulted in $25 billion in losses, regardless of the premium base of affected insurers, all recoupment would be mandatory because the total losses would be below the industry aggregate retention amount ($31.5 billion in 2016). The amount recouped would be the difference between total losses and the insurers' share of losses. The federal government may not be required to recoup any of its losses. If insurers' share of losses exceeded the industry aggregate retention amount, all government losses would fall under the discretionary provision and equal the difference between all losses or the maximum loss cap (whichever was lower) and the insurers' share of losses.
For example, our analyses showed that all recoupment would be discretionary for an event where all insurers were affected because the aggregate insurer deductible would exceed the industry aggregate retention amount. The federal government may be required to recoup only a portion of its losses. If total losses exceeded the industry aggregate retention amount and insurer losses were less than the retention amount, the government share of losses would fall under both the mandatory and discretionary provisions. The mandatory portion would be the difference between the retention amount and the insurers' losses. The discretionary portion would be the difference between total losses or the maximum loss cap (whichever was lower) and the industry aggregate retention amount. For example, when affected insurers had an aggregate premium base equal to that of the top 4 or top 10 insurers and the total losses exceeded $31.5 billion ($50 billion, $75 billion, or $100 billion in total losses), recoupment would be split between mandatory and discretionary because the insurers' losses were less than the industry aggregate retention amount but the total losses exceeded the aggregate retention amount. As figure 6 illustrates, our analyses also showed patterns in the portions of losses to be recouped under the mandatory and discretionary provisions. Losses recouped under the mandatory provision decreased as the aggregate premium base of insurers with losses increased. Of the event sizes that we analyzed, events with $40 billion to $50 billion in losses generally resulted in the highest mandatory recoupment across the premium bases. The discretionary recoupment amount increased as the size of the event increased and in very large events could exceed $60 billion. Discretionary recoupment generally was not affected by the aggregate premium base of affected insurers. For example, for all insurer subsets, the discretionary portion of recoupment increased with the event size, but the size of the discretionary portion for each event size generally would be the same whether the affected insurers' aggregate premium base was equal to that of the top 4, top 10, or top 20 insurers. In addition to the total amount of losses and the aggregate premium base of insurers with losses, the date of a terrorism event would affect how losses would be shared after a terrorism event and how federal losses would be apportioned between recoupment provisions. Specifically, the 2015 reauthorization of TRIA incrementally shifts a greater share of losses from the federal government to insurance companies from 2016 to 2020, as shown in table 2. For example, for loss sharing, the increases in the program trigger will increase the total insured losses that must occur before the government would incur any losses. In addition, the increase in the insurer co-share will decrease the federal share for losses above the insurers' aggregate deductible. For federal losses apportioned between mandatory and discretionary recoupment, the industry aggregate retention increases by $2 billion per year. This change potentially shifts a portion of the federal share of losses from the discretionary recoupment provision to the mandatory provision. The differing mandatory recoupment collection time frames from 2016 to 2020 could affect potential premium increases due to recoupment surcharges.
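Setting collection timing aside, the apportionment between the mandatory and discretionary provisions reduces to a simple rule. The sketch below illustrates the scenario logic described above using the 2016 industry aggregate retention amount; it is not Treasury's methodology, the loss figures are hypothetical, and the $100 billion cap is omitted.

```python
# Sketch of the mandatory/discretionary recoupment split described above,
# using the 2016 industry aggregate retention amount; an illustration of
# the scenario logic, not Treasury's methodology. The loss cap is omitted.
RETENTION = 31.5e9
SURCHARGE_MULTIPLIER = 1.40  # surcharges run to 140% of the mandatory amount

def recoupment_split(total_loss, insurer_loss):
    """Apportion the federal share between mandatory and discretionary."""
    federal = total_loss - insurer_loss
    if total_loss <= RETENTION:
        mandatory = federal                   # all recoupment is mandatory
    elif insurer_loss >= RETENTION:
        mandatory = 0.0                       # all recoupment is discretionary
    else:
        mandatory = RETENTION - insurer_loss  # split between the provisions
    return mandatory, federal - mandatory

# Hypothetical $50 billion event in which insurers bear $20 billion:
mandatory, discretionary = recoupment_split(50e9, 20e9)
print(mandatory / 1e9, discretionary / 1e9)    # 11.5 and 18.5 (billions)
print(SURCHARGE_MULTIPLIER * mandatory / 1e9)  # surcharges total 16.1
```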
While discretionary recoupment surcharges must not increase annual TRIA-eligible premiums by more than 3 percent, mandatory recoupment surcharges in part would be determined by the deadlines for collecting mandatory recoupment. For example, our analyses showed potentially large surcharges resulting from events that occurred in 2017 (the year with the shortest collection time frame). The two alternative funding options we analyzed would require trade-offs and present complexities. First, a federal charge for terrorism risk insurance could be collected to pay for the federal share of potential losses or to cover the cost of the federal guarantee under TRIA. Second, insurer set-asides—through which insurers would more explicitly address their terrorism exposure through their capital, assets, or liabilities—could be used to help cover insurers' share of potential terrorism losses or both insurers' and the federal government's shares of potential losses. Depending on the approach, a federal terrorism insurance charge may help promote some pricing objectives but could involve significant limitations and trade-offs. Based on our prior work on designing and assessing federal user fees, other government-collected funds, and user-based taxes, we identified pricing objectives and characteristics that could help policymakers consider various approaches for a federal charge for terrorism risk insurance, as shown in table 3. Using four pricing objectives (promoting economic efficiency, equity, revenue adequacy, and limiting administrative burden), we evaluated two approaches for a voluntary or required federal charge for terrorism risk insurance: Premium-like charge: A charge that would be intended to help pay for the federal share of potential losses and could replace the current recoupment provision. Such a charge could be an amount based on risk using insurance principles, or it could be designed as a flat rate or vary based on insurer or insured characteristics. Backstop charge: A charge or fee paid to the Treasury for the promise of payment of the federal share of losses, or backstop. Such a charge could be determined in a variety of ways, but it would not necessarily be based on insurance principles. In addition, recoupment could still be in place to cover the federal share of losses, as the charge would not be intended to be adequate to cover potential losses. Policymakers may face trade-offs among the pricing objectives, as no single design will satisfy all parties on all dimensions, and the level of importance different policymakers place on different objectives will vary, depending on how they value the characteristics of each. Economic efficiency. A premium-like charge could address risk and insurable value—characteristics of promoting economic efficiency—but would be difficult to price. Specifically, a premium-like charge could be based on existing premiums that policyholders paid insurers. However, this approach has some limitations because the existing, underlying premiums are not accurately priced according to risk. As we previously discussed, insurers lack data to accurately price terrorism risk insurance and may charge a nominal percentage of the underlying commercial property/casualty premium.
Four industry associations, three insurers, an insurance broker, and a state regulator said a reliable method or model to estimate manmade catastrophes like terrorism events is not currently available, and representatives questioned how the federal government could set risk-based prices to promote economic efficiency in the absence of a viable method to estimate terrorism risk. Risk-mitigation activities—another characteristic of economic efficiency—would be challenging to incorporate into a premium-like charge. Three insurers stated that, due to the unpredictable nature of potential terrorism events, risk-mitigation measures could be more costly than beneficial because terrorists might change tactics in response to mitigation efforts. One of the insurers further noted that mitigation efforts would be less effective at an individual insurer or policyholder level than at a national level, and a state regulator said the federal government is the primary body for taking mitigation actions, such as through national security measures. While mitigation efforts may be difficult to incorporate into the premium pricing process for terrorism risk, an insurer said it takes certain mitigation efforts into consideration when deciding whether to accept a client. For example, the insurer said that 24-hour guard service, perimeter fencing, and intrusion detection devices are some mitigation efforts considered in the decision. In contrast, a backstop charge does not need to be closely tied to an estimate of each participant's terrorism risk exposure or risk-mitigation activities. For example, a backstop charge would need to cover only the cost of its administration rather than potential losses and, therefore, would not need to be risk-based. Equity. With either a premium-like charge or a backstop charge, policymakers may need to consider trade-offs between fairness and affordability. Industry stakeholders noted challenges in structuring a federal charge for terrorism risk to promote the characteristics of equity (fairness), which involve the extent to which the pricing structure (1) provides similar treatment to participants with similar levels of risk and (2) considers affordability. One insurer stated that to achieve equity, all participants with similar levels of terrorism risk should be charged similar rates. Two other insurers and a state regulator cautioned that prices needed to be affordable to maintain participation rates and support a thriving market that offers coverage for terrorism risk. One insurer pointed out that affordability is important to consider because it was a factor in the withdrawal of insurers and reinsurers from offering terrorism coverage after the September 11 terrorist attacks. One state insurance regulator said an affordability goal could be problematic to the extent it competes with the goal of each participant paying its fair share, and another state regulator said that an equitable charge would be more achievable if collected after a terrorism event—as currently structured under TRIA's recoupment provision—when more information would be known about the amount of losses. Revenue adequacy. If policymakers wanted to focus on raising adequate revenue to cover expected losses over time, a premium-like charge might be a better option than a backstop charge. However, some industry stakeholders we interviewed stated that a charge that could generate enough revenue to cover the federal share of losses would be cost-prohibitive.
An annual charge would need to be very high to accumulate enough funds over time to cover very large losses, which, an insurer surmised, could drive down take-up rates and require other sources of funds, such as surcharges or taxes. As also discussed, industry associations, insurers, and others we interviewed said that determining a price that would provide adequate revenues to pay for the federal share of potential losses would be difficult because estimating terrorism frequency and loss is not possible with any statistical accuracy. In contrast, a backstop charge approach would not be intended to cover expected losses. Administrative burden. If policymakers focused on reducing administrative burden, a backstop charge might result in lower administrative costs than a premium-like charge. Industry stakeholders generally held similar views that a premium-like charge would bring about higher levels of administrative burden than the current program—that is, greater responsibilities for program staff or third parties to collect and process the charge and oversee the effort. Officials from the Congressional Budget Office and the Congressional Research Service cautioned that collecting and managing federal charges from the more than 1,000 commercial property/casualty insurers would entail a large resource commitment from the federal government. According to our interviews, if such a charge were implemented, increases in staff and expertise would be required to collect, manage, and oversee the charges. Officials from Treasury estimated that a program that collected an upfront charge would necessitate an additional 15 to 20 full-time staff to collect the charges and audit primary insurers—up from the three staff that would be needed to administer third-party service contracts following a certified terrorism event. The officials said more staff could be needed to serve on an independent board and to manage the collection and investment of the funds. According to one insurer, pricing a federal charge as a flat rate on premiums across all TRIA-eligible insurance lines and including it on policyholders' statements, similar to premium taxes, would be transparent and easily audited and thus might require less administrative burden than a variable rate. Two insurers emphasized that the current method—recouping the federal share of losses after a terrorism event—is the least burdensome, with relatively low administrative costs to Treasury. In contrast, administrative burden under the backstop charge approach may be lower than under a premium-like charge because staff and operations would not be needed to estimate losses and set a corresponding pricing structure. Another alternative funding method for TRIA—designing and implementing an insurer set-aside—could be complex because of existing state laws and insurance accounting standards, among other reasons. An insurer set-aside could be designed to help cover insurers' share of potential terrorism losses, the federal government's share of potential losses, or both. Insurer set-asides could be structured in at least three ways: (1) loss reserves for events that have not yet occurred, (2) separate capital requirements for terrorism risk exposures, or (3) segregated assets that can be used only for a specific purpose, such as potential terrorism losses. We identified four proposals or current programs with insurer set-aside approaches to illustrate ways a potential set-aside could be designed, according to the three structures (see table 4).
These approaches also varied in participation requirements, target amounts, and use of the set-asides. The first two ways an insurer set-aside could be structured, establishing (1) loss reserves for events that have not yet occurred and (2) separate capital requirements for terrorism risk exposures, might not involve a formal or legal segregation and limitation of assets dedicated for a particular purpose, as illustrated in the following examples. The NAIC catastrophe reserve proposal uses loss reserves for events that have not yet occurred. Under this proposal, a participating insurer's set-aside would be structured as a separate liability on its balance sheet—distinct from loss reserves for events that have already occurred—without specific segregation of assets. In addition, the set-aside could be available to cover losses from multiple types of perils. The NAIC catastrophe risk weight uses capital requirements. In 2013, insurers began testing a weighted measure in their minimum risk-based capital determinations, taking into account earthquake and hurricane risks to determine a target amount of capital to maintain that would preserve their solvency following a natural catastrophe. Insurers' capital is not limited to covering losses from hurricanes and earthquakes. A third way of structuring an insurer set-aside involves segregation of assets for a specific purpose and is illustrated by proposals by House members to build segregated insurer set-asides for terrorism losses to help stabilize the marketplace following a terrorism event. Under these proposals, insurers could set aside a portion of terrorism risk premiums collected from policyholders as specifically segregated assets to be used only for potential terrorism losses. We discuss these approaches and examples of set-asides used in other countries in more detail in appendix IV. Implementing a set-aside for TRIA could be complex. Specifically, a loss reserve similar to NAIC's catastrophe reserve proposal could involve a significant departure from existing insurance statutory accounting standards. Implementing set-asides also could involve revisions of state insurer solvency laws and changes to federal tax law (for example, to provide a tax deduction). In addition, a catastrophe risk-weight approach would involve significant data and model development. Accounting practices. The four proposals and programs use a range of structures. One approach, in particular, would have implications for current practices related to statutory accounting for insurance losses. Specifically, an approach similar to the NAIC catastrophe reserve proposal—in which insurers would create loss reserves for events that have not yet occurred—would be contrary to the current general basis for recording insurance losses under SAP. As previously discussed, SAP state that insurers may create loss reserves only for an event that has occurred and for which the cost of the event is estimable. Implementing an approach for potential terrorism losses similar to the NAIC catastrophe reserve proposal would involve NAIC modifications of SAP to allow insurers to maintain loss reserves for events that have not yet occurred. In addition, using some of the set-aside approaches may affect the total amount of assets that an insurer holds to support its ability to pay current and future claims.
Under a structure in which insurers would establish loss reserves for events that have not yet occurred (similar to the NAIC catastrophe reserve proposal), an insurer's assets would remain available for all types of insured risks. However, initially establishing such new loss reserves would reduce capital. If, as a result, the insurer's capital fell below the minimum capital requirements, the insurer would need to raise additional funds to continue to meet those requirements. However, most U.S. insurers hold several times more capital than states require. If the insurer's capital still exceeded minimum capital requirements even after establishing such new loss reserves, creating a loss reserve for events that have not yet occurred might have no immediate impact on the amount of assets an insurer holds. Under a structure that would establish separate capital requirements for terrorism risk exposures, similar to NAIC's catastrophe risk weight, an insurer's assets also would remain available for all types of insured risk. If an insurer holds several multiples of the minimum required capital, an additional minimum capital component might have no immediate impact on the amount of capital an insurer holds. A structure in which insurers would be required to establish segregated assets that could be used only for a specific purpose, such as potential terrorism losses (similar to the legislative proposals), would limit such assets from being used for other types of insured risks. One state regulator and two insurers raised concerns that segregating assets for potential terrorism losses could prevent such assets from being used to pay for other losses. In addition, holding assets that are specifically segregated for terrorism losses may require insurers to raise additional capital. However, if an insurer holds several multiples of the minimum required capital, the segregation of assets might have no immediate impact on the amount of assets the insurer already holds. State laws. The complexity of implementing set-asides also could include revising state laws to recognize or account for how to treat the assets in the event states needed to oversee and resolve insolvent insurers. For instance, accommodating a set-aside with segregated assets might require amending NAIC's model laws and later adoption and enactment by the states. Furthermore, two state regulators pointed out that implementing a set-aside with segregated assets for potential terrorism losses could affect state oversight related to laws and practices for insurers that become insolvent. For example, the two state regulators pointed out that policymakers should consider whether funds in the TRIA set-aside would become part of state receivership procedures to pay non-TRIA claims of insolvent insurers. Federal tax laws. Some set-aside approaches also may have implications for federal tax laws. For example, current federal tax rules do not allow insurers to deduct potential losses. However, providing preferential tax treatment for potential terrorism losses—in conjunction with changes to SAP that would allow such reserves—could provide incentives for insurers to establish related loss reserves. Such revisions would need to incorporate limitations to prevent overestimation of potential loss reserves, as an overstatement of loss, if allowed, would improperly decrease the amount of taxable income. In addition, policymakers would need to clarify the tax implications related to a set-aside with segregated assets.
For example, policymakers would need to clarify whether amounts added to segregated assets would receive favorable tax treatment (such as tax credits or reductions to taxable income). The two legislative proposals we considered were not clear on the specific amounts available for insurers to use or whether any amount contributed to the set-aside would receive favorable tax treatment.

Data and model development. Implementing any of the terrorism set-aside approaches, particularly a risk weight as part of insurers' risk-based capital calculation requirements—similar to NAIC's existing catastrophe risk weight—would require historical and reliable data on terrorism losses. It would also require models to estimate potential losses, which could take several years to develop, test, and implement if data were available. For example, NAIC officials said that detailed property location data and the ability to reasonably model losses helped in the creation of the catastrophe risk weight, a process that took more than 10 years. Similar types of information would be needed to develop target amounts and time frames for set-asides or a risk weight for terrorism risk. While insurers increasingly have used sophisticated modeling tools to assess terrorism risk, little data exist on which to base estimates of future losses in terms of frequency, severity, or both. NAIC officials told us that they have begun high-level discussions to consider adding a terrorism risk weight and weights for other risks. Although they expect that they could shorten the development time frame because of their experience developing the catastrophe risk weight, such an approach remains a challenge because of the difficulties of measuring and predicting losses associated with terrorism risks.

TRIA's current recoupment mechanism and alternative funding options could affect affordability and participation for policyholders, the flexibility of the use of insurers' assets, and the exposure and role of the federal government. We examined the potential effects of TRIA's current recoupment provisions and the alternative funding options of a federal charge for terrorism risk insurance and set-aside approaches as follows.

Recoupment of federal share of losses (current TRIA structure). Following a certified terrorism event, Treasury recoups federal losses through premium surcharges on all policyholders with TRIA-eligible insurance line coverage.

Federal charge for terrorism risk insurance. A federal charge on insurers or policyholders structured as either (1) a premium-like charge intended to help pay for the federal share of potential losses and replace the current recoupment provision, or (2) a backstop charge paid to Treasury for the promise of payment of the federal share of losses, with recoupment still in place to cover the federal share of losses.

Terrorism set-asides. Insurers would be permitted or required to establish terrorism set-asides, potentially using one of the different types of set-aside approaches discussed in the previous section: (1) loss reserves for future terrorism losses, without segregating assets (similar to the NAIC proposal); (2) separate or additional capital requirements for terrorism risk, without segregating assets (similar to the catastrophe risk weight); or (3) segregated assets that could be used only for terrorism losses (similar to the legislative proposals). We assumed the terrorism set-aside approaches would retain TRIA's recoupment provision, but analyzed set-asides and recoupment independently of each other.
We present illustrative estimates of potential market impacts that recoupment and a federal terrorism charge could have on the price of TRIA-eligible insurance line coverage, policyholder participation in purchasing TRIA-eligible insurance line coverage, and the volume of TRIA-eligible insurance written. The lack of data on the terrorism risk insurance market and the low frequency of terrorism events (certified or otherwise) relative to other catastrophic events make estimating potential market effects challenging. For example, our analysis of the size of effects was limited by the lack of data on prices and participation rates and by uncertainty about insurer and policyholder reactions to recoupment and alternative funding options. As such, our numerical estimates of market effects rely on a number of informed assumptions (see app. I). Also, the results we discuss throughout this objective are based on average elasticity estimates (see app. V for results based on the high and low elasticity estimates). Elasticities could be affected by factors such as the location of policies. Our numerical estimates also are necessarily uncertain and speculative.

A terrorism event could affect the demand or supply of insurance and thus affect premium rates and insurers' volume of business. Actual market effects likely will differ depending on factors including insurer and policyholder behavior and federal actions for all options and, specifically for recoupment, the timing of the terrorism event, the amount of losses, and the subset of insurers affected. In addition to the potential effects we estimated, it is possible there could be no or minimal effects on the price that businesses pay for TRIA-eligible insurance, policyholder participation, or the volume of insurance written, as we note throughout this report. Finally, there are limitations on how the potential effects of the alternative funding options can be compared to the potential effects of recoupment, because recoupment would occur only after an event, while the effects of the alternative funding options would occur regardless of whether an event occurred.

In this section, we discuss the most significant potential effects of recoupment and the alternative funding options on policyholders, insurers, and the federal government. See appendix V for more information about our analysis and additional results, including effects on reinsurers and state regulators. Generally, the magnitude of potential effects for recoupment varies by the amount of losses caused by the terrorism event, the proportion of losses borne by insurers and the federal government, and the length of the collection period. The magnitude of potential effects for the alternative funding options varies by their design and implementation.

Recoupment surcharges and alternative funding options could increase prices for policyholders, thereby decreasing affordability and participation rates. According to Treasury's 2016 report, when reporting insurers charged for terrorism risk coverage, they charged between 0.7 percent and 7.1 percent of the total policy premium depending on the TRIA-eligible line of coverage, and the participation rate among policyholders was about 70 percent. We estimated potential effects on prices, policyholder participation, and insurers' net premium revenue of (1) recoupment surcharges or (2) a federal charge in specific scenarios.
We used two potential pricing methods that rely on different assumptions about insurers' pricing strategies in reaction to a recoupment surcharge or federal charge. Under one method (the percentage load method), we assumed that insurers fully pass through to policyholders a percentage increase in premium rates that may be specified by Treasury or future legislation. Under another method (the revenue target method), we assumed that insurers have the incentive to attempt to increase prices to collect as much additional direct earned premiums as possible, up to the annual collection or target amount. Using these two methods, we estimated price and participation changes that could result from (1) recoupment and (2) a premium-like charge. For example:

For terrorism events that result in mandatory recoupment amounts exceeding $20 billion, when using the percentage load method, we estimated that recoupment surcharges would increase TRIA-eligible insurance line prices by about 3 percent and could decrease policyholder participation by about 2 percent on average. When using the revenue target method, we estimated that premiums could increase by about 8 percent on average and participation could decrease by about 5 percent on average. After the recoupment collection period, policyholders could see price decreases.

Using multiple assumptions, we estimated the effects of a premium-like charge imposed directly only on policyholders with terrorism risk coverage. Specifically, when using the percentage load method, we estimated that terrorism risk insurance prices would increase by about 16 percent and participation on average could decrease by about 10 percent. When using the revenue target method, we estimated that prices on average would need to increase by about 43 percent to collect the target amount, which could result in an average participation decrease of about 27 percent.

Set-aside approaches that result in the need for insurers to raise additional capital to cover other types of insured risks may result in price increases and participation decreases. In particular, the approach in the legislative proposals may result in price increases for two reasons: (1) a portion of insurers' terrorism risk premiums would be shifted to a segregated asset account that could be used only for potential terrorism losses, and insurers might increase prices if, as a result, they needed to raise additional capital to cover other insured losses; or (2) depending on the size and timing of a terrorism event, some of the segregated assets might be used to pay for some or all of the federal share of losses, so that not all the premiums collected would necessarily be available to cover the insurer's own share of losses. The potential size of any price increase may depend on such factors as the cost of raising any additional capital, the perceived likelihood of events resulting in payments toward the federal share of losses, the perceived likelihood and timing of any recoupment payments, and the perceived likelihood that the federal government would reimburse insurers whose assets contributed to covering the federal share of losses. Other set-aside approaches that do not require segregation of assets may have a minimal impact, if any, on pricing. An insurer would pay only for its own losses, and its assets would remain available for all types of insured risks.
If the insurer already had enough capital to meet required standards, an additional reserve or capital requirement might have no impact on the amount of capital it needed to hold, as discussed earlier.

Price increases from recoupment surcharges or alternative funding options could vary by the extent to which insurers passed costs to policyholders and by insurer size. Two stakeholders stated that insurers likely would pass recoupment surcharges to policyholders, but other stakeholders pointed out that some insurance companies could choose to absorb the cost to maintain competitive prices. Stakeholders' opinions varied on whether insurers would pass on the federal charge to policyholders. Treasury officials said that insurers would want to pass the cost of a charge to policyholders, but market forces would dictate the extent to which they could. Another stakeholder said that most insurers likely would absorb the cost of a charge by spreading it across all the lines of business they wrote. This amount could be categorized as a general expense and might not be a significant addition to a premium rate. Additionally, the stakeholder said that the rate increase attributed to this expense likely would be too small to attract state regulators' scrutiny, and policyholders would not notice the additional cost. Small insurers might be less able to absorb the cost and, consequently, more likely to pass the cost to their policyholders, according to one stakeholder.

Declines in policyholder participation from recoupment surcharges or alternative funding options could vary by industry and other factors. For example, any reduction in participation among commercial property builders likely would be constrained by lender requirements to maintain terrorism risk coverage as a condition of financing development projects. Furthermore, state requirements to maintain workers' compensation coverage (from which terrorism risk cannot be excluded) generally could moderate reductions in policyholder participation.

Our estimates indicated that longer time frames and broader application could help mitigate the potential adverse effects of a recoupment surcharge on policyholders, especially since the 2015 TRIA reauthorization increased the potential amount of funds that the government could collect through mandatory recoupment. Mandatory recoupment deadlines (ranging from 1 year and 9 months to 6 years and 9 months after a terrorism event) were introduced in the 2007 reauthorization of TRIA and continued in the 2015 reauthorization. Longer mandatory recoupment collection periods could result in smaller price increases and lesser impacts on affordability than shorter time frames. Table 5 shows the differences that the collection time (determined by the date of an event) can make in the required annual collection amount and the potential increase in premiums following a terrorism event resulting in $40 billion of losses. We estimated that the mandatory recoupment that would follow an event of that size occurring in 2017 could lead to a larger increase in TRIA-eligible commercial property/casualty premiums than an event of equal size occurring in 2019. For example, when using the percentage load method, we estimated prices would increase about 6 percent. When using the revenue target method, we estimated that prices on average could increase about 17 percent. These estimates are about two times the increase resulting from the case with a longer collection time for an event occurring in 2019.
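To make the method arithmetic concrete, the sketch below implements our reading of the two pricing methods described above. It is an illustration, not the model behind the report's estimates: the linearized revenue-target formula, the function names, and the recoupment inputs are our assumptions, chosen to roughly reproduce the figures cited in this section.

```python
# Illustrative sketch of the two pricing methods (our reconstruction,
# not the model used to produce the report's estimates).

AVG_ELASTICITY = -0.63  # average price elasticity of demand (app. I)

def percentage_load(annual_amount, base, e=AVG_ELASTICITY):
    """Insurers fully pass the surcharge through as a rate increase."""
    price = annual_amount / base          # load on premium rates
    return price, e * price               # price and participation changes

def revenue_target(annual_amount, base, e=AVG_ELASTICITY):
    """Insurers raise prices so the net gain in direct earned premiums
    reaches the target; linearized, net gain ~= price * (1 + e) * base."""
    price = (annual_amount / base) / (1 + e)
    return price, e * price

# Premium-like charge of ~$1.6B on ~$10B of terrorism risk premiums:
for method in (percentage_load, revenue_target):
    p, q = method(1.6e9, 10e9)
    print(f"{method.__name__}: price {p:+.0%}, participation {q:+.0%}")
# -> about +16%/-10% and +43%/-27%, in line with the report's figures.

# Collection-window effect for a hypothetical $24B mandatory recoupment
# spread over a ~$210B TRIA-eligible base (window lengths from the report):
for years in (1.75, 4.75):
    p, q = percentage_load(24e9 / years, 210e9)
    print(f"{years}-year window: annual surcharge {p:+.1%}")
# -> ~6.5%/yr vs ~2.4%/yr: shorter windows concentrate the price impact.
```

Under these stylized assumptions, shortening the collection window for the same recoupment amount proportionally raises the annual surcharge rate, consistent with the table 5 pattern described above.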
See appendix V for more information on this analysis. According to Treasury officials, minimizing disruption to the terrorism risk insurance market and maintaining affordability are key considerations for how they would determine a recoupment surcharge amount under TRIA's mandatory recoupment provision. They said that longer recoupment time frames could give them more flexibility in considering affordability than shorter time frames.

Our analysis also indicated that designing a federal charge for terrorism risk insurance to apply to a broad group of policyholders could mitigate potential price increases. Specifically, price increases could be significantly smaller if the charge were imposed on insurers that could spread the cost among a wide range of policyholders. For example, if insurers spread the charge among policyholders with TRIA-eligible lines, when using the percentage load method, we estimated that prices would increase by 0.8 percent and participation on average could decrease by 0.5 percent. When using the revenue target method, we estimated that prices on average could increase by 2.1 percent and participation on average could decrease by 1.3 percent. Similarly, if insurers spread the federal charge among policyholders of all property/casualty lines, when using the percentage load method, we estimated prices would increase by 0.3 percent and participation on average could decrease by 0.2 percent. When using the revenue target method, we estimated that prices on average could increase by 0.8 percent and participation on average could decrease by 0.5 percent. These changes were significantly smaller than the changes we estimated from a charge imposed only on policyholders with terrorism risk coverage. In that scenario, when using the percentage load method, we estimated that terrorism risk coverage prices would increase by about 16 percent and participation on average could decrease by about 10 percent. When using the revenue target method, we estimated that prices on average could increase by 43 percent and participation on average could decrease by 27 percent. See appendix V for more details. Although applying a federal charge to a larger group of policyholders could reduce potential market disruptions by decreasing the impacts on price and participation, it also could create a cross-subsidy and might not be equitable.

Because restricting the use of assets could hamper risk management, insurers likely would be more affected by a government requirement for the type of set-aside involving segregated assets for potential terrorism losses than by the other set-aside approaches. Insurers, state regulators, and NAIC officials we interviewed stated they were concerned about requiring segregated assets for potential terrorism losses because the funds might not be available for other types of losses. Industry stakeholders, including insurers, state regulators, and representatives from insurance trade associations, also stated that having the flexibility to use funds for a variety of purposes is an important tool for managing the risks of their various lines of business and related business operations. If assets were required to be restricted to terrorism losses, assets that otherwise would be available to cover losses from any line of business would be reduced, and insurers might need to raise additional capital to meet external requirements or internal assessments of capital adequacy.
Insurers, state regulators, and NAIC officials also said that while segregated assets might help ensure solvency following a terrorism event, they could decrease the likelihood of solvency following more common events, such as natural catastrophes, and representatives of an association said that the impact could be greater on insurers with less capital. The set-aside approaches we reviewed that did not involve segregated assets (loss reserves or risk-based capital requirements for potential terrorism losses) would continue to allow insurer loss reserves or capital to be available to pay claims for other lines of insurance, which could mitigate the potential adverse effects on insurers. For example, under the NAIC proposal for recording reserves for future potential natural catastrophe losses, loss reserves would not be limited to one type of loss. Rather, the reserves would be available to pay claims for any type of catastrophic loss—man-made or natural. Under the NAIC proposal, the reserves also would be available to insurers to pay claims to protect their solvency if needed, subject to certain criteria. Another approach we reviewed—establishing separate capital requirements for terrorism losses—would not limit the use of insurer capital. NAIC has been implementing a similar approach for natural catastrophe risk to better measure an insurer's ability to remain solvent following a catastrophic loss.

Recoupment and alternative funding options result in federal fiscal exposure; however, some factors could mitigate the exposure. First, the federal government risks significant explicit fiscal exposure after a terrorism event because it is statutorily required to make payments (reimbursements to insurers) if losses exceed insurers' deductibles following a certified event under TRIA. This exposure exists until the federal share of losses is recouped. However, if Treasury opted not to fully exercise the program's recoupment provisions, an implicit fiscal exposure would remain. By statute, the federal government must recoup any mandatory portion of losses following a terrorism event but can choose not to recoup any discretionary portion, which represents a fiscal exposure. Much of the estimated recoupment amount resulting from larger, catastrophic losses would be considered discretionary under TRIA's provisions and could exceed $60 billion. Because the program mandates a 3 percent cap on the increase of premium rates in TRIA-eligible lines for the discretionary portion of recoupment, we estimated that in an extreme case Treasury might need to collect a premium surcharge for as long as 28 years to fully recoup the discretionary portion of losses. The effects of a protracted period of premium surcharges could be a factor in Treasury's determination of whether to pursue discretionary recoupment in such a scenario. In addition, the weakened economic environment that followed the September 11, 2001, terrorism events suggests that an event large enough to trigger TRIA likely would itself be accompanied by a weakened economy. As such, one insurer questioned whether the federal government would follow through with mandatory or discretionary recoupment. As we previously discussed, mandatory recoupment could lead to large price increases, especially over shorter collection time frames, which could affect the political will to carry out recoupment.
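The split between mandatory and discretionary recoupment can be sketched as follows. This is our reading of TRIA's recoupment provisions; the 140 percent collection multiplier and the aggregate retention figure are statutory parameters not restated in this section, so treat them as assumptions.

```python
# Sketch of the mandatory/discretionary recoupment split (our reading of
# TRIA; the 140% multiplier and retention amount are assumed parameters).

def recoupment_split(federal_share, insurer_losses, aggregate_retention):
    base = min(max(aggregate_retention - insurer_losses, 0.0), federal_share)
    mandatory_surcharges = 1.4 * base   # statute collects 140% of the base
    discretionary = federal_share - base
    return mandatory_surcharges, discretionary

# Hypothetical $100B event: insurers retain ~$35B; retention ~$37.5B (2019):
m, d = recoupment_split(65e9, 35e9, 37.5e9)
print(f"mandatory surcharges ${m/1e9:.1f}B, discretionary ${d/1e9:.1f}B")
# -> the discretionary portion exceeds $60B, as discussed above.
```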
Our analysis indicates that the program could provide an economic subsidy to the extent that the federal government is not expected to recoup all of its losses. As we explain in appendix VI, we assess the presence of a subsidy on the basis of whether and to what extent the federal government would be expected to recoup its losses, regardless of whether a terrorism event or recoupment occurred. Using various assumptions and taking into account several limitations, we analyzed the potential size of any subsidy by estimating the annual forgone federal terrorism risk insurance premiums. We estimated the annual economic subsidy could be as high as $1.6 billion if the government were not expected to recoup any of its losses. However, if the federal government were expected to recoup all of its losses as described in TRIA, the economic subsidy would be $0 (no subsidy). For more information on our subsidy analysis, see appendix VI.

Second, a premium-like charge also could affect federal fiscal exposure, but including a recoupment provision could mitigate such effects. A premium-like charge could reduce federal fiscal exposure if sufficient funds were collected to pay for losses. However, if terrorism losses exceeded funds collected from the charge and no recoupment provision was in place, the federal government would need to cover the difference. For example, it could take many years to accumulate sufficient funds to cover potential losses, and if the federal share of losses from a terrorism event exceeded the collected funds, the financial exposure to the federal government could be higher than under the current program in the absence of recoupment. However, the federal fiscal exposure could be mitigated if recoupment were to remain a part of the program, providing a mechanism by which the federal government could recover losses that exceeded the funds collected from a premium-like charge.

Third, implementing a premium-like charge could result in increased prices for terrorism risk or TRIA-eligible insurance, which in turn could lead to decreased participation in private insurance and, therefore, fewer private funds available to fund recovery. However, a backstop charge (with recoupment to cover the actual federal share of losses) might result in smaller price increases for policyholders and have less effect on affordability and policyholder participation.

Finally, we found that implementation of some terrorism set-aside approaches likely would have minimal impacts on federal fiscal exposure due to losses, and others could increase federal fiscal exposure by allowing tax deductions before a terrorism event occurs. Set-aside approaches that do not involve segregated assets might have minimal impact on federal fiscal exposure. However, to the extent that a segregated assets approach could be designed to cover the federal share of losses, it could reduce federal fiscal exposure. Additionally, officials from NAIC, Treasury, and state regulators expressed concerns that insurers could overstate any pre-event loss reserves or segregated assets for terrorism risk in an attempt to reduce their tax exposure, which could increase federal fiscal exposure. The overall net impact of a segregated assets approach is unclear.

Alternative funding options could represent a major change to the federal role in the terrorism risk market and entail administrative costs.
Under the current program, Treasury generally has a passive role in the insurance market but becomes active following a terrorism event. With a federal charge, the government would potentially take on administrative responsibilities (such as collecting and managing funds) before an event occurred. In an April 2016 report on terrorism risk insurance programs in other countries, we found that the costs for carrying out these responsibilities were generally a small percentage of the programs' overall income. Similarly, the administrative costs to the federal government of implementing a federal charge could be low and funded by the charge collected. A set-aside approach also could involve some administrative costs for data collection. As NAIC officials pointed out, the data required to implement a set-aside requirement—for example, to reliably estimate a set-aside target amount—do not currently exist. In addition, in a segregated assets set-aside approach—which would require insurers to set aside funds that could be used for both their share and the federal share of terrorism losses—the federal government would need to determine the appropriate amount of the segregated assets that would be held for the federal share of losses.

Pricing a premium-like charge could present significant challenges for the federal government, partly because Treasury has limited data on collected terrorism premiums, amounts of coverage, or location of coverage. In addition, setting an appropriate amount to charge or an appropriate target amount to collect could present significant challenges to Treasury because of the difficulty of estimating the magnitude and frequency of terrorism events. By using data and modeling, other nations' terrorism risk insurance programs have developed methods to address limitations related to estimating the frequency and severity of terrorism events. Specifically, some programs use data on premiums collected, coverage amounts, and location in pricing the charge under their programs. For example, some programs base their charges on the amount of coverage or the terrorism risk premium that is charged by the primary insurers. Such programs also use models with specific terrorism event scenarios and frequencies.

Alternatively, a backstop charge might require less data and be less challenging to implement. For example, the United Kingdom Treasury collects a backstop charge to reflect the potential cost of capital, which may present fewer data challenges than collecting a risk-based charge. Specifically, as we found in our April 2016 report on terrorism risk insurance programs in other countries, the program in the United Kingdom annually pays a backstop charge to the United Kingdom Treasury for access to an unlimited line of credit should it be needed to cover policyholder claims. According to a United Kingdom Treasury official, this charge is intended to reflect the potential cost of capital to the government for backing this liability.

Additionally, in pricing a premium-like charge, the government would face decisions about which participants to charge and would need to consider whether the charge was affordable. For example, our analyses indicated that the government would need to charge a larger set of policyholders than those purchasing terrorism risk coverage (similar to its current recoupment methodology) to avoid potentially steep percentage increases in prices.
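As a rough illustration of this base-broadening point, the sketch below derives an annual charge from the rate-on-line assumption in appendix VI and spreads it across the three policyholder bases used in our analysis. The rate, fee, elasticity, and premium bases come from the report; the federal-share input is illustrative, and the code is our reconstruction rather than the analysis itself.

```python
# Sketch: derive a premium-like charge and spread it across bases
# (rate-on-line, fee, and bases from the report; federal share illustrative).

RATE_ON_LINE, FEE, ELASTICITY = 0.025, 0.05, -0.63
federal_share = 61e9                      # illustrative maximum federal share
charge = federal_share * RATE_ON_LINE * (1 + FEE)   # ~$1.6B per year

bases = {  # prior-year direct earned premiums, 2015 (app. I)
    "terrorism risk coverage": 10e9,
    "TRIA-eligible lines": 205e9,
    "all property/casualty lines": 572e9,
}
for name, base in bases.items():
    load = charge / base
    print(f"{name}: price {load:+.1%}, participation {ELASTICITY*load:+.1%}")
# -> roughly +16%, +0.8%, and +0.3%, echoing the estimates cited earlier.
```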
Furthermore, as one stakeholder stated, an affordable federal charge only on policyholders that purchased terrorism risk coverage would not go far in helping the federal government accumulate funds for its share of losses (because of the potential size of the federal share under TRIA). Pricing a premium-like charge that is equitable and provides adequate revenue could involve trade-offs between participation and covering expected losses.

We provided a draft of this report for review and comment to Treasury, including the Federal Insurance Office, and to NAIC. Treasury and NAIC provided technical comments, which we incorporated as appropriate. In addition, we provided relevant sections to NCCI, selected state programs (California Earthquake Authority; Florida Hurricane Catastrophe Fund; and Property/Casualty Insurance Security Fund, New York), and relevant government officials in selected countries (Austria, Australia, Canada, Finland, France, and Mexico) for their technical review. We incorporated the technical comments we received from these entities, as appropriate.

We are sending copies of this report to the appropriate congressional committees, Treasury, NAIC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII.

The objectives of our report were to examine (1) how insurers manage their terrorism risk exposure and price terrorism risk insurance; (2) the federal government's recoupment requirements and how the federal share of terrorism losses would be affected in different scenarios; (3) how alternative funding approaches could be designed and implemented; and (4) the potential effects of the approaches. To address these objectives, we reviewed the Terrorism Risk Insurance Act of 2002 (TRIA), the Terrorism Risk Insurance Extension Act of 2005, the Terrorism Risk Insurance Program Reauthorization Acts of 2007 and 2015, implementing regulations, and congressional records. We also reviewed prior GAO work on this topic. We interviewed officials from the Department of the Treasury (Treasury), the National Association of Insurance Commissioners (NAIC), the Congressional Budget Office, and the Congressional Research Service and reviewed relevant reports. We also interviewed, and reviewed reports from, academic researchers and several industry participants—including insurers, reinsurers, state regulators, representatives from insurance trade associations, a rating agency, and insurance and reinsurance brokers—to obtain information for all our objectives. Specifically, we obtained information from six insurers, four reinsurers, and four state regulators. In all interviews, we asked participants about practices under TRIA's current structure, the feasibility of the alternative funding options, the importance of key pricing objectives and set-aside design factors we identified, and the potential effects on different stakeholders of the alternative funding options and of recoupment under the current program. We initially contacted 12 insurers—7 from among the largest U.S.
commercial property/casualty insurers in TRIA-eligible lines of business, according to SNL Financial's insurance data, and 5 additional small and mid-sized insurers recommended by insurance brokers and trade associations. Due to scheduling challenges and a lack of response from some insurers, we ultimately interviewed 6 insurers, including 4 from among the largest in TRIA-eligible lines. We determined that the information we obtained from these 6 insurers was sufficient for the purposes of our reporting objectives.

To select reinsurers, we reviewed Treasury's 2014 reinsurance report (which listed the top 50 global reinsurers) and Marsh and McLennan Companies, Inc.'s (Marsh) 2014 terrorism report (which listed stand-alone property terrorism risk reinsurers and insurers involved in the terrorism risk insurance market). We selected the top 2 reinsurers from each report and obtained suggestions from an industry association.

To select state regulators, we identified states that are members of NAIC's Terrorism Insurance Working Group, have cities considered to be at high risk for terrorism, have top insurers headquartered in them, or were recommended by NAIC officials. From these states, we selected California, Illinois, Massachusetts, Rhode Island, and New York. Due to scheduling conflicts, we held interviews with four of the five state regulators, which we determined were sufficient for the purposes of our reporting objectives. Our selections did not represent the views and practices of other insurers, reinsurers, or states not included.

To describe current practices for managing terrorism risk exposure and pricing terrorism risk insurance, we interviewed selected insurers, NAIC and Treasury officials, brokers, and insurance associations about how insurers manage their terrorism risk, determine the terrorism risk premium, what that premium covers, how premiums are managed, and the extent to which insurers maintain funds to cover potential terrorism losses. We also reviewed NAIC guidance on terrorism risk premium disclosures for policyholders and information about insurance accounting standards and applicable insurance company tax laws. To describe how insurers and the federal government would pay for and recoup their share of losses, we reviewed laws and regulations related to how claims would be paid to policyholders and how insurers would be reimbursed for the federal share of losses.

To determine the extent to which the federal government could recoup its share of terrorism losses, we first conducted analyses of how losses would be shared between the federal government and insurers in various scenarios, using insurance market data as described below. For more information about how the government and insurers would share losses under TRIA, see appendix II. Second, using the program's recoupment structure, we analyzed how the federal share would be apportioned between mandatory and discretionary recoupment in various scenarios.

To examine methods the federal government could consider if it were to implement a federal charge for terrorism risk, we developed a pricing framework and interviewed industry participants. To develop the pricing framework, we adapted economic principles and concepts from our prior work on assessing user fees, other government-collected funds, and user-based taxes to develop a framework of four pricing objectives (promoting economic efficiency, equity, and revenue adequacy, and limiting administrative burden) and related characteristics.
We reviewed standard insurance pricing principles, such as actuarial standards of practice, but we did not rely on these standards to develop the framework because, from a statistical perspective, existing data on terrorism events are not sufficient to meet some of the basic principles of insurance theory. To validate our pricing framework, we obtained feedback on the four pricing objectives from stakeholders in the insurance industry, including insurers and reinsurers. We also interviewed insurers and state regulators to gain insight about the importance and feasibility of the pricing objectives in relation to developing a charge for terrorism risk and the extent to which trade-offs among the objectives might exist.

We also assessed seven selected state and federal catastrophe or insurance programs, as well as two foreign terrorism risk insurance programs, to observe the implementation of these objectives. The seven state and federal programs were selected to illustrate a range of approaches for structuring and collecting premiums and methods for managing and ensuring adequate funding is available to cover program costs. They also cover a variety of risks, including natural catastrophes, and have varying types of participants, such as borrowers, pensioners, or insurance policyholders, and varying sources of funding. We used publicly available information to make our assessments of the seven programs. We did not use all the programs to illustrate each pricing objective because some information was not publicly available, and some programs offered clearer examples than others. See appendix III for more information about the seven federal and state programs we reviewed. Based on our recent work on national terrorism risk insurance programs, we identified terrorism risk insurance programs in two countries, Australia and the United Kingdom, in which a charge is paid to the government for the benefit of a government backstop. We used documents and interviews from our prior work to observe the implementation of the pricing objectives in the charge component of the programs.

To examine approaches the federal government could consider if it were to require or provide incentives for insurers to maintain set-asides for potential terrorism losses, we reviewed prior GAO work on designing fees and selected programs or proposals to identify key design factors and implementation considerations (such as accounting practices and state laws) policymakers could consider if they implemented such an approach. We selected four proposals or current programs with set-aside approaches that illustrate variation among the design factors. For example, we selected some approaches that require participation and others that are voluntary. In addition, the approaches reflect different structures, including loss reserves (liabilities), insurers' levels of capital, and segregation of assets. We selected an NAIC proposal, Austria's terrorism risk insurance program, a combination of congressional proposals, and a catastrophe risk weight approach. Two of the four approaches applied to potential terrorism losses specifically, while the other two were for potential natural catastrophe losses. For the two natural catastrophe set-aside approaches, we consulted with relevant stakeholders about their application to potential terrorism losses and, for our work, found their application to potential terrorism losses appropriate.
To describe the practices, laws, and rules the federal government could take into account, we reviewed documentation on the selected approaches and sources describing relevant accounting standards and laws. We also interviewed insurers, reinsurers, and state regulators on the approaches and reviewed documentation on the process for making changes to accounting standards. See appendix IV for more information on the selected proposals and current programs with set-aside approaches. Appendix IV also includes information on selected countries that allow insurers to establish set-asides to cover future losses. The countries were identified for review through external outreach efforts with international entities, a literature review, and questionnaire and interview responses.

To assess the potential effects of recoupment of the federal share of losses, a federal charge for terrorism risk insurance, and terrorism set-asides on policyholders, insurers, the federal government, state regulators, and reinsurers, we interviewed market participants. Additionally, for recoupment and a federal charge, we quantified the potential effects on policyholder price, participation, and insurers' direct earned premiums. We also assessed the extent to which TRIA provides a subsidy and estimated the size of any subsidy. As we describe below, to conduct our analyses, we used U.S. property/casualty insurance market data, estimates of U.S. terrorism risk premiums, information on international reinsurance rates, models from various alternative funding approaches, and economic literature. We also describe inherent uncertainties related to our estimates and the informed assumptions we used in our analyses. See the remainder of this appendix for more details.

Using U.S. insurance market data and industry estimates: To determine the direct earned premiums associated with the TRIA-eligible insurance lines and the market share of subsets of insurers (top 4, top 10, top 20, and all) in 2014, we analyzed 2014 insurance data on direct earned premiums from SNL Financial. We used the top 4, top 10, top 20, and all TRIA-eligible insurers as proxies to represent different-sized premium bases. The direct earned premium associated with the insurers, rather than the number of insurers, is important because prior-year direct earned premium determines the aggregate insurer deductible. For example, the total direct earned premium for a different subset of insurers could equal the direct earned premiums of the top 4 insurers. To assess the reliability of SNL Financial's data, we reviewed prior GAO assessments of the data and performed electronic testing. We determined that the data used in this report were sufficiently reliable for the purposes of our reporting objectives.

To project the annual change in the size of the TRIA-eligible market from 2015 to 2018, we used estimates of terrorism risk premiums from 2004 through 2013 from A.M. Best. We found the data sufficiently reliable for this purpose. We calculated the annual change to be an increase of 2 percent. To estimate the percentage of TRIA-eligible premiums that was paid for terrorism risk insurance, we used estimates of the percentage of commercial property insurance premiums paid for terrorism risk insurance from Marsh and Treasury. We found the data sufficiently reliable for this purpose.
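The premium-base figures that recur in these analyses follow from simple projections, sketched below. The 2014 base, the 2 percent growth rate, and the 5 percent terrorism premium share come from the report; the projection formula itself is our reading of the method.

```python
# Sketch of the premium-base projections used throughout appendix I
# (2014 base and 2% growth from the report; formula is our reconstruction).

TRIA_ELIGIBLE_2014 = 201e9   # direct earned premiums (SNL Financial)
ANNUAL_GROWTH = 0.02         # estimated from A.M. Best data, 2004-2013

def projected_base(year):
    return TRIA_ELIGIBLE_2014 * (1 + ANNUAL_GROWTH) ** (year - 2014)

print(f"2015 TRIA-eligible base: ${projected_base(2015)/1e9:.0f}B")
print(f"terrorism risk premiums (~5% of base): "
      f"${0.05 * projected_base(2015)/1e9:.0f}B")
# -> about $205B and $10B, the 2015 bases used in the charge scenarios.
```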
We used peer-reviewed, published information on the price elasticity of corporate demand for insurance that we determined to be reliable and suitable for illustrating potential effects of recoupment and alternative approaches. Specifically, we used low and high premium elasticities of demand for commercial property/casualty insurance of -0.43 and -0.82, respectively. We calculated an average elasticity of -0.63 from the high and low values. We used the elasticity of demand for commercial property/casualty insurance rather than that for corporate demand for terrorism risk insurance, which ranges from -0.31 to -0.71, because most premium adjustments would be imposed on commercial property/casualty policyholders. Although we did not analyze variations in the effects of the options by location, size of the insurer, industry covered by the insurance, or the extent to which some policyholders might be required to purchase terrorism risk coverage, we used a range of elasticities that may capture some of these differences.

Estimating potential effects of recoupment: To show the potential upper range of effects of recoupment on TRIA-eligible policyholder price and participation, and insurers' direct earned premium after a terrorism event, we estimated the effects of (1) a large mandatory recoupment amount, (2) the shortest time frame for collecting the mandatory recoupment amount (which would result in an upper range effect), and (3) a large discretionary recoupment amount (to show the longest collection time frame).

To illustrate potential effects of a very large mandatory recoupment amount, we used an event resulting in $40 billion of insured losses under TRIA that occurred in 2019 (the year with the maximum industry aggregate retention) and that affected a small set of insurers—insurers with a prior-year premium base equal to that of the top four insurers of TRIA-eligible lines ($49 billion in 2018). To maximize potential market effects of a very large mandatory recoupment amount, we used the same event size and group of insurers (direct earned premiums of $47 billion in 2016), but an event date of 2017, because the effective collection time frame for mandatory recoupment is the shortest (1 year 9 months), assuming collection starts in the January after the event. To illustrate potential effects of a very large discretionary recoupment amount, we used an event size of $100 billion in 2019 affecting insurers with a prior-year premium base equal to that of the top 10 TRIA-eligible insurers—direct earned premium of $87 billion in 2018.

Estimating potential effects of a federal terrorism risk insurance charge: To estimate the potential effects of a federal charge for terrorism risk insurance on all property/casualty, TRIA-eligible, and terrorism risk policyholder price and participation, and insurers' direct earned premiums, we constructed annual federal charges using (1) reinsurance rates and (2) frequency models. We used the amount estimated in our size-of-subsidy analysis, as discussed below, to calculate the increase in premium in 2016 for three different policyholder bases (all property/casualty policyholders, with prior-year direct earned premium of $572 billion in 2015; TRIA-eligible policyholders, with prior-year direct earned premium of $205 billion in 2015; and policyholders making actual payments for terrorism risk insurance under TRIA, estimated as 5 percent of TRIA-eligible prior-year direct earned premiums, or $10 billion in 2015).
We used the maximum total government share of losses and assumed event frequencies of 20, 50, and 100 years to calculate the increase in premium for the three different policyholder bases (all property/casualty, TRIA-eligible, and terrorism risk premiums).

Potential effects of insurer terrorism set-asides: We did not quantify the potential effects of insurers' terrorism set-asides on TRIA-eligible policyholder price and participation, and insurers' direct earned premium, because the approaches we chose were not expected to require significant changes in these measures. We considered the following set-aside approaches:

The NAIC proposal specifies a set-aside buildup time frame of 20 years, with a targeted accumulation amount of $40 billion.

The Austrian program specifies a set-aside buildup time frame of 10 years and a maximum total reserve amount equal to the potential share of losses—an estimated $35 billion for all U.S. insurers.

One of the U.S. legislative proposals directs insurers to annually set aside 50 percent of terrorism risk premiums to cover future losses, but does not specify a reserve buildup time frame or a target reserve amount. We considered target reserve amounts of $43.6 billion (20 percent of estimated TRIA-eligible premiums in 2018) and $100 billion (the maximum for the total losses covered under the program).

Finally, although TRIA's current recoupment provision would remain in place under the terrorism set-asides option, we assessed recoupment and set-asides independently of each other.

Reflecting uncertainties in estimates of potential effects: While we calculated some illustrative estimates of potential market impacts, such numerical estimates are necessarily uncertain and speculative. None of the alternative funding options exist in the United States. Sources of uncertainty are explained below.

Recoupment has not been tested because the United States has not experienced a terrorism event large enough to have triggered TRIA. Furthermore, the methodology for setting post-event recoupment surcharges would be based on a number of factors and parameters of the specific terrorism event. Our analysis is limited by the lack of data on the current insurance market, particularly prices and the participation rate.

The reactions of insurers and policyholders in the TRIA-eligible insurance market to government actions (imposing a recoupment surcharge or a federal charge for terrorism risk insurance, or requiring insurers to establish terrorism set-asides) are uncertain. We researched and found reliable estimates for the elasticity of demand for commercial property/casualty insurance. However, we did not research other market dynamics related to insurers' pricing behavior in response to recoupment or an alternative funding requirement, or insurers' underwriting capacity following a terrorism event. We also did not research changes in businesses' need for terrorism risk or property/casualty insurance before or after a terrorism event.

In the case of recoupment, the terrorism risk and TRIA-eligible insurance market reactions may be more uncertain because a terrorism event could affect the demand or supply of insurance, and thus affect premium rates and insurers' volume of business. While policyholders' demand for terrorism risk insurance may increase following a terrorism event, demand for TRIA-eligible insurance likely would be less affected.
At the same time, although TRIA requires insurers to make terrorism risk insurance available, a terrorism event could lead to price increases by participating insurers based upon their reassessment of the likelihood of future events, and thus depress demand. Such insurers may be reluctant to devote additional capital to potential terrorism losses. However, industry participants have indicated that a terrorism event could increase the availability of terrorism risk coverage because, as premiums increased, new sources of capital could enter the market.

To reflect the uncertainty of the process and outcomes, we used two methods (the percentage load method and the revenue target method) that rely on different hypothetical assumptions about insurers' pricing strategies in reaction to recoupment or a federal terrorism risk insurance charge. (See table 6 for a comparison of the methods.)

Percentage load method. We assume that insurers fully pass through to policyholders a percentage increase in premium rates (that is, the load) based on the annual collection or target amount and insurers' aggregate direct earned premium. For example, if the annual collection amount resulted in a 3 percent recoupment surcharge on premium rates, we assumed that insurers would fully pass through that percentage increase to policyholders and raise premium rates (by 3 percent). This method would result in larger effects than the revenue target method in the form of insurer loss of direct earned premium.

Revenue target method. We assume that insurers have the incentive and would attempt to increase prices to collect, at the margin, as much additional direct earned premiums as possible up to the annual collection or target amount. Such price increases would be above and beyond a full pass-through, to make up for some or all of the loss of net revenue resulting from decreased policyholder participation in reaction to the price increase. For example, if Treasury annually recouped $6 billion from insurers, we assumed insurers would raise prices to collect, at the margin, an additional $6 billion in direct earned premiums. This method would result in larger effects than the percentage load method in the form of premium increases and decreases in participation rates.

Generally, our analyses using the revenue target method resulted in a larger impact for policyholders in terms of prices and participation, while the percentage load analysis resulted in a larger impact for insurers in terms of direct earned premiums.

Assumptions related to analyses of potential effects: Due to the prospective nature of this analysis, we made a number of assumptions. When necessary, we assumed market reactions to the changes in insurers' prices due to recoupment or alternative funding options, such as the sensitivity of participation in TRIA-eligible insurance to price changes (price elasticity) and the pricing behavior of insurers. For all options, we assumed prices would increase. For each step in our analysis, we assumed that only the variables of interest changed, and all other variables remained constant during the collection or build-up period. For example, for recoupment we ignored other dynamics that could occur in the TRIA-eligible insurance market due to a terrorism event, such as policyholders or insurers exiting or entering the market and price changes not directly related to terrorism risk. We assumed that a terrorism event or an additional terrorism event would not occur during the collection or build-up period.
We assumed that insurers’ actions primarily would be to collect the federal recoupment surcharges or charge for terrorism risk insurance. In all cases, we used SNL Financial’s direct earned premium data to determine the size of the property/casualty insurance market and submarkets (see app. II for further details). We assumed that the baseline annual total direct earned premiums for commercial property/casualty insurance and the market share of subsets of insurers remained unchanged for the implementation period and after a terrorism event during the recoupment period. We assumed that the current average price elasticity of -0.63 remained constant over the range of price increases considered. We assumed insurers would impose any increase in the first year of the collection or build-up period and obtain the annual cost from the increase in direct earned premiums. In the second and the remaining years of the collection or build-up period, we assumed insurers would not adjust prices further since the annual cost would remain constant, and insurers would collect the same new direct earned premiums as in the first year. Assessing the presence and size of a subsidy: To evaluate the extent to which insurers and policyholders receive an economic subsidy under TRIA, we reviewed and synthesized literature on government subsidies taking into account the program’s recoupment feature. We determined that a subsidy exists in the program to the extent that the federal government is not expected to recoup all its losses. To estimate the potential annual size of the subsidy, we estimated forgone federal premiums for the federal share of losses. Because we could not determine premiums using an actuarial method, we used reinsurance rates paid by other national terrorism risk insurance programs. We were told by the broker of many reinsurance deals in other national terrorism risk insurance programs that the rates were fairly stable across countries and generally ranged from 2 percent to 3 percent of the amount of coverage. We use a rate-on-line (the ratio of premium paid to loss recoverable in a reinsurance contract) of 2.5 percent of reinsurance coverage purchased, assumed the federal government purchased coverage for its maximum annual losses under TRIA, and added 5 percent as a collection fee. We performed calculations in scenarios in which the government is expected to (1) recoup neither mandatory nor discretionary recoupment amounts, and (2) only recoup mandatory amounts. Specifically, to determine the maximum annual losses under TRIA, we used SNL Financial’s insurance market data as described above and modeled the maximum terrorism event size ($100 billion in insured losses) in 2016 and assumed insurers with an aggregate premium base equal to the top 20 insurers were affected by the event. We estimated the portion of federal losses that would be subject to mandatory and discretionary recoupment. To estimate the forgone federal premium amount, we multiplied the maximum federal loss amount by the reinsurance rate. For more information on our subsidy analysis including limitations and assumptions, see appendix VI. We conducted this performance audit from January 2015 to January 2017, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Under the Terrorism Risk Insurance Act (TRIA), the federal government and insurers potentially share losses from a certified terrorism event if total losses exceed the program trigger ($120 million in 2016) and losses exceed any individual insurer's deductible. Each insurer's share of losses is the lesser of its actual losses or the sum of its deductible (measured as 20 percent of its previous year's direct earned premium in TRIA-eligible lines) and its co-share portion (16 percent of losses exceeding the deductible amount in 2016). Assuming an insurer has satisfied its deductible, its losses are capped once the program cap ($100 billion) on all losses (insurer and federal) has been reached. The federal share of losses is the difference between the total losses (or the $100 billion program cap, if smaller) and the sum of insurers' losses. Some recent program changes shift a greater share of losses from the federal government to insurance companies over time, such as changes to the program trigger and the insurer co-share.

To illustrate a range of potential terrorism loss scenarios for insurers and the federal government, we analyzed terrorism events affecting insurers with aggregate premium bases equal to those of the top 4, top 10, top 20, and all insurers with TRIA-eligible lines of insurance. Using SNL Financial's data, we determined the direct earned premiums and market share earned by the top 4, top 10, top 20, and all insurers in 2014 in TRIA-eligible lines of business. As figure 7 shows, insurers earned $201 billion in direct earned premiums for TRIA-eligible lines in 2014, according to SNL Financial's data, and the four insurers with the most direct earned premiums from TRIA-eligible lines of insurance earned 22.5 percent of such premiums.

Using these subsets of insurers, we estimated the insurer and federal shares of terrorism losses under single-event scenarios, varying by size of event (losses of $5 billion, $25 billion, $40 billion, $50 billion, $75 billion, and $100 billion) and year of event (from 2016 to 2019). We estimated the insurer deductible for each insurer group by multiplying their direct earned premium by 20 percent. If the insurers' deductible was greater than or equal to the loss total, there was no insurer co-share and no federal share. If the insurers' deductible was less than the loss total, we estimated the insurers' co-share by subtracting the insurers' deductible from the loss total and multiplying the result by the insurers' co-share percentage. The estimated federal share of losses is the difference between the total loss and the insurers' share. Figure 8 illustrates an example of loss sharing under TRIA.

The relative size of the federal share of losses depends on the amount of insured losses from the terrorism event and the direct earned premium of insurers affected, as shown in table 7. The federal share of losses would be greater in events with more insured losses. For example, in the scenarios in which affected insurers had an aggregate premium base equal to that of the top 20 insurers, the federal share of losses would increase from $1.9 billion in the case of an event with $25 billion in losses to $64.9 billion in an event with $100 billion in losses. This shows that the government plays a greater role in more catastrophic events, consistent with the manner in which the program is structured.
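A minimal sketch of this loss-sharing computation, using the 2016 parameters given above, follows. The example inputs approximate the top-4 scenario; table 7's published figures may differ slightly due to rounding.

```python
# Sketch of single-event loss sharing under TRIA (2016 parameters above;
# example inputs approximate the top-4 scenario in table 7).

def tria_shares(total_loss, prior_year_dep, co_share=0.16, cap=100e9):
    loss = min(total_loss, cap)                   # program cap on all losses
    deductible = 0.20 * prior_year_dep            # 20% of prior-year DEP
    insurer = min(loss, deductible + co_share * max(loss - deductible, 0.0))
    federal = loss - insurer
    return insurer, federal

# $25B event; affected insurers' prior-year TRIA-eligible DEP ~ $45B:
insurer, federal = tria_shares(25e9, 45e9)
print(f"insurer share ${insurer/1e9:.1f}B, federal share ${federal/1e9:.1f}B")
# -> about $11.6B and $13.4B, close to table 7's $13.2B federal share.
```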
Additionally, the share of losses the government sustains depends on the aggregate TRIA-eligible direct earned premium of the insurers with losses. Specifically, the federal share of losses is smaller when losses are shared among insurers with larger aggregate premium bases. For example, in a $25 billion loss scenario, the government share of losses would be $13.2 billion if the affected insurers had a premium base equal to that of the top 4 insurers and $1.9 billion if the affected insurers had a premium base equal to that of the top 20 insurers. The tables in this appendix provide general information on select state and federal insurance or insurance-like programs that address the risks of natural catastrophes and severe financial conditions, such as earthquakes and home foreclosures, respectively. These programs illustrate a variety of approaches for pricing and managing federal charges that could provide insight on how a charge for terrorism risk insurance could be designed. For example, policymakers could consider how aspects of these programs' participation requirements, inputs used in setting charges, additional funding mechanisms, and oversight could apply to a federal charge for terrorism risk insurance. Selected programs are described below. California Earthquake Authority: A public entity that manages the privately funded program and operates as a primary insurer to provide catastrophe earthquake insurance to residential property owners and renters through participating insurance companies. Participating insurers must offer earthquake insurance to their residential property insurance policyholders, such as those with homeowners/fire insurance. Some of the fund's risk is ceded through reinsurance, which is financed through direct purchase of reinsurance or through capital markets using a special purpose reinsurance entity specifically designed to purchase reinsurance. Insurers apply to participate in the California program and must submit data for computer modeling to determine the insurer's potential earthquake loss. Newly participating insurers whose exposures are higher than the Authority's normal risks for insurers of similar size may be charged an annual surcharge as a condition of participation. The Authority reexamines these exposures each year and removes the surcharge when the new insurer's exposures closely match the Authority's normal exposure base. Federal Crop Insurance Corporation: This program was first authorized in 1938 to alleviate the economic distress caused by crop failures during the Dust Bowl era. The program helps farmers manage the risks inherent in farming by allowing them to insure against losses caused by poor crop yields, declines in prices, or both. Farmers can insure against losses on more than 100 crops, including the five major crops of corn, soybeans, wheat, cotton, and grain sorghum. Approved private insurance companies share a percentage of the risk of loss and opportunity for gain on each policy. The federal government encourages farmer participation by subsidizing premiums and is the primary reinsurer to approved insurers. Florida Hurricane Catastrophe Fund: A tax-exempt trust fund created by the State of Florida to ensure ongoing reinsurance capacity to insurers for catastrophic wind losses from hurricanes and foster an affordable wind insurance market in Florida. Under this program, insurers pay premiums and are reimbursed for a portion of their losses.
Mutual Mortgage Insurance Fund: Title II of the National Housing Act, enacted in 1934, authorized a single-family mortgage insurance program and established the Mutual Mortgage Insurance (MMI) Fund to fund it. The program allows single-family homes to be purchased with small down payments and long-term mortgages. The Federal Housing Administration (FHA), through the MMI Fund, insures lenders against loss from defaulted loans. The fund is financed primarily through premiums paid by borrowers and the proceeds of foreclosed homes. FHA's single-family mortgage insurance program has provided mortgage credit to families of low and moderate income not adequately served by the conventional private mortgage market. National Flood Insurance Program: The program makes federally backed flood insurance available to property owners in communities that participate in the program. Communities participate by adopting and enforcing floodplain management regulations designed to prevent and mitigate the effects of flooding. The Federal Emergency Management Agency, an agency of the Department of Homeland Security, administers this program. Pension Benefit Guaranty Corporation: A wholly owned government corporation established to insure the pension benefits of participants in and beneficiaries of private-sector defined benefit plans; the corporation operates a single-employer program and a multi-employer program. Under both programs, plan sponsors pay per-participant flat premiums. In addition, under the single-employer program, a plan sponsor pays a variable rate premium based on its plan underfunding. Property/Casualty Insurance Security Fund, New York: This fund receives premiums from insurers doing business in New York, based on each insurer's direct written premiums on policies it writes. This fund pays certain insurance claims of insolvent insurance companies when such payments are allowed in accordance with New York Insurance Law. The payments are subject to policy limits and a statutory cap. The superintendent of New York's Department of Financial Services is the administrator of the fund. Table 8 provides information about the type of individual or business entity that participates in each program, indicates whether participation in the program is mandatory, and briefly describes the benefit payment transaction. Table 9 briefly describes upfront charges, other upfront funding sources, and post-event funding sources for each program, as well as statutory appropriation requirements. Information on these charges includes general information on the key inputs used in setting the level of or structuring the charge. The table also presents various statutory funding mechanisms. Although this table provides primary revenue sources, it is not intended to be an exhaustive list of each program's revenue sources. Table 10 provides examples of who—program staff or third parties—performs the day-to-day operations of key functions and oversight for each program. The table highlights certain large activity components, such as issuing policies, collecting premiums, and servicing policies, and is not intended to provide an exhaustive list of all operational activities. The table also provides information about the oversight approach each program uses to help ensure its participants report accurate information and pay the correct charges. Finally, the table highlights the reviews or assessments each program performs to determine the adequacy of its charges and the financial viability of its fund.
This appendix provides an overview of U.S. insurance accounting practices, in-depth information about identified domestic approaches, and examples of set-aside approaches used in other countries. The current U.S. insurance accounting standards (statutory accounting principles, or SAP) permit insurers to establish loss reserves only after an event has occurred, and U.S. insurers have the flexibility to manage their capital to cover unexpected or catastrophic losses across all lines of business. Although insurers may consider terrorism risk exposure in their assessments of the adequacy of capital, they do not segregate assets that are restricted for potential terrorism losses or establish loss reserves for events that have not yet occurred. Loss reserve. For this report, we define a loss reserve as the company's estimate of amounts needed to cover indemnity payments that will come due on policies already written for losses from events that have already occurred and the administrative expenses of dealing with the associated claims. Loss expenses related to increases in loss reserves reduce an insurer's taxable income. Such liabilities are typically the largest single liability on an insurer's balance sheet. Capital. For this report, we define capital as the excess of an insurance company's assets above its liabilities. Capital generally is not segregated for specific purposes. It provides an insurer a cushion against insolvency for any unexpected or underestimated losses. For example, if the recorded loss reserves are insufficient, the insurer's capital is available to pay claims. Insurers are generally free to manage their capital as long as they satisfy external solvency and liquidity requirements and internal assessments of capital adequacy. Insurers may also use their capital to expand their business. Accounting standards and financial reporting. Insurers must report their financial holdings on an individual legal entity basis to the regulator in their state of domicile, using statutory accounting principles of the National Association of Insurance Commissioners (NAIC). According to NAIC, SAP are designed to assist state insurance departments in the regulation of the solvency of insurance companies. The ultimate objective of solvency regulation is to ensure that policyholder, contract holder, and other legal obligations are met when they come due and that companies maintain capital and surplus at all times, in such forms as required by statute, to provide a margin of safety. In addition to SAP, insurance groups may issue audited financial statements using U.S. generally accepted accounting principles (GAAP), which in the United States are promulgated by the Financial Accounting Standards Board and are designed to provide decision-useful information to investors and other users of financial reporting. SAP stress measuring an insurer's ability to pay claims in the future. SAP and GAAP recognize certain items differently and therefore may result in different capital and net income amounts. Accounting standards for recording insurance liabilities state that insurers may create a loss reserve only for a covered event that has occurred and for which the cost of the event is estimable. No liability exists without the occurrence of a covered event. Unless otherwise noted, references in this report to accounting for or recording liabilities refer to SAP. Tax deduction.
Federal tax laws allow tax deductions for an increase to loss reserves that results from incurred losses for events that occurred during the period. Risk-based capital requirements. State regulators require insurance companies to maintain specific levels of capital to continue to conduct business. Regulators determine the minimum amount of capital appropriate for an insurer to support its overall business operations, taking into consideration its size and risk profile. All state regulators have adopted NAIC's Risk-Based Capital for Insurers Model Act and also use formulas that NAIC has developed to establish a minimum capital requirement based on the types of risks to which a company is exposed. NAIC has separate models for different lines of insurance. Assets. For this report, we define assets as the resources that contribute to an entity's future net cash flow and that an entity might use to pay its debts. Insurers' assets are available for potential covered losses and generally are not segregated or restricted for limited uses. In some instances, however, an insurer's assets may be segregated or restricted for specific purposes, such as holding assets for collateral. Liabilities. For this report, we define liabilities as present obligations to transfer assets or provide services to other entities in the future as a result of past transactions or events. For further details on the domestic approaches described in the report, see the following. NAIC catastrophe reserve proposal. According to a study on the NAIC catastrophe reserve proposal, the proposal's design includes voluntary participation, a specific formula for an insurer to calculate its annual set-aside amount to cover catastrophic losses, and a loss reserve structure for catastrophic events that have not yet occurred. The proposal was developed to address constraints faced by insurance companies when catastrophes occur and pose significant challenges to the economy. Under the proposal, a reserve cap for each participating insurer would be calculated with a formula based on each insurer's net premiums written on qualifying business lines. Each insurer participating on a voluntary basis would determine the amount to set aside each year in a catastrophe reserve over a 20-year time frame. The proposal would establish a target amount of $40 billion across the property/casualty insurance industry, which was based on written premiums in 1999. Individual companies' contributions would occur over a 20-year period. Each participating insurer's set-aside would be structured as a separate liability on its balance sheet (distinct from other loss reserves and unearned premium reserves) without specific segregation of assets. The proposed federal tax treatment would provide for tax deductions over a period of 20 years for contributions into the set-aside. According to the proposal, an insurer could use its reserve to cover its share of losses for multiple perils, among other uses. The reserve would be used primarily for catastrophic losses resulting from multiple perils such as wind, hail, or earthquake, and the drawdowns would be subject to criteria designed to protect solvency and limit use of the reserve to catastrophic losses only.
Insurers also could use the reserve when the reserve balance exceeded the reserve cap or if their domiciliary state commissioners required them to release the catastrophe reserve as a rehabilitation, conservation, or liquidation measure or to forestall insolvency. Catastrophe risk weight. In 2013, NAIC began testing a catastrophe risk weight (a weighted measure included in assessments of the adequacy of insurers' capital) to better measure an individual insurer's ability to remain solvent, taking into account the insurer's earthquake and hurricane risk exposures. When fully implemented, the catastrophe risk weight will be incorporated into an insurer's risk-based capital requirements, and the associated capital will have no limitations on its use. All insurers will include an estimate of their hurricane and earthquake exposure as part of their risk-based capital calculations. The catastrophe risk weight was developed from historical data on catastrophe losses. An insurer enters its individually calculated modeled losses into a formula to determine a target amount of capital to maintain against these two exposures. However, the company's available capital is not limited to covering losses from hurricanes and earthquakes and could be used for any purpose, such as paying claims for other types of catastrophic losses. Hurricane and earthquake risks were included as part of the catastrophe risk weight because they were the two perils most likely to cause losses that could significantly affect an insurer's solvency and because the models for these risks were considered advanced enough to estimate the effect of such losses on insurers' business. NAIC officials leading this effort also told us that they plan to consider similar risk-based capital weights for other risks, including terrorism. Selected aspects of legislative proposals. Two proposals by House members that we reviewed included provisions to establish set-asides with segregated insurer assets for terrorism losses to help stabilize the marketplace following a terrorist event. Key aspects of one or both of the proposals included (1) requiring insurers that sold terrorism coverage to participate, (2) providing for a specific target annual contribution (a percentage of the premiums collected from policyholders for terrorism coverage to be set aside), and (3) specifying the use of funds for insurer and federal shares of terrorism losses. Each insurer participating in the Terrorism Risk Insurance Act (TRIA) program could establish and maintain a set-aside with segregated assets for terrorism losses (the TRIA Reserve Fund) in a fiduciary capacity on behalf of the Secretary of the Treasury. Other aspects of one or both proposals provide additional details on the structure of a set-aside. Specifically, each year an insurer would place 50 percent of the premiums collected from policyholders for terrorism coverage in a set-aside with segregated assets. The set-aside would be maintained in a segregated account, be held by the insurer on behalf of the Secretary of the Treasury, and be kept until the program terminated. Therefore, the premium income diverted to this set-aside likely would not be part of an insurer's taxable income, according to an insurer we interviewed.
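To illustrate the mechanics, the following minimal sketch shows how such a segregated set-aside would accumulate over time. Only the 50 percent diversion rate comes from the proposals described above; the premium figures, function name, and the decision to ignore investment income and drawdowns are illustrative assumptions.

    # Sketch of the segregated terrorism set-aside in the legislative
    # proposals described above. Only the 50% diversion rate comes from
    # the proposals; everything else is an illustrative assumption.

    SET_ASIDE_RATE = 0.50  # share of terrorism premiums diverted each year

    def set_aside_balances(annual_terrorism_premiums):
        """Cumulative year-end balances, ignoring investment income."""
        balance, path = 0.0, []
        for premium in annual_terrorism_premiums:
            balance += SET_ASIDE_RATE * premium
            path.append(balance)
        return path

    # A hypothetical insurer collecting $10 million per year for 5 years:
    print(set_aside_balances([10e6] * 5))   # 5M, 10M, 15M, 20M, 25M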
Funds in these set-asides would be collected and used by the Secretary of the Treasury to offset the federal share of compensation provided to any affected insurers under TRIA in the event of a certified terrorism event, except that insurers could use these funds first to pay for any of their own covered terrorism losses (including losses below TRIA's program trigger level). If the Department of the Treasury (Treasury) used insurers' set-asides to offset any of the federal share of losses, Treasury would reimburse the companies for such amounts after the federal share of terrorism losses had been recouped. We identified some countries in which insurers are allowed to establish set-asides for events that have not yet occurred, although the use, participation, and structures of set-asides differed among some of these countries (see fig. 9). Generally, the set-asides could be available for multiple perils and solvency purposes, and three of the selected countries explicitly allowed insurers to recognize potential terrorism losses. In addition, four of the selected countries mandated insurer participation to establish set-asides, two allowed for voluntary participation, and one country had mandatory and voluntary set-asides. The structures of set-asides took distinct forms, such as loss reserves and specific risk-based capital requirements, and some provided a tax deduction for the increases to such set-asides. We discuss the identified international set-aside approaches for events that have not yet occurred on the following pages. Australia. Australian insurers must establish a mandatory set-aside, generally structured as a specific risk weight in capital requirements. It is intended to help insurers maintain adequate capital against the risks associated with insurance concentration in their activities, including natural catastrophe risks. Natural catastrophes can include all natural events, including earthquakes and storms. This insurance risk weight represents the net financial impact on the insurer from either a single large event or a series of smaller events within a 1-year period. The risk weight takes into account all possible perils in all regions to determine the size of loss that could occur from a single event. The insurer must retain at least this calculated target amount of capital for all these risks, including natural catastrophes, at all times. Austria. Austria's Terrorism Risk Insurance Program design includes voluntary insurer participation and a loss reserve structure with specified insurer target amounts equal to their deductibles. The set-aside in this program was established for potential terrorism losses, according to the Austrian program representative. It is unknown whether the set-aside may be used for other purposes such as solvency. Participating insurers would structure the set-aside as a loss reserve to cover their program deductible on terrorism losses; the amount contributed to the set-aside reduces an insurer's taxable income, according to a program representative. This set-aside would help to cover potential terrorism losses, and the target amounts are based on each insurer's market share of terrorism coverage, according to the program representative. Initially, the reserves were accumulated over 10 years. The time frame was increased to 15 years to allow insurers additional time to accumulate increased program deductibles. Canada.
According to insurance regulatory officials, Canadian insurance companies must include mandatory set-asides for specific risk exposures as part of minimum capital requirements and also have the option to establish a voluntary set-aside in capital for earthquake events that have not yet occurred. The mandatory capital set-asides account for potential losses from earthquake, nuclear, and mortgage risks in risk-based capital requirements. The Canadian insurance regulator provides guidance for an insurer to determine the supervisory target level of required capital for each risk (risk weights) that would be included as part of an insurer's risk-based capital requirements. The supervisory capital target for earthquake risk is calculated based on a 500-year probable maximum loss. In addition, for potential earthquake losses, insurers also may participate in establishing a voluntary set-aside in capital. Officials said that insurers do not typically use this voluntary set-aside, for reasons including the fact that it locks in capital that might otherwise be used elsewhere. Officials also told us that assets are not specifically segregated in either the mandatory or voluntary set-asides and that the voluntary set-asides provide a deduction for income tax purposes. Finland. According to insurance department representatives, nonlife insurers in Finland must establish set-asides structured as equalization reserves—a type of loss reserve—that serve as a buffer for exceptionally high claims. These mandatory equalization reserves can cover events that have not yet occurred, including losses from natural catastrophes and terrorism. The calculations of target amounts and contributions to such equalization reserves are based on European Union capital requirements. The equalization reserve also has a maximum amount, which can be up to four times the target amount. Representatives also told us that the set-aside is not subject to tax on an ongoing basis but may be subject to income taxes as the set-aside is decreased. France. According to French insurance regulatory officials, insurance companies in France may establish set-asides, generally structured as equalization reserves, to cover events that have not yet occurred, including natural disasters and terrorism. Insurers establish equalization reserves on a voluntary basis, and the French Tax Administration allows a deduction that reduces an insurer's taxable income for contributions to these set-asides if certain standards are met, such as not exceeding a target amount set by the French Tax Administration; however, these reserves may not be included on financial statements prepared under international accounting standards because the covered event has not occurred, according to officials. The target amount and time frames depend on the risk that the equalization reserve is meant to cover. For example, an equalization reserve that includes terrorism risk can be kept for 12 years. Mexico. According to a representative from the insurance regulator, insurers in Mexico are required to establish set-asides for catastrophic events that have not yet occurred and structure these set-asides as special catastrophe reserves to cover natural catastrophe risks as well as other catastrophe risks. Insurers create the catastrophe reserves with a target amount based on probable maximum loss that is calculated at the end of each fiscal year.
In addition, insurers record these set-asides as loss reserves on the balance sheet, and contributions to these set-asides are tax deductible. Once a catastrophic event occurs, insurers must first exhaust their reinsurance options before using their own catastrophic set-asides. In this appendix, we discuss additional details about our methodology and results regarding how recoupment under the Terrorism Risk Insurance Act (TRIA) and alternative funding options could affect market participants. As noted previously, while our analyses of potential market effects of TRIA recoupment and alternative funding options rely on assumptions regarding price increases under each option, it is also possible that there could be no change in price, policyholder participation, or insurers' volume of premium. We also discuss information we obtained from our interviews with industry participants, such as insurers and state regulators. We estimated the potential upper ranges of effects of recoupment on TRIA-eligible policyholder price and participation, and on insurers' direct earned premium after a terrorism event, under scenarios that result in very large mandatory and discretionary recoupment amounts. Specifically, we estimated the effects of (1) a very large mandatory recoupment amount, (2) the shortest time frame for collecting the mandatory recoupment amount (which would result in a very large annual surcharge), and (3) a very large discretionary recoupment amount (to show a very long collection time frame). See table 11 for a summary of the results. To illustrate the potential effects of a very large mandatory recoupment amount, we used an event size of $40 billion occurring in 2019 (the year with the maximum industry aggregate retention) that affected a small set of insurers—insurers with aggregate prior-year direct earned premium equal to that of the top four TRIA-eligible insurers. Using SNL Financial's 2014 data, the top four insurers of TRIA-eligible lines carried 22.5 percent of the prior-year TRIA-eligible direct earned premium. We estimated the TRIA-eligible insurance line direct earned premium in 2018 to be $218 billion. To maximize potential market effects from a very large mandatory recoupment amount, we used the same event size and subset of affected insurers but assumed the event occurred in 2017, when the time frame for mandatory recoupment is the shortest (1 year and 9 months when collection begins the January following the event). We estimated the TRIA-eligible insurance line direct earned premium in 2016 to be $209 billion. To illustrate the potential effects of a very large discretionary recoupment amount, we used an event size of $100 billion in 2019 that affected insurers with aggregate prior-year direct earned premium equal to that of the top 10 TRIA-eligible insurers. Using SNL Financial's 2014 data, the top 10 insurers of TRIA-eligible lines carried 39.8 percent of the prior-year TRIA-eligible direct earned premium. We discuss our results from each of the two analysis methods used (the percentage load method and the revenue target method). These two methods rely on different assumptions about insurers' pricing strategies. In addition to the effects previously discussed, recoupment could lead to a loss of direct earned premium for insurers following a terrorism event. However, according to one insurer, recoupment may not affect the marketwide availability of terrorism risk coverage.
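To show how the percentage load method translates a recoupment amount into policyholder and insurer effects, the following minimal sketch applies the constant price elasticity of -0.63 used in this report. It deliberately omits program details such as surcharge caps, and the $26 billion recoupment amount in the usage example is an assumption chosen only to fall within the "exceeding $20 billion" range discussed below; results are illustrative, not the report's estimates.

    # Minimal sketch of the percentage load method: a recoupment surcharge
    # is passed through as a uniform price increase, and policyholder
    # demand responds with constant elasticity. Because surcharges are
    # remitted to the federal government, the insurer's own direct earned
    # premium falls by roughly the demand response. Inputs are assumptions.

    ELASTICITY = -0.63  # price elasticity of demand used in this report

    def surcharge_effects(recoupment, premium_base, years):
        """Return (annual surcharge rate, estimated change in participation)."""
        surcharge_rate = recoupment / (premium_base * years)
        participation_change = ELASTICITY * surcharge_rate
        return surcharge_rate, participation_change

    # Assumed: ~$26B recouped over 1.75 years from a $209B base (2017 event).
    rate, change = surcharge_effects(26e9, 209e9, 1.75)
    print(f"{rate:.1%} surcharge, {change:.1%} participation")  # ~7.1%, ~-4.5%

The roughly 4.5 percent decline in this illustration is in the same range as the premium-loss estimates discussed next.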
We describe some results and observations from our illustrative analyses relating to insurers' potential loss of premium revenue and findings from literature and interviews. The potential losses of TRIA-eligible direct earned premium to insurers vary with factors such as the size of the event, the year of the event, the assumed elasticity of demand, and the assumed insurer pricing methodology. The potential direct earned premium loss could be higher or lower depending on whether coverage was mandatory for the line. For example, states require workers' compensation coverage, so any increase of prices due to a requirement to collect a recoupment surcharge likely would have a small effect on the premium revenue of a workers' compensation insurer. In certain scenarios, insurers' potential loss of direct earned premium could be less when losses were recouped under discretionary recoupment compared with mandatory recoupment, because the price increase to policyholders is capped for discretionary recoupment. For losses recouped under mandatory recoupment, the time allowed to recoup determines the size of the effect. Greater loss of direct earned premium could occur if the time frame to collect the losses were short. For example, the time frame to collect is shorter for an event that occurs in 2017 (less than 2 years) compared to 2019 (less than 5 years). Using scenarios that had $40 billion in losses and resulted in high mandatory recoupment (exceeding $20 billion), we estimated that if premiums were increased by a specified percentage, the 2017 event, on average, could lead to about a 4.4 percent loss of TRIA-eligible direct earned premium, compared to about 1.9 percent for the 2019 event. Alternatively, if insurers increased prices to collect target amounts, direct earned premiums on average could decrease by 1.8 percent for the 2017 event, compared to about 0.4 percent for the 2019 event. Federally imposed premium increases in the form of recoupment surcharges might limit the ability of state insurance regulators to restrain potential price increases. State regulators told us that when they review rate increases from insurers, they consider whether the increases are excessive, inadequate, or unfairly discriminatory to policyholders. Two state regulators stated that to the extent the federal government mandated an increase in policyholder premiums through recoupment surcharges, they likely would not have a reason to deny such premium increases. However, regulators might be able to influence the size of any increase that insurers might submit above the federal surcharge. In addition to the potential effects of mandatory recoupment, we also estimated potential policyholder price increases and participation decreases resulting from discretionary recoupment. TRIA-eligible insurance prices could increase by a maximum of 3 percent, and we estimated that participation on average could decrease by about 2 percent. For terrorism events that result in very large discretionary recoupment amounts (more than $60 billion), we estimated the collection period to be as much as 28 years. We estimated the potential effects of a premium-like federal charge for terrorism risk insurance under TRIA on policyholder price and participation, and on insurers' direct earned premium, using (1) international reinsurance rates and (2) frequency assumptions.
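Before the details that follow, here is a minimal sketch of the frequency-assumption approach: spread the maximum federal share of losses (about $65 billion, the approximate maximum federal share in this report's largest-event scenario) over an assumed event return period, then express the result against a policyholder premium base. Treating the charge as a simple expected annual loss with no risk load is this sketch's simplifying assumption, and the premium bases are those described below.

    # Sketch of a frequency-based federal charge: amortize the maximum
    # federal share of losses over an assumed return period, then express
    # it as a price increase on a chosen policyholder base. The ~$65B
    # federal share and the premium bases appear in this report; the
    # expected-loss treatment (no risk load) is a simplification.

    MAX_FEDERAL_SHARE = 65e9

    def charge_as_price_increase(return_period_years, premium_base):
        annual_charge = MAX_FEDERAL_SHARE / return_period_years
        return annual_charge / premium_base

    # Premium bases described in this appendix (2015 estimates):
    for name, base in [("all P/C", 572e9), ("TRIA-eligible", 205e9)]:
        print(name, f"{charge_as_price_increase(50, base):.2%}")  # 1-in-50-year event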
We estimated the annual federal charge in 2016 as the cost of reinsurance for the maximum federal share of losses and analyzed the effect if the charge was imposed on three different policyholder bases (all property/casualty, with estimated prior-year direct earned premium of $572 billion in 2015; TRIA-eligible, with estimated prior-year direct earned premiums of $205 billion in 2015; and TRIA, with direct earned premiums estimated to be 5 percent of TRIA-eligible direct earned premium, or $10 billion in 2015). We estimated annual federal charges using the maximum government share of losses from assumed events that occurred at frequencies of every 20, 50, and 100 years and calculated the increase in premium for the three different policyholder bases described above. Table 12 shows a summary of potential market effects of the federal charge by policyholder group charged. Insurers' overall volume of premium could be negatively affected under recoupment and a federal charge for terrorism risk insurance. Additionally, potential insurer loss of direct earned premium resulting from a federal charge for terrorism risk insurance would depend on several factors, including the extent to which insurers pass the cost to policyholders and insurer characteristics. If insurers were to spread the cost among policyholders from all TRIA-eligible lines of business, our modeled results show potential losses of direct earned premium of 0.5 percent on average if premiums were increased by a specified percentage, or less than 0.1 percent on average if insurers increased prices to collect a target amount. If spread among policyholders from all property/casualty lines, the losses of direct earned premium could be 0.2 percent on average if premiums were increased by a specified percentage, or less than 0.1 percent on average if insurers increased prices to collect a target amount. A federal charge for terrorism risk insurance could affect insurers' loss of business unevenly. One stakeholder said that the potential loss of business due to a federal charge could vary by insurer size. In particular, small insurers—which may not collect explicit premiums for terrorism—might be affected more than larger ones. An insurer and a broker said that small insurers would be less able to absorb the cost from a federal charge and, consequently, more likely to pass the charge to their policyholders. This could increase small insurers' loss of business in two ways: retention of existing clients would decline, and attracting new clients would become increasingly difficult, especially if larger insurers did not pass the charge to their clients and had lower prices. However, one large insurer said it would not be able to absorb a federal charge and would need to pass this cost to its policyholders. According to insurers, they would incur administration costs associated with a federal charge for terrorism risk insurance. Administration costs would be incurred if insurers needed to collect premiums from policyholders on behalf of the government. Implementation of a federal charge for terrorism risk insurance could affect reinsurers differently based on policymakers' design decisions, according to stakeholders. For example, a charge could be designed as voluntary or mandatory. With a voluntary charge, the federal role matches that of a reinsurer, and if Congress allowed private reinsurance to compete with the government under this option, reinsurers could see an increase in market opportunity.
If Congress mandated that insurers purchase reinsurance from the government (disallowing competition), reinsurers might see their business decline because insurers would have less funding available to purchase private reinsurance. As with recoupment surcharges, state regulators' ability to restrain potential price increases might be limited under the federal charge option. For example, one stakeholder said that if a federal charge substantially increased prices, states might stop requiring terrorism coverage if it harmed the workers' compensation market. One stakeholder stated that the portion of rate increases that insurers imposed to make up for premium revenue losses likely would receive more scrutiny from state regulators. We obtained information on the potential effects of insurer terrorism set-aside approaches on different industry stakeholders. Approaches that did not involve segregated assets likely would result in no or minimal impact on price, participation, or insurer direct earned premium. Approaches that involved segregated assets (as in our interpretation of the legislative proposals) could have some impact on price, participation, and insurer direct earned premium because they might require insurers to raise additional capital. However, we did not determine the size of any increase. To the extent that prices increased, insurers' overall direct earned premium could be negatively affected under the terrorism set-aside approaches. Stakeholders said that if a segregated set-aside were required, insurers might need to raise more capital to increase capacity for their other lines of business and would need to earn an acceptable return on capital for shareholders. Stakeholders reported that capital could be raised in the markets, which would increase the insurer's cost of doing business, or it could be raised by increasing premiums on policyholders. Insurers may also incur administrative costs associated with a terrorism set-aside. Specifically, three insurers indicated increased business costs associated with a terrorism set-aside, and one insurer stated that any effect on its business costs would depend on the tax implications of any capital considerations. Reinsurers could experience decreases in market opportunities under the terrorism set-aside option, but reinsurance availability likely would not be affected. As discussed above, most insurers fund reinsurance purchases from premiums. To the extent that insurers would need to build terrorism set-asides, they might have less capital available to purchase reinsurance unless one purpose of the set-aside was to build funds to purchase reinsurance. For this reason, reinsurers also could experience a loss of business as a result of a requirement for a terrorism set-aside. Furthermore, if the government were to give the same tax advantage to the set-asides as to post-event loss reserves, insurers could have less need for reinsurance. Two industry participants stated that the potential supply of reinsurance for terrorism likely would be unaffected by a terrorism set-aside requirement. For example, one industry participant stated that insurer actions would not strain reinsurance supply or affect reinsurance pricing. Unlike with a federally mandated premium increase, state regulators might have purview over any insurer increases to cover the cost of a terrorism set-aside because increases likely would be determined by insurers rather than federally mandated.
In this case, insurers likely would need to follow the normal rate increase protocol. If premium increases to cover the cost of a terrorism set-aside were minimal, regulators and policyholders might not notice the increase. However, regulators may have additional oversight responsibilities under set-aside approaches. For example, two state regulators pointed out that implementing a set-aside for potential terrorism losses with specifically segregated assets could affect state oversight related to laws and practices involving receivership and liquidation should an insurer become insolvent. Stakeholders had differing opinions on whether insurers would increase premiums as a result of terrorism set-asides. For example, three insurers stated that such a set-aside would require insurers to increase premiums, while another stakeholder stated that any effect on premiums might be negligible. One stakeholder estimated that companies with exposure in high-risk areas such as New York City would need a large set-aside, while others might not; however, these estimates assume that the set-aside would be only for insurers' share of the losses. The approach in which insurers would set aside funds that could be used for both insurer and federal losses likely would require a larger target amount. One insurer said that insurers could spread costs across all lines of business. Similar to the federal charge, that type of cost sharing for the set-aside would have a negligible impact on policyholder premiums. In this appendix, we discuss the extent to which the role of the federal government under the Terrorism Risk Insurance Act (TRIA) creates an economic subsidy for market participants and the potential size of such a subsidy. We determined that TRIA could produce a federal government economic subsidy to the extent the government was not expected to fully recoup its losses. Estimating the size of an economic subsidy depends on many factors and requires several assumptions that we discuss later in this appendix. Under TRIA, the federal government initially shares responsibility for some of the insured losses with private insurers in the event of a certified terrorism event and may recoup all or some of its losses through policyholder surcharges. Unlike private insurers or reinsurers, the government does not charge premiums for its potential share of terrorism losses but may recoup some or all of its losses post-event. Specifically, Treasury reimburses an insurer for a certain percentage of its insured losses above its deductible. If insurers' aggregate losses are equal to or below the industry aggregate retention amount, TRIA requires mandatory recoupment of the federal losses up to the retention amount reached by the losses through post-event premium surcharges on all policyholders with TRIA-eligible insurance, including those with no insured losses and those without terrorism risk coverage. In addition, TRIA allows for discretionary recoupment when losses exceed the industry aggregate retention amount. As structured, the program potentially exposes the federal government to a significant amount of financial risk, and in some scenarios it does not require recoupment of all losses and expenses. For the purpose of this report, an economic subsidy can involve a payment by the government that reduces the buyer's price below the seller's price.
Whether or not a payment is involved, an economic subsidy can also take the form of the full or partial absence of a charge by the government for an action that benefits private market participants. Either case implies a payment or benefit from the government to private market participants for which the government receives no commensurate benefit. Certain types of government intervention could produce an economic subsidy. Table 13 lists some government interventions and their applicability to TRIA. While TRIA is designed to recoup at least a portion of the federal share of losses, in some scenarios recoupment could adversely affect the market. For example, in some scenarios the mandatory recoupment time frames could lead to large increases in policyholders' premiums, which could affect the political will to carry out mandatory recoupment. In addition, the discretionary portion of recoupment could require a protracted collection period of premium surcharges, which could be a factor in Treasury's determination of whether and to what extent to pursue discretionary recoupment. As a result, market participants and others may not expect the government to fully implement TRIA's recoupment provisions. As such, in this report we assess the presence and potential size of an economic subsidy on the basis of whether and to what extent the government would be expected to recoup the federal share of losses, regardless of whether a terrorism event occurs. To the extent the losses to the federal government were not expected to be fully recouped, the federal government would be providing an economic subsidy because insurers and policyholders would receive a benefit from the federal government in the absence of a charge. In certain ways, the economic subsidy could benefit private insurers and policyholders. However, if the government was not expected to fully recoup its losses, the primary recipients of the subsidy would be the policyholders because they would have received insurance coverage without paying either pre-event premiums for the federal share of losses or the actual full costs post-event. Using various assumptions and taking into account several limitations, we analyzed the potential size of an economic subsidy under different recoupment scenarios. We estimated the size of a subsidy by determining the value of insurance coverage without knowledge of whether, and to what extent, claims would occur. Specifically, using the cost of private reinsurance in other national programs, we estimated annual forgone federal terrorism risk insurance premiums in scenarios in which the government would be expected to (1) recoup only the mandatory amount of losses, and (2) recoup no losses. We estimated the annual cost of the economic subsidy could be as high as $1.6 billion if the government were not expected to recoup any of its losses. However, if the government were expected to recoup all of its losses as described in TRIA, the economic subsidy would be $0 (no subsidy). Certain factors, such as data limitations, constrained our estimation method and affected our ability to accurately estimate the potential size of any subsidy. We used terrorism risk reinsurance rates in other national programs to estimate forgone premiums in the TRIA program because we lacked the data to determine premiums using any other estimation method.
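As a back-of-the-envelope check on the forgone-premium estimate, the following sketch applies the inputs described in this report's methodology: a 2.5 percent rate-on-line, a 5 percent collection fee, and reinsurance coverage for the maximum federal share of losses (about $65 billion, as discussed below). Treating partial recoupment as a simple proportional reduction of the forgone premium is this sketch's simplification, and the function name is illustrative.

    # Forgone-premium proxy for the annual economic subsidy under TRIA.
    # The 2.5% rate-on-line, 5% collection fee, and ~$65B coverage amount
    # come from this report; the proportional treatment of recoupment is
    # an illustrative simplification.

    RATE_ON_LINE = 0.025   # premium / loss recoverable in other national programs
    COLLECTION_FEE = 0.05  # administrative load added in the report's method

    def forgone_premium(coverage, expected_recoup_share=0.0):
        premium = RATE_ON_LINE * coverage * (1 + COLLECTION_FEE)
        return premium * (1 - expected_recoup_share)

    print(forgone_premium(65e9))       # ~$1.7B; the report's estimate, with
                                       # its exact inputs, is up to ~$1.6B
    print(forgone_premium(65e9, 1.0))  # $0 if full recoupment is expected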
Currently, premiums are not collected from policyholders for the federal share of losses, and the government would need to consider many factors in potentially setting a federal charge for terrorism risk insurance. Using reinsurance rates charged by private reinsurers to other national terrorism risk insurance programs for our estimate of forgone premiums could provide information on how the private market values terrorism risk insurance and the cost of capital to provide that insurance. We were told by the broker of many reinsurance deals in other national programs that premium rates were fairly stable across countries and generally ranged from 2 percent to 3 percent of the dollar amount of coverage. For example, the Australian terrorism risk insurance program purchased six layers of reinsurance in 2014, with rates ranging from 1.85 percent to 5.5 percent of coverage. Rates decreased with each additional layer of coverage. For our analysis, we used an average cost of 2.5 percent of coverage, and we assumed that the government would purchase coverage for its maximum payout. Use of other data or another estimation method could result in a different estimate of a subsidy amount. Finally, the estimation of the size of any subsidy could be affected by different assumptions about the private reinsurance market. There are limitations to our use of the cost of private reinsurance in other national programs to estimate forgone federal terrorism risk insurance premiums to the United States. First, the private reinsurance market may price coverage differently for the U.S. program or may be limited in the amount it is willing to cover. For example, the private reinsurance market may not consider the risk within the United States commensurate with risks in other countries; U.S. risk may not be sufficiently geographically diverse to qualify for the same pricing; or Treasury may not have sufficient information on the risk of the underlying portfolio for reinsurers to price the coverage. Second, the capacity of the private reinsurance market may not be sufficiently large for the amount that we assume would be purchased in this analysis. Programs in other countries purchase much smaller amounts of reinsurance. For example, the Australian program purchased just under $2 billion (about A$3 billion) of reinsurance coverage in 2014. Our analysis assumed that the United States would need to purchase more than 30 times that amount (about $65 billion in reinsurance coverage). Third, the government might not choose to purchase this much reinsurance. For example, the government might choose to self-reinsure or charge less in premiums. Fourth, our analysis assumes the entire amount would be covered by private reinsurance. However, reinsurance purchasers generally must pay a deductible, a co-share, or both, which would decrease the actual amount of reinsurance coverage purchased. In addition to the contact named above, Jill Naamane (Assistant Director); Charlene J. Lindsay (Analyst in Charge); Robert Dacey; Pamela Davidson; Karen Jarzynka-Hernandez; DuEwa Kamara; John Karikari; Alma Laris; Emei Li; Efrain Magallan; Scott McNulty; Susan Murphy; Joseph O'Neill; Laurel Plume; Angela Pun; Oliver Richard; Barbara Roesmann; Jessica Sandler; Jena Sinkfield; Shannon Smith; Andrew J. Stephens; and Frank Todisco made key contributions to this report.
After the terrorist attacks of September 11, 2001, insurers generally stopped covering terrorism risk because losses could be too high relative to the premiums they could charge. Congress enacted TRIA to share losses from a certified act of terrorism between insurers and the government, address market disruptions, and help ensure widespread availability and affordability of terrorism coverage. TRIA does not include an up-front federal charge for the government's share of potential losses. The act mandates that, when private industry's losses are below a certain amount, the federal government recoups some or all of the federal share of losses through policyholder surcharges. The Terrorism Risk Insurance Program Reauthorization Act of 2015 includes a provision for GAO to review alternative funding approaches for TRIA. Among other things, this report examines (1) how insurers manage their terrorism exposure and federal recoupment of losses, (2) how alternative funding approaches could be designed and implemented, and (3) the potential effects of these approaches as well as the current structure. To assess these funding approaches, GAO reviewed related studies, analyzed several terrorism loss scenarios for each funding approach to estimate potential effects on market participants, and interviewed industry participants. Treasury and NAIC provided technical comments on a draft of this report, which GAO incorporated as appropriate. GAO also incorporated technical comments received from selected third parties, as appropriate. Under the Terrorism Risk Insurance Act's (TRIA) current structure, insurers manage their terrorism exposure to cover their share of losses and not the federal share of losses, which may be recouped from policyholders after an event. Specifically, insurers do not assume the risk of the federal share of potential losses and, thus, do not consider the potential federal share of losses in how they manage their terrorism risk exposure and price coverage. Many insurers include a nominal charge for terrorism risk coverage, if they charge for it at all. Most insurers manage their exposure by limiting the amount of coverage they provide in certain geographic areas. Under the current structure, in some scenarios federal losses must be recouped through premium surcharges on policyholders with TRIA-eligible insurance coverage after a certified terrorism event. However, depending on the size of the terrorism event and the aggregate premiums of affected insurers, the federal government may not be required to recoup all of its losses. To date, no terrorism events have been certified under TRIA. Designing and implementing alternatives to TRIA's current funding structure, such as a federal terrorism risk insurance charge or set-aside of insurer funds, would require trade-offs among various policy goals and involve complexities. For example, Federal charge. A charge on insurers or policyholders could either (1) be a risk-based charge intended to help pay for the federal share of potential losses, replacing the current recoupment structure, or (2) be a charge, or fee, paid to the Treasury for the promise of payment of the federal share of losses with recoupment in place to cover the actual losses. A federal charge could help cover potential losses, but determining a price based on risk would be difficult. Terrorism set-asides.
An insurer set-aside to explicitly address terrorism exposure through liabilities, capital, or assets could be designed as (1) loss reserves for future terrorism losses, (2) separate or additional capital requirements for terrorism risk, or (3) separate assets that only could be used for terrorism losses. A set-aside of insurer funds could help cover insurers' potential losses but some approaches would be complex to implement due to implications related to current accounting practices and state laws. TRIA's current recoupment structure and some alternative approaches could increase prices for policyholders and have various effects on market participants and the federal government. GAO's analysis indicated that the current structure and some alternative approaches could affect the price of coverage and policyholder decisions to purchase terrorism coverage. In addition, one set-aside approach could restrict the flexibility with which insurers can use assets (generally, for a variety of risks) and thus hamper risk management. Under each option, federal fiscal exposure exists. For example, a charge to cover the federal share of losses may be insufficient to cover losses in the near term. However, the design of an alternative approach can, in part, mitigate the magnitude of these effects. For example, lengthening recoupment time frames, charging a broad group of policyholders, or allowing flexibility in applying a set-aside could help mitigate the effects.
The majority of satellite programs we have reviewed over the past 2 decades experienced problems during their acquisition that drove up costs and schedules and increased technical risks. Several programs were restructured by DOD in the face of delays and cost growth. At times, cost growth has come close to or exceeded 100 percent, causing DOD to nearly double its investment in the face of technical and other problems without realizing a better return on its investment. Along with the cost increases, many programs are experiencing significant schedule delays—as much as 6 years—postponing delivery of promised capabilities to the warfighter. Outcomes have been so disappointing in some cases that DOD has had to go back to the drawing board to consider new ways to achieve the same capability. It is in such a position today with its Space-based Infrared System (SBIRS) High program and possibly its National Polar-orbiting Operational Environmental Satellite System (NPOESS) program, both of which have been mired in expanding cost and schedule setbacks. More specifically, DOD's investment in SBIRS High, a critical missile warning system, has been pushed to over $10.5 billion from the initial $4.1 billion estimate made over 9 years earlier. This 160-percent increase in estimated costs triggered a fourth Nunn-McCurdy breach (see 10 U.S.C. 2433), requiring a review by the Secretary of Defense and a report to Congress, and resulted in the program being restructured for a third time, in late 2005. With costs and timelines spiraling out of control, DOD reduced the number of satellites it plans to procure—pushing the average per-unit procurement cost up to 224 percent above 2002 baseline costs—and is now pursuing an alternative to SBIRS High while it continues with the scaled-back program. Initial cost and schedule estimates for NPOESS—a new satellite constellation intended to replace existing weather and environmental monitoring satellites—have also proven unreliable. NPOESS is managed by a tri-agency Integrated Program Office consisting of DOD, the National Oceanic and Atmospheric Administration, and the National Aeronautics and Space Administration. In January 2006, the program reported a Nunn-McCurdy unit cost breach, at the 25-percent threshold, due to continuing technical problems, including problems with the development of key sensors. Specifically, in early 2005, DOD learned that a subcontractor could not meet cost and schedule targets due to significant technical issues with an imaging sensor known as the visible/infrared imager radiometer suite (VIIRS)—including problems with the cryoradiator, excessive vibration of sensor parts, and errors in the sensor's solar calibration. These technical problems were further complicated by subcontractor management problems. To address these issues, DOD provided additional funds for VIIRS, capped development funding for other critical technologies, and revised its schedule to keep the program moving forward. We also reported that, based on our own analysis of contractor trends, the program will most likely overrun costs by $1.4 billion. Given the challenges currently facing the program, the scheduled first launch date slipped 17 months to September 2010. Another recent example of problems is evident in the Advanced Extremely High Frequency (AEHF) program. We reported in the past that this program experienced cost increases due to requirements changes, inadequate contract strategies, and funding shortfalls.
We also reported that DOD had to cut back its planned purchase of satellites from five to three as a result. The outcome has been an 84-percent unit cost increase—each AEHF satellite is now estimated to cost about $2.1 billion. More recently, we reported that scheduling delays and the late delivery of cryptographic equipment have culminated in nearly a 3-year delay in the launch of the first satellite and that the program still faces schedule risk due to the continued concurrent development of two critical path items managed and developed outside the program. Acquisition problems have not been limited to the development of home-grown systems. DOD's purchase of an ostensibly commercial communications satellite, the Wideband Gapfiller Satellite (WGS), is experiencing about 70-percent cost growth, due in part to the problems a subcontractor was experiencing in assembling the satellites. Improperly installed fasteners on the satellites' subcomponents have resulted in rework on the first satellite and extensive inspections of all three satellites currently being fabricated. The cost for WGS has increased about $746.3 million, but DOD estimates that about $276.2 million of this amount is largely due to cost growth associated with a production gap between satellites three and four. The launch of the first satellite has now been delayed for over 3 years and is currently scheduled for June 2007. The delay will increase program costs and add at least 22 months to the time it takes to obtain an initial operational capability from the system. Figure 1 shows that, overall for fiscal years 2006 through 2011, estimated costs for DOD's major space acquisition programs have increased a total of about $12.2 billion—or nearly 44 percent in total—above initial estimates. Figure 2 breaks out this trend among key major space acquisitions. As both figures illustrate, cost increases have had a dramatic impact on DOD's overall space portfolio. To cover the added costs of poorly performing programs, DOD has shifted scarce resources away from other programs, creating a cascade of cost and schedule inefficiencies. For example, to fund other space programs, DOD has had to push off the start of a new version of the Global Positioning System (GPS), which has forced costs to increase for the current version under development. Meanwhile, DOD is also contending with cost increases within its Evolved Expendable Launch Vehicle (EELV) program. These are largely due to misjudgments about the extent to which DOD could rely on commercial demand to leverage its investment. Nevertheless, the resulting $12.6 billion increase has added pressure to make tradeoffs. At the same time that DOD is juggling resources on existing programs, it is undertaking two new efforts—the Transformational Satellite Communications System (TSAT) program and the Space Radar program—which are expected to be among the most ambitious, expensive, and complex space systems ever. Moreover, DOD is relying heavily on their planned capabilities to fundamentally transform how military operations are conducted. In fact, many other weapon systems will be interfaced with these satellites and highly dependent on them for their own success. Together, these systems have been preliminarily estimated to cost about $40 billion. While DOD is planning to undertake the new systems, broader analyses of the nation's fiscal future indicate that spending for weapon systems may need to be reduced, rather than increased, to address growing deficits.
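The cost-growth percentages cited above follow directly from the dollar figures; the following minimal sketch checks the arithmetic using the report's rounded estimates, so results are approximate by design.

    # Percent cost growth relative to an initial estimate, checked against
    # the rounded figures cited above. The report's dollar amounts are
    # rounded ("over $10.5 billion"), so the computed value sits slightly
    # below the cited 160 percent.

    def pct_growth(current, initial):
        return 100 * (current - initial) / initial

    print(pct_growth(10.5e9, 4.1e9))         # SBIRS High: ~156%, cited as ~160%

    # A unit-cost breach is reported when growth crosses a statutory
    # threshold (the NPOESS breach above was reported at the 25-percent
    # threshold); SBIRS High is far past it.
    print(pct_growth(10.5e9, 4.1e9) >= 25)   # True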
Our reviews have identified a number of causes behind the problems just described, but several consistently stand out. First, on a broad scale, DOD starts more weapon programs than it can afford, creating a competition for funding that encourages low cost estimates, optimistic schedules, overpromising, the suppression of bad news, and, for space programs, forsaking the opportunity to identify and assess potentially better alternatives. Programs focus on advocacy at the expense of realism and sound management. Invariably, with too many programs in its portfolio, DOD and even Congress are forced to continually shift funds to and from programs—often undermining well-performing programs to pay for poorly performing ones. Second, DOD starts its space programs too early, that is, before it has assurance that the capabilities it is pursuing can be achieved within available resources and time constraints. This tendency is caused largely by the funding process, since acquisition programs attract more dollars than efforts concentrating solely on proving out technologies. Nevertheless, when DOD chooses to extend technology invention into acquisition, programs experience technical problems that have reverberating effects and require large amounts of time and money to fix. When programs have a large number of interdependencies, even minor “glitches” can cause disruptions. A companion problem for all weapon systems is that DOD allows new requirements to be added well into the acquisition phase. Many times, these significantly stretch the technology challenges (and consequently, the budgets) the program is already facing. This was particularly evident in SBIRS High up until 2004. While experience would caution DOD not to pile on new requirements, customers often demand them, fearing there may not be another chance to get new capabilities since programs can take a decade or longer to complete. Third, space programs have historically attempted to satisfy all requirements in a single step, regardless of the design challenge or the maturity of the technologies needed to achieve the full capability. Increasingly, DOD has preferred to build fewer but heavier, larger, and more complex “Battlestar Galactica”-like satellites that perform a multitude of missions, rather than larger constellations of smaller, less complex satellites that gradually increase in sophistication. This has stretched technology challenges beyond the capability of many potential contractors and vastly increased the complexities related to software—a problem that affected SBIRS High and AEHF, for example. Our reviews have identified additional factors that contribute to space acquisition problems, though these less directly affect the cost and schedule problems we have reported on. For example, consolidation within the defense supplier base for space programs has made it more difficult for DOD to incorporate competition into acquisition strategies. In 1985, there were at least ten fully competent prime contractors competing for the large programs and a number more that could compete for subcontracts. Arguably today, there are only two contractors that could handle DOD’s most complex space programs. DOD has exacerbated this problem by not seeking opportunities to restructure its acquisitions to maximize competition, particularly for the small suppliers who have a high potential to introduce novel solutions and innovations into space acquisitions.
In the 1990s, DOD also structured contracts in a way that reduced oversight and shifted key decision-making responsibility onto contractors. DOD later found that this approach—known as Total System Performance Responsibility, or TSPR—magnified problems related to requirements creep and poor contractor performance. Another factor contributing to problems is the diverse array of officials and organizations involved with a space program, which has made it even more difficult to pare back and control requirements. The Space Radar system, for example, is expected to play a major role in transforming military and intelligence-collecting operations as well as other critical governmental functions, such as homeland security. As a result, its constituency includes combatant commanders, all of the military services, intelligence agencies, and the Department of Homeland Security. The Global Positioning System not only serves the military; it also provides critical services to civilian users, the transportation sector, and the information technology sector, among many other industries. In addition, short tenures for top leadership and program managers within the Air Force and the Office of the Secretary of Defense have lessened the sense of accountability for acquisition problems and further encouraged a short-term view of success, according to officials we have interviewed. Though still in a pre-acquisition phase, TSAT and Space Radar have already had one program director each. The SBIRS High program, meanwhile, has seen at least three program directors. At the highest levels of leadership, for many years DOD did not vest responsibility for its space activities in any one individual—leaving no one in charge of establishing an integrated vision for space or of mediating between competing demands. In 1994, it established such a position within the Office of the Secretary of Defense but dissolved the position in 1998. In 2002, DOD assigned space leadership to the Under Secretary of the Air Force, combined that role with the directorship of the National Reconnaissance Office in order to better integrate DOD and intelligence space activities, and gave the Under Secretary milestone decision authority for major space systems acquisitions. After the first Under Secretary of the Air Force in charge of space retired in 2005, DOD split these responsibilities and temporarily reclaimed milestone decision authority for all major space programs. Changes in leadership and reorganizations are common across DOD, but again, they make it more difficult to enforce accountability and maintain the right levels of support for acquisition programs. Lastly, capacity shortfalls have constrained DOD’s ability to optimize and oversee its space programs. These include shortages in the pipeline of scientists and engineers, shortages of experts in systems and software engineering, and uneven levels of experience among program managers. Contractors are also facing workforce pressures similar to those experienced by the government, that is, not enough technical expertise to develop complex space systems. In addition, we have reported that there is a lack of low-cost launch opportunities, which are needed to increase the level of experimental testing in space. DOD has recently expressed a commitment to improve its approach to space acquisitions and embrace many of the recommendations we have made in the past.
Our previous recommendations have focused on providing a sound foundation for program execution. Namely, we have recommended that DOD separate technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to make decisions about moving to subsequent phases. In addition, we have called on DOD to develop an overall investment strategy for space in order to help DOD rebalance its investments in space acquisition programs as it continues to contend with cost increases from its programs. These recommendations are based on a body of work we have undertaken over the last several years that examines weapon acquisition issues from a perspective that draws on lessons learned from best product development practices. Leading commercial firms expect their program managers to deliver high-quality products on time and within budget; doing otherwise could result in the customer walking away. Thus, those firms have created an environment and adopted practices that put their program managers in a good position to meet these expectations. Collectively, these practices comprise a process that is anchored in knowledge. It is a process in which technology development and product development are treated differently and managed separately. The process of developing technology culminates in discovery—the gathering of knowledge—and must, by its nature, allow room for unexpected results and delays. Leading firms do not ask their program or product managers to develop technology. Rather, they give responsibility for maturing technologies to science and technology organizations. The process of developing a product culminates in delivery and, therefore, gives great weight to design and production. The firms demand—and receive—specific knowledge about a new product before production begins. A program does not go forward unless the strong business case on which it was originally justified continues to hold true. While the practices we have recommended represent commonly accepted sound business practices, until recently they had not been accepted by DOD’s space acquisition community for large space acquisitions. By contrast, these practices were implemented for the development of a small experimental satellite intended for direct use by a combatant command (known as TacSat 1). We recently reported that by including only mature technologies and limiting new requirements, DOD was able to develop the satellite for less than $10 million (including surplus hardware valued at $5 million) and within 12 months. In disagreeing with our recommendations, DOD asserted its desire to push programs to advance technologies as far as possible. Other reasons that space officials have given for extending technology development into acquisition include the greater ability to secure funding for costly technology development within an acquisition program versus a science and technology program, a belief among the acquisition community that the labs in charge of developing space technologies do not understand their needs, and communication gaps between the science and technology (S&T) and acquisition communities.
Moreover, while DOD officials told us they were pursuing evolutionary development for space systems, we found that they were beginning programs by challenging program managers to achieve significant leaps in capability, with the intention of abandoning those efforts later in the development cycle should too many problems be encountered. This is not a true evolutionary approach: it leaves DOD facing increased technical challenges, and thus increased risks, at the beginning of a program, and it raises expectations among stakeholders who may be unwilling to accept less capability later on. Two of the systems we were most concerned about in this respect were TSAT and Space Radar—together, they were already expected to cost about $40 billion. DOD was planning to start these acquisitions even though many of their critical technologies were still immature, and it was pursuing a highly ambitious technology path. Given that these systems were among the most complex programs ever undertaken for space, that they were being counted on to enable wider DOD transformation efforts, and that DOD was already contending with highly problematic space efforts, we believed DOD could not afford to pursue such risky approaches for TSAT and Space Radar. Since we last testified before this subcommittee in July 2005, DOD has appointed a new Under Secretary of the Air Force to be in charge of space acquisitions, who, in turn, has embraced adopting best practices, or, as he terms it, “going back to the basics.” Specifically, the Under Secretary has expressed a desire to:
- delegate the maturation of technologies—to the point of being tested in a relevant environment or operational environment, if appropriate—to the S&T community;
- adopt an evolutionary development approach in which new systems would be developed in a series of increments, or blocks, each with a discrete beginning and end point, with any desired technology that is not expected to be matured in time to start a new block assigned to a later block;
- fund S&T appropriately so that significant technology breakthroughs can be continually pursued; and
- improve collaboration on requirements by consulting with warfighters on the content of each new block.
In addition, the Under Secretary is focused on estimating costs and funding new acquisitions to an 80-percent confidence level, strengthening systems engineering, and strengthening the acquisition workforce. Aspects of this approach have recently been incorporated into DOD’s TSAT program. For the first block, satellites 1 and 2, the Air Force has reduced its expectations for the satellites’ level of sophistication in order to increase confidence in the schedule for launching the first satellite in 2014. Higher-performing versions of the technologies that support laser communications and an Internet-like processor router will be pushed off to a subsequent block, along with multi-access laser communications—a more robust laser capability able to transmit vast amounts of data within seconds. Program officials have also stated that the TSAT program will not enter product development, that is, formal acquisition, until its critical technologies are proven. These are good steps when looking at TSAT as an individual program. It is important, however, that the Air Force ensure that warfighters accept the lower capability and that the current approach makes sense versus the alternative of buying more AEHF or WGS satellites.
DOD’s desire to adopt best practices for space acquisition is a positive and necessary first step toward reform. However, these changes will not be easy to undertake. They require significant shifts in thinking about how space systems should be developed, changes in incentives and perceptions, and further policy and process changes. Moreover, they will need to be made within a larger acquisition environment that still encourages a competition for funding and consequently pressures programs to view success as the ability to secure the next funding installment rather than as the end goal of delivering capabilities when and as promised. In addition, DOD’s space leaders will be challenged to sustain a commitment to adopting best practices, given the myriad missions and programs that compete for the attention of DOD’s leadership and resources, frequent turnover in leadership positions, and potential resistance from the many diverse organizations involved with space acquisitions. There are steps, however, that DOD can take to substantially mitigate these challenges. First, DOD can guide its decisions to start space acquisition programs with an overall investment strategy. More specifically, DOD could identify overall capabilities and how to achieve them, that is, what role space will play versus other air-, sea-, and land-based assets; identify priorities for funding space acquisitions; and implement mechanisms that would enforce the strategy and measure progress. Optimally, DOD would do this for its entire weapon system investment portfolio so that space systems expected to play a critical role in transformation could be prioritized along with other legacy and transformational systems, and so that DOD could reduce the pressures associated with competition for funding. But in the absence of a departmentwide strategy, DOD could reexamine and prioritize its space portfolio with an eye toward balancing investments between legacy programs and new programs as well as between S&T programs and acquisition programs. In addition, DOD could prioritize S&T investments. This is particularly important since DOD is undertaking a range of initiatives—collectively known as operationally responsive space (ORS)—designed to facilitate evolutionary development and more testing of technologies before acquisition, and ultimately to enable DOD to deliver space-based capabilities to the warfighter much faster. While ORS investments hold great potential, other S&T projects compete for the same resources, including those focused on discovering and developing technologies and materials that could greatly enhance future capabilities, reduce costs, and maintain U.S. superiority in space. Second, DOD could revise the policies and processes supporting space acquisition as needed to adopt the best practices being embraced. For example, DOD’s space acquisition policy could be further revised to ensure that a true evolutionary approach is being pursued and that blocks, or increments, will include only technologies that have been sufficiently matured. DOD could also implement processes and policies, as needed, that stabilize requirements, particularly for acquisitions that are shared with other stakeholders, such as the intelligence community, and that ensure warfighters buy into the capabilities being pursued for each new system increment. In recent years, DOD has instituted processes for some individual systems, such as SBIRS High, that could serve as a model.
Third, DOD could continue to address other capacity shortfalls. These include shortages of staff with science and engineering backgrounds, shortages of experience within the program manager workforce, limited opportunities and funding for testing space technologies, and the lack of low-cost launch vehicles. At the same time, DOD could continue to work toward strengthening relationships between the S&T and acquisition communities and coordination within the S&T community. The Under Secretary is uniquely positioned to do this, given his previous position as DOD’s Director of Defense Research and Engineering and his participation in previous efforts to develop a strategy for space S&T. Fourth, we have recommended that DOD take steps departmentwide to hold people and programs accountable when best practices are not pursued. This will require DOD to empower program managers to make decisions related to funding, staffing, and moving into subsequent phases, and to match program manager tenure with the development or delivery of a product. It may also require DOD to tailor career paths and performance management systems to provide incentives for longer tenures. Until these actions have been taken, space leaders could take steps now to ensure that space program managers have the right levels of experience to execute large programs and sufficient authority so that they can be held accountable. Likewise, DOD’s space leaders can hold contractors accountable by structuring contracts so that incentives actually motivate contractors to achieve desired acquisition outcomes and by withholding award fees when those goals are not met. In closing, we are encouraged by the acquisition approach being embraced by DOD’s space leadership. It can enable DOD to begin to match resources to requirements before starting new programs and, therefore, better position programs for success. Successful implementation, however, will hinge on the ability of DOD’s current space leaders to instill and sustain commitment to adopting best practices over the short and long term. Best practice approaches should be reflected in policy and manifested in decisions on individual programs, or reform will be blunted. They should also be accompanied by an investment strategy for space, and ultimately for DOD, to separate wants from needs and to alleviate the long-standing pressures associated with the competition within DOD to win funding. By embracing a model that incorporates all of these elements, DOD can achieve better outcomes for its space programs. In preparing this testimony, we relied on previously issued GAO reports on assessments of individual space programs, incentives and pressures that drive space system acquisition problems, common problems affecting space system acquisitions, space science and technology strategy, and DOD’s space acquisition policy, as well as our reports on best practices for weapon systems development. We also analyzed DOD’s Selected Acquisition Reports to assess cost increases and investment trends. In addition, we met with the Air Force Under Secretary to discuss his “back to basics” approach. We conducted our review between March 6 and April 3, 2006, in accordance with generally accepted government auditing standards. For further information, please contact Cristina Chaplain at 202-512-4841 or [email protected]. Individuals making contributions to this testimony include Art Gallegos, Robert Ackley, Maricela Cherveny, Sharron Candon, Jean Harker, Leslie Kaas Pollock, and Karen Sloan.
Table 1 highlights recent findings from our reports on cost and schedule overruns for DOD’s current and planned space programs. The table also notes that many programs are still addressing past mistakes in acquisition approaches and contractor oversight as well as technical, design, and manufacturing problems.
DOD's space system acquisitions have experienced problems over the past several decades that have driven up costs by hundreds of millions, even billions, of dollars; stretched schedules by years; and increased performance risks. GAO was asked to testify on its findings on space acquisition problems and the steps needed to improve outcomes. DOD's space acquisition programs continue to face substantial cost and schedule overruns. At times, cost growth has come close to or exceeded 100 percent, causing DOD to nearly double its investment in the face of technical and other problems without realizing a better return on its investment. Along with the cost increases, many programs are experiencing significant schedule delays--as much as 6 years--postponing delivery of promised capabilities to the warfighter. Outcomes have been so disappointing in some cases that DOD has had to go back to the drawing board to consider new ways to achieve the same capability. These problems are having a dramatic effect on DOD's space investment portfolio. Because of cost growth, about $12 billion less will be available over the next 5 years for new systems and for the discovery of promising new technologies. And while DOD is pushing to start new, highly ambitious programs such as the Transformational Satellite and Space Radar, broader analyses of the nation's fiscal future indicate that spending for weapon systems may need to be reduced, rather than increased, to address growing deficits. GAO has identified a number of causes behind these problems, but several stand out. First, DOD starts more space and weapons programs than it can afford, which pressures programs to underestimate costs and overpromise capabilities. Second, DOD starts its space programs too early, that is, before it is sure the capabilities it is pursuing can be achieved within available resources and time constraints. DOD has also allowed new requirements to be added well into the acquisition phase. DOD has appointed new leadership to oversee space acquisitions, and these leaders have committed to adopting practices GAO has recommended for improving outcomes. These include delegating the maturation of technologies to the S&T community; adopting an evolutionary development approach in which new systems would be developed in a series of discrete increments, or blocks; funding S&T appropriately so that significant technology breakthroughs can be continually pursued; and improving collaboration on requirements. Adopting best practices for space acquisitions will not be an easy undertaking. DOD as a whole still operates in an environment that encourages competition for funding and, thus, behaviors that have been detrimental to meeting cost and schedule goals. Moreover, the changes being proposed will require significant shifts in thinking about how space systems should be developed and changes in incentives. By establishing investment priorities, embedding best practices in policy, and addressing capacity shortfalls, DOD can mitigate these challenges and better position programs for success.
Fragmentation refers to circumstances in which more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national interest. Overlap involves programs that have similar goals, devise similar strategies and activities to achieve those goals, or target similar users. Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same assistance to the same beneficiaries. In some instances, it may be appropriate for multiple agencies or entities to be involved in the same programmatic or policy area due to the nature or magnitude of the federal effort. However, we have previously identified instances where multiple government programs or activities have led to inefficiencies, and we determined that greater efficiencies or effectiveness might be achievable. In September 2000, we reported that there was no commonly accepted definition of economic development. Absent a common definition, we subsequently developed a list of nine activities most often associated with economic development. In general, we focused on economic activities that directly affect the overall development of an area, such as job creation, rather than on activities that improve individuals’ quality of life, such as housing and education. These activities include supporting business incubators and accelerators, constructing and renovating commercial buildings, constructing and renovating industrial parks and buildings, strategic planning and research, marketing and access to new markets for products and industries, supporting telecommunications and broadband infrastructure, supporting physical infrastructure, and supporting tourism. Appendix II provides illustrative examples of each of these economic activities. Appendix III provides more information on the 52 economic development programs we focused on for this report. Appendix IV includes a list of additional programs administered by the federal agencies we identified that can fund at least one of these activities. In January 2011, Congress updated the Government Performance and Results Act of 1993 (GPRA) with the GPRA Modernization Act of 2010 (GPRAMA). GPRAMA establishes a new framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. Effective implementation of GPRAMA could play an important role in clarifying desired outcomes, addressing program performance spanning multiple organizations, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. Among other things, GPRAMA requires the Office of Management and Budget (OMB) to coordinate with agencies to establish outcome-oriented federal government priority goals covering a limited number of policy areas, as well as goals to improve management across the federal government. It also requires OMB—in conjunction with the agencies—to develop a federal government performance plan that outlines how they will make progress toward achieving these goals, including the federal government priority goals. The President’s 2013 budget submission includes the first interim federal government priority goals, including one to increase federal services to entrepreneurs and small businesses, with an emphasis on start-ups, growing firms, and underserved markets.
The identified economic development programs that support entrepreneurs overlap based on both the type of assistance they provide and the characteristics of the beneficiaries they target. This overlap among fragmented programs can make it difficult for entrepreneurs to navigate the services available to them. In addition, while agencies have taken steps to collaborate more in administering these programs, they have not implemented a number of good collaborative practices we have previously identified, and some entrepreneurs struggle to find the support they need. Federal efforts to support entrepreneurs are fragmented, which occurs when more than one agency or program is involved in the same broad area of national interest. In fiscal year 2011, Commerce (8 programs), HUD (12), SBA (19), and USDA (13) administered 52 programs that could support entrepreneurial efforts. Several types of overlap—which occurs when programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries—exist among these programs, based on the type of assistance the programs offer and the characteristics of the programs’ targeted beneficiaries. Many of the programs provide entrepreneurs with similar types of assistance. The programs generally can be grouped according to at least one of three types of assistance that address different entrepreneurial needs: help obtaining (1) technical assistance, (2) financial assistance, and (3) government contracts. Many of the programs can provide more than one type of assistance, and most focus on technical assistance, financial assistance, or both:
- Technical assistance: Thirty-five programs distributed across the four agencies can provide technical assistance, including business training, counseling, and research and development support.
- Financial assistance: Thirty programs distributed across the four agencies can support entrepreneurs through financial assistance in the form of grants and loans.
- Government contracting assistance: Five programs, all of which are administered by SBA, can support entrepreneurs by helping them qualify for federal procurement opportunities.
We reviewed the statutes and regulations for each program and found that overlap tends to be concentrated among programs that provide a broad range of technical and financial assistance. Within the technical assistance category, 24 of the 35 programs are authorized to provide or fund a broad range of technical assistance both to entrepreneurs with existing businesses and to nascent entrepreneurs—that is, entrepreneurs attempting to start a business—in any industry. This broad range of support can include any form of training or counseling, including start-up assistance, access to capital, and accounting. Examples of programs in this category include Commerce’s Minority Business Centers, five of HUD’s Community Development Block Grant (CDBG) programs, SBA’s Small Business Development Centers, and USDA’s Rural Business Opportunity Grants. Eight additional programs can support only limited types of technical assistance or specific industries. For example, Commerce’s Trade Adjustment Assistance for Firms supports only existing businesses negatively affected by imports, and USDA’s Small Socially-Disadvantaged Producer Grants serves only agricultural businesses. Similarly, 16 of the 30 financial assistance programs can provide or guarantee loans that can be used for a broad range of purposes by existing businesses and nascent entrepreneurs in any industry.
Examples of programs in this category include Commerce’s Economic Adjustment Assistance programs, six of HUD’s CDBG programs, SBA’s 7(a) Loan Program, and USDA’s Business and Industry Loans. Five other programs can support loans for a narrower range of purposes or industries, while the other nine programs can support only other types of financial assistance, such as grants, equity investments, and surety guarantees. In addition, a number of programs overlap based on the characteristics of the targeted beneficiary. Most programs either target or exclusively serve one of four types of businesses: businesses in rural areas, businesses in economically distressed areas, disadvantaged businesses, and small businesses. Most of HUD’s programs that can provide support to entrepreneurs are focused on serving beneficiaries in economically distressed areas or target benefits at low- to moderate-income individuals. SBA’s 19 programs are all limited to serving small businesses, with several programs that either target or exclusively serve disadvantaged businesses and microenterprises. Eight of USDA’s 13 programs are limited to rural service areas, and four of these programs are limited to small businesses or microenterprises. Among Commerce’s eight programs, six are limited to serving beneficiaries in economically distressed areas, while two exclusively serve disadvantaged businesses. The definition of rural varies among these programs, but according to USDA—the agency that administers many of the economic development programs that serve rural areas—the term rural typically covers areas with population limits ranging from less than 2,500 to 50,000. Based on statutory language, we characterize economically distressed areas as communities with high concentrations of low- and moderate-income families or high rates of unemployment and/or underemployment. See, e.g., 42 U.S.C. § 3141; 42 U.S.C. § 5301. Likewise, based on statutory language, we characterize disadvantaged businesses as those owned by women, minority groups, and veterans, among other factors. See, e.g., 15 U.S.C. § 637(a); 15 U.S.C. § 656. The definition of small business varies among these programs, but according to SBA—the agency that administers many of the economic development programs that serve small businesses—the term small business refers to businesses that have annual receipts or total employee numbers under an agency-defined value for their specific industry. Entrepreneurs may fall into more than one beneficiary category; for example, an entrepreneur may be in an area that is both rural and economically distressed. Such entrepreneurs would be eligible, based on program authority, for more than one subset of programs. For example, a small business in a rural, economically distressed area, such as Susquehanna County, Pennsylvania, could, in terms of program authority, receive a broad range of technical assistance through at least nine programs at all four of the agencies, including Commerce’s Economic Adjustment Assistance; HUD’s CDBG/States, Rural Innovation Fund, and Section 4 Capacity Building; SBA’s SCORE and Small Business Development Centers; and USDA’s 1890 Land Grant Institutions, Rural Business Enterprise Grants, and Rural Business Opportunity Grants.
Similarly, a small business that is both minority- and women-owned in an urban, noneconomically distressed area, such as Seattle, Washington, could, in terms of program authority, receive a broad range of technical assistance through at least seven programs at three of the four agencies, including Commerce’s Minority Business Centers; HUD’s CDBG/Entitlement and Section 4 Capacity Building; and SBA’s Program for Investment in Micro-entrepreneurs (PRIME), SCORE, Small Business Development Centers, and Women’s Business Centers. (HUD’s Rural Innovation Fund program did not receive funding in fiscal year 2011 but is still active, and USDA’s 1890 Land Grant Institutions received an unspecified amount of funding through USDA’s Salaries and Expenses account rather than program appropriations.) Entrepreneurs may also be eligible for multiple subsets of financial assistance programs based on their specific characteristics. For example, a small business in a rural, economically distressed area, such as Bourbon County, Kansas, could, in terms of program authority, receive financial assistance in the form of guaranteed or direct loans for a broad range of uses through at least eight programs at the four agencies, including Commerce’s Economic Adjustment Assistance; HUD’s CDBG/States, Rural Innovation Fund, and Section 4 Capacity Building; SBA’s 7(a) Loan Program and Small Business Investment Companies; and USDA’s Business and Industry Loans and Rural Business Enterprise Grants. A small business that is both minority- and women-owned in an urban, noneconomically distressed area, such as Raleigh, North Carolina, could receive financial assistance in the form of guaranteed or direct loans for a broad range of uses through at least four programs at two of the four agencies: HUD’s CDBG/Entitlement and Section 4 Capacity Building, and SBA’s 7(a) Loan Program and Small Business Investment Companies. Five programs provide government contracting assistance to entrepreneurs, but our analysis did not identify significant overlap in the types of assistance these programs provide or the types of entrepreneurs they serve. While these five programs are all administered by SBA and can serve businesses in any industry, they tend to target specific types of entrepreneurs and provide unique types of assistance. For example, the Procurement Assistance to Small Businesses program coordinates access to government contracts for small and disadvantaged businesses with other federal agencies, while the 8(a) Business Development Program coordinates certification of eligible disadvantaged businesses for the contracts made available at these other agencies, in addition to providing business development assistance during their 9-year term. While many programs overlap in terms of statutory authority, entrepreneurs may in reality have fewer options to access assistance from multiple programs. Agencies often rely on intermediaries (that is, third-party entities such as nonprofit organizations, higher education institutions, or local governments that use federal grants to provide eligible assistance directly to entrepreneurs) to provide specific support to entrepreneurs, and these intermediaries vary in terms of their location and the types of assistance they provide.
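The eligibility patterns just described can be made concrete with a small sketch. The following is a minimal Python illustration, with simplified, hypothetical criteria encodings of our own rather than the agencies' actual eligibility rules, of how one business profile can match several overlapping program subsets at once:

```python
# Illustrative sketch of the overlap analysis described above. Programs are
# modeled as records of assistance type and simplified beneficiary criteria,
# then matched against a business profile. Program names are drawn from this
# report, but the criteria encodings are hypothetical simplifications.

PROGRAMS = [
    # (program, agency, assistance, rural_only, distressed_only, small_only)
    ("Economic Adjustment Assistance",     "Commerce", "technical/financial", False, True,  False),
    ("CDBG/States",                        "HUD",      "technical/financial", False, True,  False),
    ("Small Business Development Centers", "SBA",      "technical",           False, False, True),
    ("Rural Business Opportunity Grants",  "USDA",     "technical",           True,  False, False),
]

def eligible(profile: dict, programs: list) -> list:
    """Return the programs whose (simplified) criteria a business profile meets."""
    matches = []
    for name, agency, assistance, rural_only, distressed_only, small_only in programs:
        if rural_only and not profile["rural"]:
            continue
        if distressed_only and not profile["distressed"]:
            continue
        if small_only and not profile["small"]:
            continue
        matches.append(f"{agency}: {name} ({assistance})")
    return matches

# A small business in a rural, economically distressed area, like the
# Susquehanna County example above, matches every subset at once.
profile = {"rural": True, "distressed": True, "small": True}
for match in eligible(profile, PROGRAMS):
    print(match)
```

Run as written, the rural, distressed, small-business profile matches all four sample programs, mirroring how the Susquehanna County example qualifies under many authorities simultaneously.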
For example, while entrepreneurs seeking technical assistance in Susquehanna County, Pennsylvania, are eligible to receive this support through USDA’s 1890 Land Grant Institutions program, the closest funded intermediary is in Delaware, making it unlikely that such an entrepreneur would use services through this program. Additionally, intermediaries we spoke to in several areas said they typically provide a more limited range of services to entrepreneurs than is allowed under their statutory authority. For example, two intermediaries we interviewed in Texas that were authorized to provide a broad range of technical support to entrepreneurs through SBA’s Small Business Development Center and Commerce’s Minority Business Center programs noted that they each specialized in a narrower subset of services and referred beneficiaries to each other and to other resources for services outside their niches. Specifically, the intermediary at the Small Business Development Center noted that it provides a range of long-term services to small businesses over different phases of development, while the intermediary at the Minority Business Center noted that it focused specifically on larger minority-owned firms as well as start-up companies. Overlapping programs may also employ different mechanisms to provide similar types of support to entrepreneurs. For example, programs may support technical assistance through different types of intermediaries that provide services to entrepreneurs. USDA’s Rural Business Opportunity Grants program can provide technical assistance through local governments, nonprofit corporations, Indian tribes, and cooperatives located in rural areas, while SBA’s SCORE program relies on retired business professionals and others who volunteer their time to provide assistance. Additionally, programs may support financial assistance in the form of loans through loan guarantees, direct loans, or support for revolving loan funds. SBA’s 7(a) Loan Program provides guarantees on loans made by private sector lenders, while USDA’s Intermediary Relending Program provides financing to intermediaries to operate revolving loan funds. Additionally, some programs distribute funding through multiple layers of intermediaries before it reaches entrepreneurs. For example, HUD’s Section 4 Capacity Building program is authorized to provide grants only to five national organizations, which pass funding on to a number of local grantees, including community development corporations that may use the funding to provide technical or financial assistance to entrepreneurs. HUD officials also noted that most of their programs allow local grantees discretion over whether to use funds to support entrepreneurs or for other authorized purposes. Other programs may competitively award grants to multiple intermediaries working jointly in the same community to serve entrepreneurs. For example, Commerce’s Economic Adjustment Assistance program can provide grants to intermediaries, such as consortiums of local governments and nonprofits, which in turn provide technical or financial assistance to entrepreneurs. Although we identified a number of examples of statutory overlap, we did not find evidence of duplication among these programs (that is, instances in which two or more agencies or programs are engaged in the same activities to provide the same services to the same beneficiaries) based on available data.
However, most agencies were not able to provide the programmatic information, such as data on program users, that is necessary to determine whether duplication actually exists among the programs. The agencies’ data-collection practices are discussed at greater length later in this report. As previously discussed, 35 programs distributed across the four agencies provide technical assistance, including business training and counseling. While the existence of multiple programs in and of itself is not a problem, the delivery system of these fragmented and overlapping technical assistance programs contains many components (see fig. 1). Several entrepreneurs and various technical assistance providers with whom we spoke, including agency field offices, intermediaries, and other local service providers, told us that the system can be confusing and that some entrepreneurs do not know what services are available or where to go for assistance. As discussed earlier, federal funds typically flow from the federal agencies to different eligible intermediaries, which are third-party entities such as nonprofits or universities. These intermediaries in turn may provide technical assistance to entrepreneurs by, for example, helping them develop a business plan or put together a loan package to obtain financing. For instance, SBA’s Women’s Business Center and Commerce’s Minority Business Center programs can provide technical assistance through different intermediaries, such as the Arkansas Women’s Business Center and the University of Hawaii. Although intermediaries are the primary providers of technical assistance, agency field offices may also provide some technical assistance. For example, USDA’s Rural Development state offices may provide advice on how to complete their respective grant applications, and SBA’s district offices may discuss the different business structures available. Technical assistance providers sometimes attempt to help entrepreneurs navigate the system by referring them to other programs, but these efforts are not consistently successful. Some of these providers told us that they assess an entrepreneur’s needs to determine whether to assist the entrepreneur directly or refer him or her to another entity that could provide the assistance more effectively. For example, if an 1890 Land Grant intermediary were not able to assist an entrepreneur, it might refer the entrepreneur to SBA, USDA, or a local provider. However, such referrals are not always successful. For example, an entrepreneur we spoke with described a case in which he needed assistance with developing a business plan but was unable to receive this assistance, even after several referrals. Some technical assistance providers that we spoke with either did not appear to fully understand other technical assistance programs or thought that others did not fully understand their programs. For example, one technical assistance provider told us that some other providers were focused on more established businesses, but when we reached out to some of these providers, they said they served all entrepreneurs. This lack of understanding could prevent providers from making helpful referrals, limit opportunities to leverage other programs, and reduce the programs’ effectiveness. Programs’ Internet resources can also be difficult to navigate. Each agency has its own separate website that provides information to entrepreneurs, but these sites often direct entrepreneurs to other websites for additional information.
For example, the SBA website directs users to another website that lists the Small Business Development Centers, which in turn directs users to yet another website that provides some information on the centers’ available services. SBA, Commerce, USDA, and other agencies have recently collaborated to develop a joint website called BusinessUSA, with the goal of making it easier for businesses to access services. However, the site was not fully operational as of June 2012, and none of the entrepreneurs and almost none of the technical assistance providers we spoke with were aware of it. As of June 2012, this website listed a number of potential technical assistance programs across different federal agencies, with links to the programs’ websites. Some technical assistance providers and entrepreneurs suggested that a single source to help entrepreneurs quickly find information, instead of sorting through different websites, would be helpful. Enhanced collaboration between agencies could potentially address some of the difficulties entrepreneurs experience and improve program efficiency. In prior work, we identified practices that can help to enhance and sustain collaboration among federal agencies, which can help to maximize performance and results, and we have recommended that the agencies follow them. These collaborative practices include identifying common outcomes, establishing joint strategies, leveraging resources, determining roles and responsibilities, and developing compatible policies and procedures. In addition, GPRAMA requires agencies to describe in annual performance plans how they are working with other agencies to achieve their performance goals and relevant federal government performance goals. The agencies have taken initial steps to improve how they collaborate to provide technical assistance to entrepreneurs by, for example, entering into formal agreements with each other, but they have not pursued a number of other good collaborative practices we have previously identified, as the following examples illustrate: USDA and SBA entered into a formal agreement in April 2010 to coordinate their efforts aimed at supporting businesses in rural areas. In April 2011, USDA began to survey its state offices to help the agency gauge the level of collaboration between its field staff and SBA, as well as to identify additional opportunities to enhance collaboration. However, the agencies’ business development programs that can support start-up businesses—USDA’s Rural Business Enterprise Grant and SBA’s Small Business Development Centers—have yet to determine roles and responsibilities, find ways to leverage each other’s resources, or establish compatible policies and procedures to collaboratively support rural businesses. The Appalachian Regional Development Initiative is a formal agreement, begun in November 2010, among the Appalachian Regional Commission (which coordinates economic development activities in the Appalachian region), the four agencies, and other agencies. This agreement is intended to strengthen and diversify the Appalachian economy through better deployment and coordination of federal resources. According to officials at the Appalachian Regional Commission, the agencies did participate in a joint workshop in fall 2011 to present the locally available resources, from business development to infrastructure, and USDA is one of the commission’s stronger partners.
However, the agencies have not established joint strategies, determined roles and responsibilities, or developed compatible policies and procedures for carrying out the common outcomes outlined in their agreements at the local level, where technical assistance is provided. In August 2011, SBA and the Delta Regional Authority (which coordinates economic development activities in the Delta region) entered into a formal agreement to better deploy and coordinate resources for small businesses located in the Delta region. As part of this agreement, in April 2012 the two entities announced a joint effort to launch a program to support entrepreneurs called Operation JumpStart. Operation JumpStart is designed as a hands-on microenterprise development program intended to help entrepreneurs test the feasibility of their business ideas and plan to launch new ventures. However, their effort thus far has been limited. While they entered into a formal agreement to launch the program, this agreement did not include any determinations of specific roles and responsibilities or establish compatible policies and procedures to collaboratively support these small businesses. In June 2011, the President created the White House Rural Council to promote economic prosperity in rural areas. It is chaired by the Secretary of Agriculture and includes HUD, Commerce, SBA, and other agencies. The council is working to better coordinate federal programs in order to maximize the impact of federal investment in rural areas. Even though the council has announced a number of initiatives, such as helping rural small businesses access capital, the agencies have yet to implement many of the other good collaborative practices we have identified. In addition, while most of these agencies at the headquarters level have agreed to work together by signing formal agreements to administer some of their similar programs, the agencies generally have yet to develop compatible guidance to implement these agreements in the field. As noted previously, some intermediaries we spoke with that provide technical assistance through agency programs collaborate by referring entrepreneurs to other federal programs and agencies that they believe may better meet their needs. However, these efforts are inconsistent and do not always result in entrepreneurs obtaining the services they are seeking. OMB and the four agencies also have recently taken steps to implement GPRAMA, which requires them to coordinate better; however, implementation was still in the early phases as of May 2012 and had not yet affected how they administer their programs. Implementing additional good collaborative practices could improve how the federal government supports entrepreneurs by, for example, helping agencies make more useful referrals, meet more diverse needs of entrepreneurs, and present a more consistent delivery system to entrepreneurs. For example, collaborating agencies that agree upon roles and responsibilities can clarify who will do what, organize their joint and individual efforts, and facilitate coordinated decision making. This could help agencies not only initiate and sustain collaboration but also determine who is in the best position to support an entrepreneur based on the client’s need, which could lead to more effective referrals.
In addition, because collaborating agencies bring different resources and capacities to their efforts, they can look for opportunities to leverage each other’s resources, thus obtaining benefits that would not be available if they were working separately. Being able to leverage each other’s resources could help agencies support entrepreneurs more effectively and efficiently, because they may be able to meet more diverse needs by drawing on one another’s strengths. Finally, compatible standards, policies, procedures, and data systems could help to sustain collaborative efforts. As agencies standardize, for example, procedures for supporting entrepreneurs, they can support entrepreneurs more efficiently through more consistent service-delivery methods across agencies and programs. This could be particularly helpful for entrepreneurs who are not familiar with the federal programs. In addition, GPRAMA’s crosscutting framework requires that agencies collaborate in order to address issues, such as economic development, that transcend more than one agency, and GPRAMA directs agencies to describe how they are working with each other to achieve their program goals. As discussed previously, without more substantial collaboration, the delivery of services to entrepreneurs, particularly those who are unfamiliar with federal economic development programs, may not be as effective and efficient as possible. Agencies do not maintain information in a way that would enable them to track activities for most of their programs. Further, the agencies lack information on why some programs have failed to meet some or all of their goals. While information from program evaluations can help measure program effectiveness, agencies have conducted evaluations of only 20 of the 52 active programs since 2000. While the four agencies collected at least some information on program activities in either an electronic records system or through paper files, most were unable to summarize the information in a way that could be used to help administer the programs. Promising practices of program administration that we have identified include a strong capacity to collect and analyze accurate, useful, and timely data. According to OMB, being able to track and measure specific program data can help agencies diagnose problems, identify drivers of future performance, evaluate risk, support collaboration, and inform follow-up actions. Analyses of patterns and anomalies can also help agencies discover ways to achieve more value for the taxpayer’s money. In addition, agencies can use this information to assess whether their specific program activities are contributing as planned to agency goals. Government internal control standards also state that agencies should promptly and accurately record transactions to maintain their relevance and value for management decision making. Furthermore, this information should be readily available for use by management and others so that they can carry out their duties with the goal of achieving all of their objectives, including making operating decisions and allocating resources. This guidance calls for agencies to go beyond merely collecting information; they should systematically analyze, or track, it over time to inform decision making. For example, the agencies could track this information to identify trends in how the programs are being used in different areas of the country.
This information could help the agencies strategically target program resources to support the unique needs of each geographic area. All four agencies collect program information, but for most programs they do not track detailed, readily available information, such as the type of technical assistance their programs provide or fund, that is necessary to effectively administer the programs. For example, Commerce’s Economic Adjustment Assistance, HUD’s Section 4 Capacity Building, SBA’s PRIME, and USDA’s Rural Business Opportunity Grant programs can all support a broad range of technical assistance to various types of entrepreneurs, but the agencies are unable to provide information on the types of services provided that would be necessary to compare activities across programs. Similarly, the agencies typically do not track detailed information on the characteristics of the entrepreneurs they serve, such as whether the entrepreneurs are located in rural or economically distressed areas or their type of industry. Most of the agencies do collect detailed information on several of their programs in a way that could potentially help them administer those programs more efficiently, as the following examples illustrate: SBA collects detailed information on the type of technical assistance provided and the type of entrepreneur served for 5 of its 10 technical assistance programs, categorizing the technical assistance it provides into 17 categories of training and counseling, such as helping a business develop its business plan; all of this information is maintained in an electronic database that is accessible to agency staff. For all of its programs, USDA collects detailed information on the industry of each of the entrepreneurs it supports; in addition, USDA collects detailed information (19 categories) on how entrepreneurs use proceeds provided through five of its financial assistance programs, such as for working capital. USDA maintains this information in an electronic database, and officials stated that they can provide this type of detailed information upon request. For all eight of its technical assistance programs, Commerce collects information on the type of entrepreneur served and the entrepreneurs’ industry. And while HUD tracks limited program information on the type of support it provides to entrepreneurs, the agency collects information on other program activities and uses it to monitor program compliance; HUD staff meet quarterly with the Secretary of HUD to discuss these program data and determine changes that should be made to improve how they carry out program activities. Table 1 summarizes the type of information that agencies maintain in a readily available format that could be tracked to help administer the programs. Officials who administer these programs provided a number of reasons why they do not track detailed program information for all programs in a way that could be used for program administration purposes. For example, some officials stated that they do not rely on program information at this level of detail to make decisions about their programs. As previously discussed, many of these programs are administered by intermediaries, and these intermediaries may maintain detailed information on the services they provide. Agencies do not always require the intermediaries to forward all of this detailed information to headquarters.
Rather, an intermediary may, for example, submit summaries of the support it has provided during the reporting period in narrative form, a format that cannot be easily aggregated or analyzed. Other agency officials noted that the summary-level information they collect and maintain at headquarters is sufficient for their purposes and complies with OMB reporting guidelines. However, without tracking more detailed program information, such as the specific type of support provided and the entrepreneurs served, agencies may not be able to make informed decisions or identify risks and problem areas within their programs based on factors such as how entrepreneurs make use of program services or funding. Furthermore, agencies may not be able to understand the extent to which their programs are serving their intended purposes.

Our review found that for fiscal year 2011, a number of programs that support entrepreneurs failed to meet some or all of their performance goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers crucial information on which to base organizational and management decisions. Leading organizations recognize that performance measures can create powerful incentives to influence organizational and individual behavior, and their good practices include setting and measuring performance goals. GPRAMA requires agencies to develop annual performance plans that include performance goals for an agency's program activities and accompanying performance measures. According to GPRAMA, these performance goals should be in a quantifiable and measurable form that defines the level of performance to be achieved for program activities each year. The agencies should also be able to identify the external factors that might affect goal accomplishment and explain why a goal was not met. Such plans can help reinforce the connection between the long-term strategic goals outlined in agencies' strategic plans and the day-to-day activities of their managers and staff. We found that of the 33 programs that support entrepreneurs and set goals, 19 did not meet any of their goals or met only some of them (see table 2). These programs include Commerce's Economic Development/Support for Planning Organizations, HUD's Indian Community Development Block Grant, SBA's 504 loan, and USDA's Rural Business Opportunity Grant programs. Appendix III provides more information on fiscal year 2011 goals and accomplishments for each program with goal and accomplishment data available. Agency officials provided a number of reasons why they thought these programs did not meet their goals, including that the goals were estimates and that program funding was lower than anticipated. In addition, some agency officials could not identify any causes for the failure to meet goals, nor had they attempted to determine the specific reasons for the failures. When programs fail to meet performance goals and agencies lack a clear understanding of the reasons, the agencies may be unable to identify and address the specific parts of the programs that are not working well. Additionally, without more detailed data on the activities of individual intermediaries, determining which of these third parties are effectively administering these programs and helping meet program goals is difficult.
Making decisions without this information could result in scarce resources being directed away from programs or intermediaries that are effective and toward those that are not meeting, or are struggling to meet, their objectives.

Over the past 12 years, the agencies have conducted program evaluations of 20 of the 52 programs that support entrepreneurs, and most of these 20 programs were evaluated only once during that period. The studies that were conducted focus on a variety of areas, including customer satisfaction and the programs' economic impacts, and report an array of findings related to the effectiveness of the programs. For example, some evaluations reported the actual number of jobs produced as a result of program investments, while one evaluation reported that programs were more useful for larger firms than smaller firms. Some of the differences among the findings are tied to the varying questions the studies sought to answer and the methods used to answer them. The questions and methods employed are typically informed by the organization's purpose for pursuing the study, such as assessing program impact, identifying areas for improvement, or guiding resource allocation. Figure 2 describes the scope of each program evaluation and the findings related to program effectiveness. Appendix V provides more information on each program evaluation. Although GPRAMA does not require agencies to conduct formal program evaluations, it does require agencies to describe program evaluations that were used to establish or revise strategic goals, as well as program evaluations they plan to conduct in the future. Additionally, while not required to do so, agencies can use periodic program evaluations to complement ongoing performance measurement. Program evaluations that systematically study the benefits of programs may help identify the extent to which overlapping and fragmented programs are achieving their objectives. In addition, program evaluations can help agencies determine the reasons why a performance goal was not met and give an agency direction on how to improve program performance. For instance, 8 of the 33 programs that set goals failed to meet some or all of their performance goals and had not been evaluated by the administering agency; program evaluations could have helped these agencies understand why the goals were not met. Further, program evaluations, which examine a broader range of information than is feasible to collect on an ongoing basis through performance measures, can help assess the impact and effectiveness of a program. In July 2007, we recommended that SBA further utilize the loan performance information it already collects to better report how small businesses fare after they participate in the 7(a) program. While SBA agreed with the recommendation, the agency has not implemented it. (See GAO, Small Business Administration: Additional Measures Needed to Assess 7(a) Loan Program's Performance, GAO-07-769 (Washington, D.C.: July 13, 2007).) Without these types of information, Congress and the agencies may not be able to ensure that scarce resources are being directed to the most effective programs and activities.

In order to support entrepreneurs, federal economic development programs must be efficient and accessible to the people they are intended to serve. However, navigating these overlapping and fragmented programs can be an ongoing challenge for some entrepreneurs.
While the agencies have a number of interagency agreements in place, our review found that agency field staff do not consistently collaborate and may not be able to help entrepreneurs navigate the large number of programs available to them. We have identified practices that can help support collaboration among federal agencies and programs, and greater collaboration is one way agencies can address overlap and fragmentation among programs within and across agencies. Without enhanced collaboration and coordination, agencies may not make the best use of limited federal resources and may not reach their intended beneficiaries in the most effective and efficient manner. In addition, given the number of federal programs focused on supporting entrepreneurs, agencies need specific information about these programs to best allocate limited federal resources and to make decisions about better administering and structuring the programs. In our February 2012 report on duplication, overlap, and fragmentation, we had expected to recommend that Congress tie funding to program performance and that OMB and the agencies explore opportunities to restructure programs through such means as consolidation or elimination. However, decisions about funding and restructuring would be difficult to make without better performance and evaluation information, and making such recommendations would be premature until the agencies address a number of deficiencies. Specifically, the agencies typically do not collect information that would enable them to track the services they provide and to whom they provide those services, a practice that is not consistent with government standards for internal control. Without such information, the agencies may not be able to administer the programs in a way that results in the most efficient and effective federal support to entrepreneurs. Moreover, most of the programs that set goals did not meet them or met only some of them, and agency officials could not always identify reasons why program goals were not met. Additionally, many of these programs have not been evaluated in 10 years or more. GPRAMA requires agencies to set and measure annual performance goals, and it recognizes the value of program evaluations because they can help agencies assess programs' effectiveness and improve program performance. Agencies' lack of understanding of why programs have failed to meet goals may limit decision makers' ability to determine which programs are most effective and to allocate federal resources accordingly.

To help improve the efficiency and effectiveness of federal efforts to support entrepreneurs, we make the following recommendations:

The Director of the Office of Management and Budget, the Secretaries of the Departments of Agriculture, Commerce, and Housing and Urban Development, and the Administrator of the Small Business Administration should work together to identify opportunities to enhance collaboration among programs, both within and across agencies.

The Secretaries of the Departments of Agriculture, Commerce, and Housing and Urban Development and the Administrator of the Small Business Administration should consistently collect information that would enable them to track the specific types of assistance their programs provide and the entrepreneurs they serve, and they should use this information to help administer their programs.
The Secretaries of the Departments of Agriculture, Commerce, and Housing and Urban Development and the Administrator of the Small Business Administration should conduct more program evaluations to better understand why programs have not met performance goals and to assess the programs' overall effectiveness.

GAO provided a draft of this report to OMB, Commerce, HUD, SBA, and USDA for review and comment. We also provided excerpts of appendix IV to all of the agencies with programs listed in it for their review. Commerce, HUD, and USDA provided written comments. Commerce, HUD, and SBA also provided technical comments, which were incorporated where appropriate. OMB did not provide comments on the draft report. All written comments are reprinted in appendixes VI, VII, and VIII.

The Acting Secretary of Commerce stated that we may wish to consider the complementary role many agencies play in the field of economic development and the need for varied but complementary activities to address the complex needs of entrepreneurs. She commented that what may appear to be duplication at a higher level is in reality a portfolio of distinct services meeting unique needs. Our report notes that in some instances it may be appropriate for multiple agencies or entities to be involved in the same programmatic or policy area because of the nature or magnitude of the federal effort. We found that many of the 52 programs we examined overlap in terms of statutory authority; our report does not state that duplication exists among these programs. However, we found that most of these agencies were not able to provide programmatic information, such as data on users of the programs, that is necessary to determine whether duplication actually exists. The Acting Secretary also stated that federal agencies do successfully collaborate and forge policy partnerships, and she noted that EDA plays a key role in leading and shaping federal policy for fostering collaborative regional economic development. As noted in our report, Commerce, HUD, SBA, and USDA have taken initial steps to improve how they collaborate to provide technical assistance to entrepreneurs, and the report cites specific examples of these collaborative efforts. However, we found that the four agencies, including Commerce, have not pursued a number of other good collaborative practices we have previously identified. For example, our report states that the White House Rural Council, composed of Commerce and other federal agencies, is working to better coordinate federal programs in order to maximize the impact of federal investment in rural areas. Although the council has announced a number of initiatives, such as helping rural small businesses access capital, the agencies have yet to implement many of the other good collaborative practices we have identified, such as developing compatible guidance to implement interagency agreements. Specifically, while most of these agencies have signed formal agreements at the headquarters level to administer some of their similar programs together, they generally have yet to develop compatible guidance to implement these agreements in the field. Finally, the Acting Secretary stated that EDA agrees with our report's focus on the need for more specific information tracking and more frequent performance evaluation. She noted that EDA has established performance measures for each of its programs and that these measures were subject to thorough review and validation procedures.
She also noted that EDA routinely conducts evaluations of its programs, often limited only by a lack of resources. However, the Acting Secretary stated that, based on many of the general statements made in the report about the need for additional work in this area, efforts to monitor and track project progress seem to have been outside the scope of our report. As previously stated, we found that most of the agencies were not able to provide programmatic information for programs that can support entrepreneurs. Our report also states that Commerce does collect information on the type of entrepreneur served and the entrepreneur's industry for all eight of its programs that can provide technical assistance; however, the report notes that Commerce does not collect information on the specific type of technical assistance provided to entrepreneurs for six of these eight programs, information that is necessary to compare activities across programs. Our report provides summary information on the evaluations conducted by the agencies, including Commerce. We also found that Commerce, HUD, SBA, and USDA had not evaluated the majority of the 52 programs that can support entrepreneurs, including four of the eight programs Commerce administers. We concluded that program evaluations, when combined with efforts to collect information, can be a positive step toward greater understanding of programs' effectiveness.

HUD's Assistant Secretary for Public and Indian Housing expressed concern regarding our reference, on the highlights page of the report, to the Indian CDBG program as one of 19 economic development programs that failed to meet their entrepreneurial performance goals. She stated that the entire program may be unfairly perceived as ineffective as a result of this statement. Our report states that 33 of the 52 programs we examined set goals related to entrepreneurial assistance and that 19 of these 33 programs did not meet any of their goals or met only some of them; it does not state that these 19 programs were ineffective. We added language on the highlights page of the report to clarify that our findings were based only on each program's goals related to entrepreneurial assistance. The Assistant Secretary also stated that our report misrepresents the Indian CDBG program as an economic development program. She noted that while economic development is an eligible program activity, only 3 percent of the dollars awarded under the program since 2005 funded economic development activities. She further noted that most of the program's grants were used for community development activities, such as constructing community buildings, developing infrastructure of various types, and rehabilitating housing units on Indian lands. As noted in our report, the 52 programs we examined typically fund a variety of activities in addition to supporting entrepreneurs, and most of these programs either target or exclusively serve particular types of businesses. The Assistant Secretary noted that an independent evaluation of the Indian CDBG program was conducted in 2006; HUD had not previously provided us with this evaluation. We revised our report to state that the Indian CDBG program had been evaluated within the past 12 years. Finally, the Assistant Secretary stated that HUD supports efforts to accurately measure the performance of its programs.
She noted that HUD's Office of Native American Programs had recognized limitations in its method of projecting and measuring performance in the Indian CDBG program. She also stated that the office had begun drafting a revised form to be used at grant application and grant closeout to better collect performance measurement data and that the office was examining its data collection procedures as well as the methodology used to establish program targets. These actions are consistent with our recommendation that the agencies collect program information and use it to help administer their programs.

USDA's Under Secretary for Rural Development stated that he agreed with our report's statements that entrepreneurs play a vital role in the U.S. economy and that no duplication exists among federal programs that assist entrepreneurs. However, he disagreed with some of the other observations in our report. First, he stated that our report portrays federal programs that assist entrepreneurs broadly and does not highlight the unique characteristics of each agency, such as USDA Rural Development's specialization in rural economic development and its network of state and local area offices. Our report notes that most of USDA's 13 programs that can support entrepreneurs are limited to areas that meet a statutory definition of rural. We also include discussion, based on our outreach to participants in rural economic development, including regional commissions and authorities, of their experiences with the four federal agencies' rural economic development efforts. More importantly, however, when considering the unique characteristics of the various programs, we emphasize the need for agencies to conduct program evaluations to assess effectiveness. While the Under Secretary suggests that the rural focus and the network of state and local area offices enhance program effectiveness, USDA has not conducted evaluations to support this conclusion. Second, the Under Secretary stated that our report highlights examples where entrepreneurs may be eligible for multiple federal programs based on their specific characteristics but does not mention whether this is a pervasive or problematic issue. He stated that rural entrepreneurs may be eligible for multiple programs and that a business's unique situation dictates which program best meets its needs. Again, our report emphasizes the need for evaluations to determine the relative effectiveness of different programs serving similar purposes. Third, regarding our findings related to the information agencies collect on program activities, the Under Secretary cited a number of tools that the Rural Business-Cooperative Service (RBS) uses to identify and improve the effectiveness of its programs. As noted in this report, we determined that USDA collects detailed information on the industry of each of the entrepreneurs it supports for all of its programs, as well as detailed information (19 categories) on how entrepreneurs use the proceeds provided through five of its financial assistance programs. However, we found that over the past 12 years USDA had conducted a program evaluation for only 1 of its 13 programs that can support entrepreneurs, including USDA programs that RBS does not administer. Finally, the Under Secretary stated that the recommendations in our report are not explicit, which makes it unclear how RBS would effectively address them.
Our report does provide information on how the agencies could address our recommendations. First, we recommended that OMB, Commerce, HUD, SBA, and USDA work together to identify opportunities to enhance collaboration among programs, both within and across agencies. Our report identifies several practices that can help agencies and their offices enhance and sustain collaboration, including identifying common outcomes, establishing joint strategies, leveraging resources, determining roles and responsibilities, and developing compatible policies and procedures, among others. Second, we recommended that Commerce, HUD, SBA, and USDA consistently collect information that would enable them to track the specific type of assistance provided and the entrepreneurs they serve and use this information to help administer their programs. Our report identifies the programs for which Commerce, HUD, SBA, and USDA did and did not maintain information in a readily available format that could be tracked to help administer the programs. Finally, we recommended that Commerce, HUD, SBA, and USDA conduct more evaluations to better understand why programs have not met performance goals and to assess the programs' overall effectiveness. Our report acknowledges that program evaluations can be costly; however, it also notes that there are various methods agencies can employ to make evaluations more cost-effective, such as relying on their own data instead of purchasing data from a vendor.

We are sending copies of this report to the appropriate congressional committees and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX.

This report discusses (1) the extent of overlap, fragmentation, and duplication; their effects on entrepreneurs; and agencies' actions to address them; and (2) the extent to which agencies collect information necessary to track program activities and whether these programs have met their performance goals and been evaluated. To determine the extent of overlap and fragmentation among federal programs that fund economic development activities, we focused our analyses on 52 programs administered by the Departments of Agriculture (USDA), Commerce, and Housing and Urban Development (HUD) and the Small Business Administration (SBA) that are authorized to support entrepreneurs. Our past work indicated that programs appeared to overlap the most within these four agencies, whose missions focus on economic development. We reviewed the statutes and regulations that authorize the activities that can be conducted under each program, and we categorized the types of activities into three categories: (1) technical assistance, (2) financial assistance, and (3) government contracting assistance. Many of the programs can provide more than one type of assistance, and most focus on technical assistance, financial assistance, or both. To identify the effects of overlap and fragmentation on entrepreneurs and agencies' actions to address them, we focused on the 35 of the 52 programs that provide technical assistance because the overlap and fragmentation among these programs was significant.
We reviewed agency documents, such as interagency agreements, and conducted interviews to determine how technical assistance is provided to entrepreneurs, including the extent of agency collaboration at the local level. More specifically, we interviewed technical assistance providers, including 14 field officials from the four federal agencies, nine officials from two regional commissions, and 14 representatives of intermediaries (that is, third-party technical assistance providers); four entrepreneurs who have received federal assistance; and five state and local partners in three geographic areas. These geographic areas included both urban and rural areas. We selected the geographic areas based on the presence of an active regional commission and evidence of collaboration among at least two of the four federal agencies located within the same region. We assessed this technical assistance information against promising collaborative practices that we have previously identified. To determine the extent to which agencies collect information necessary to track program activities, we reviewed agency manuals and data collection forms that describe the information collected on program activities and the methods for analyzing and using the information. Specifically, we assessed each agency's capacity to track the specific types of entrepreneurial assistance provided to specific types of beneficiaries, as well as its ability to report this information in a readily available format at the program level. We compared these processes against standards for internal control that we have previously identified to determine how well agencies track the support they provide to entrepreneurs. To determine the extent to which these 52 economic development programs have met their performance goals, we reviewed agency documents on their fiscal year 2011 program goals and accomplishments. We also interviewed agency officials to determine the reasons why goals were not met (see app. III). To describe results from program evaluations related to the effectiveness of the 52 economic development programs that we reviewed, we requested from the four administering agencies all studies that have been conducted on these programs. Our document request resulted in 19 studies. We refined this list by focusing on studies published in or after 2000, which resulted in 16 program evaluations. Because some evaluations studied more than one program, these 16 evaluations covered 20 of the 52 programs in our review. We reviewed the methodologies of these studies to ensure that they were sound and determined that they were sufficiently reliable for our purpose, which was to report high-level findings related to the programs' overall effectiveness (see app. V). Other evaluations of these programs may exist. To provide illustrative examples of each of the nine activities related to economic development that we previously identified (see app. II), we conducted a review of the literature published in the past 5 years. This review included publications from a variety of sources, including academic journals and trade publications. These sources contained examples of how these economic activities were being conducted at the national, state, and local levels in the United States. The list of examples we developed is not meant to be comprehensive but is intended to illustrate a range of economic activities that could be funded by federal programs.
We also used these nine economic activities to identify additional federal programs that may be able to fund at least one of the activities (these programs are listed in app. IV). During previous reviews, we focused on federal programs at Commerce, HUD, SBA, and USDA because these agencies have missions focused on economic development. For this report, we identified additional federal programs that could fund the nine economic activities. While many of the agencies that administer these additional programs do not have missions that focus on economic development, their programs may be able to fund at least one of the nine activities. We reviewed information on all programs contained in the 2011 Catalog of Federal Domestic Assistance (CFDA) and provided the resulting list of programs to all of the administering agencies. This list of additional federal programs may not be comprehensive because not all agencies provide data to CFDA (see app. IV). We have previously identified incomplete or inaccurate data in the CFDA, but we chose to rely on it for our purposes in this report because it is the only source that contains information on programs from many different federal agencies. We did not assess the reliability of the CFDA data. OMB has compiled initial lists of agencies and programs that contribute to crosscutting goals, as required by GPRAMA, on performance.gov, including those related to the entrepreneurship and small business goal. However, OMB noted that these lists were not meant to capture every program with any contribution to the crosscutting goals and that it is continuing to update them. We conducted this performance audit from June 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In September 2000, we reported that there is no commonly accepted definition of economic development. Absent a common definition, we subsequently developed a list of nine activities most often associated with economic development. In general, we focused on economic activities that directly affect the overall development of an area, such as job creation and economic growth, rather than on activities that improve individuals' quality of life, such as housing and education. We previously relied on these economic activities to identify 80 economic development programs administered by the U.S. Departments of Agriculture (USDA), Commerce, and Housing and Urban Development (HUD) and the Small Business Administration (SBA), because these agencies have missions that focus on economic development; each of these programs can fund at least one of the nine economic activities. The following examples, which resulted from our review of academic journals and trade publications, illustrate a range of activities that could be supported by programs that can fund at least one of the economic activities. The examples include projects that are both publicly and privately funded, with many receiving funding from multiple sources in both sectors. They also had an explicit or implicit economic development goal, such as job creation or economic growth.

1. Supporting entrepreneurial efforts.
This activity is the focus of this report, with programs grouped according to at least one of three types of assistance that address different entrepreneurial needs: help obtaining (1) technical assistance, which includes business training and counseling and research and development support; (2) financial assistance, which includes grants, loans, and venture capital; and (3) government contracts, which involves helping entrepreneurs qualify for federal procurement opportunities. Illustrative examples of this activity include the following initiatives:

Individuals in an Iowa community formed an association of entrepreneurs to provide a broad range of services to entrepreneurs, including technical assistance in the form of mentor counseling, training sessions on various topics, and hosting conferences.

A California community provided both financial and technical support to local small businesses in order to redevelop a business district. Businesses received micro-grants—small grants of $5,000 each—and were also required to participate in free workshops designed to give them additional tools and resources to succeed in a challenging marketplace. These workshops were produced by an SBA-funded Small Business Development Center.

Iowa provided financial assistance to entrepreneurs through loan guarantees and a publicly funded limited liability corporation that could coordinate venture capital investments. The initiative was designed to increase capital levels and stimulate the creation of more local seed funds.

2. Supporting business incubators and accelerators. This activity can include all of the elements of entrepreneurial efforts, but combines these types of assistance with a facility that supports multiple businesses and may provide shared access to office space, technology, and other support services. Illustrative examples of this activity include the following initiatives:

A technology business incubator was established at a Florida university so its faculty and service partners can provide business opportunities to client companies. The facility has grown to support a number of services to assist start-up businesses, including office and laboratory space, educational programs, and networking and mentoring opportunities with other experienced entrepreneurs.

An Ohio community created a business accelerator that is designed to assist small, established companies, rather than businesses in their infancy, in becoming financially viable and creating jobs in the region. This facility includes office space, access to technology, and a variety of support services. The accelerator also collaborates with a center funded by SBA's Small Business Development Centers program and a local community college, which provide coaching and mentoring sessions, business plan reviews, workshops, training, referrals, and assistance in obtaining capital.

An economic development organization in Pennsylvania created a network of business incubators and accelerators focused on developing and commercializing technology to create high-paying, sustainable jobs. The initiative supports early-stage and established companies with funding, support services, and a network of experts in related industries and academia.

3. Constructing and renovating commercial buildings. This activity can include support for the construction and renovation of buildings established for commercial purposes, such as for retail and office space.
Illustrative examples of this activity include the following initiatives:

A community in Iowa renovated a historic building that used to be a store to attract a large technology firm's service center. The renovations were designed to meet the firm's sustainability vision and were financed by public and private sources.

A community in Arizona renovated a high school to create a new research laboratory. Further buildings were constructed in the area around this project to create a biomedical campus for both commercial and academic purposes.

A community in Iowa renovated buildings in a historic millwork district to create urban mixed-use developments, which are designed to attract both commercial and residential activity.

4. Constructing and renovating industrial parks and buildings. This activity can include support for the construction and renovation of buildings and campuses established for industrial purposes, such as for manufacturing. Illustrative examples of this activity include the following initiatives:

A public-private partnership in Nevada constructed an industrial park with new access to a freeway and energy infrastructure. The facility was zoned for heavy industry and designed to be away from population centers.

A community in Massachusetts administered the transition of a former military base into a light industrial area focused on sustainable development and attracted both small and large firms to the redeveloped area.

A public-private partnership in North Carolina created several multi-jurisdictional business parks intended to improve local economies. These parks serve a number of industrial purposes, including technology, manufacturing, distribution, and logistics. Local governments obtained funding to conduct site evaluations and certification through Commerce's Economic Development Administration and HUD's Community Development Block Grant program.

5. Strategic planning and research. This activity includes plans for recruiting new businesses or industry clusters, economic research and analyses, and regional coordination and planning across jurisdictions and sectors. Illustrative examples of this activity include the following initiatives:

Local officials in a southeastern state formed a regional economic development organization to better coordinate economic and workforce development. The organization engages in marketing and recruitment of businesses and fosters partnerships between various public- and private-sector entities in the region.

A California community developed a plan for a business district to create jobs and produce savings for businesses. The plan defined the resources, timeframes, and types of assistance needed to execute this strategy.

A regional consortium operating in areas of two southern states conducted research on its area's economic strengths and developed an action plan to leverage those strengths. The research included the identification of industry clusters that could be well suited to the area.

6. Marketing and access to new markets for products and industries. This activity may include marketing of both new and existing products and industries, facilitating access to new markets, and supporting new uses for existing products.
Illustrative examples of this activity include the following initiatives:

A publicly funded regional technology center in New York provides a range of resources for local manufacturing and technology companies, including assistance with developing sales and growth strategies, conducting marketing activities for increased market share and revenue in existing or new markets, and identifying new customers and market niches.

A regional economic development organization in North Carolina formed an energy industry cluster that included a bio-energy facility where businesses are colocated with a landfill. These businesses are able to sell what were formerly waste products in new markets, such as alternative fuels and wood pallets.

Several southern and midwestern states have leveraged federal and state funds to assist rural businesses with e-commerce strategies, including assistance reaching global markets and strengthening competitive market advantages. Both USDA and Commerce provided some funding for this initiative.

7. Supporting telecommunications and broadband infrastructure. This activity may include building, refurbishing, and enhancing infrastructure used to expand access and improve the speed and reliability of Internet access, wireless phone services, and other electronic communication methods. Illustrative examples of this activity include the following initiatives:

A public-private partnership in an Ohio city provides businesses and residents with an underground conduit network that supports multiple fiber-based systems for voice, data, and video communications, intended to provide high-speed access to the global marketplace.

A multi-state rural regional development organization in the southwestern United States coordinated the construction of a broadband Internet network intended to generate new opportunities for economic development. The initiative was funded by both private and public investments and covered a large geographic area.

Regional leaders collaborated with a state commission to expand broadband infrastructure to businesses, schools, and industrial parks in a Virginia city. The high-speed network is reported to be comparable to or faster than that of any other metropolitan area of the country, is available at a relatively low cost, and is intended to attract businesses to the area.

8. Supporting physical infrastructure. This activity includes constructing and repairing infrastructure related to (1) transportation, such as roads, airports, and rail; (2) water and sewer; (3) energy; and (4) other amenities, such as pedestrian areas, parking, and beautification projects. Illustrative examples of this activity include the following initiatives:

A community in New York is planning to renovate a business district by creating new rail service, a pedestrian mall, and green space.

A community in Ohio renovated its underdeveloped downtown area by constructing better roads and pedestrian space, improving green space, and moving power lines underground. The project was part of a plan to reduce blight and make the area more accessible to visitors.

A community in North Carolina renovated a vacant textile manufacturing space and downtown area to create a scientific research campus, facilitating this work through water line replacements, the addition of a pedestrian tunnel, and road improvements.

9. Supporting tourism.
This activity includes marketing, infrastructure improvement, planning, and research specifically related to developing and improving tourism, as well as supporting special events and festivals to attract visitors. Illustrative examples of this activity include the following initiatives: A community in Kentucky improved trails in natural areas to attract tourists for horseback riding and other recreational uses. In addition to trail improvements, the community utilized survey research, marketing, and special events to draw visitors to the area. A community in North Carolina entered into public-private partnerships to construct a cluster of tourist venues that included sports and arts museums, an arena, convention center, and performing arts venues. The community utilized a strategic plan for development and a branded name to market the area. A county in Mississippi partnered with other regional entities to market their gaming industry and other amenities as part of a broader regional campaign. This new partnership promoted region-wide tourism and focused on key markets that the area may draw visitors from. Program Name and Mission Grants for Public Works and Economic Development Facilities Supports the construction or rehabilitation of essential public infrastructure and facilities necessary to support job creation, attract private-sector capital, and promote regional competitiveness, innovation, and entrepreneurship, including investments that expand and upgrade infrastructure to attract new industry, support technology-led development, accelerate new business development, and enhance the ability of regions to capitalize on opportunities presented by free trade. Fiscal year 2011 Actual Performance Private investment leveraged–9 year totals (in millions): $3,960 Private investment leveraged–6 year totals (in millions): $1,617 Private investment leveraged–3 year totals (in millions): $1,475 leveraged (3, 6, and 9 years after award) Grants for Public Works and Economic Development Facilities (3, 6, and 9 years after award) Program Name and Mission Economic Adjustment Assistance Supports economically distressed communities in their ability to compete economically by stimulating private investment and promoting job creation in targeted areas. Current investment priorities include proposals that foster innovation and enhance regions’ global economic competitiveness by supporting existing industry clusters, developing emerging new clusters, or attracting new regional economic drivers. Fiscal year 2011 Actual Performance Private investment leveraged–9 year totals (in millions): $3,960 Private investment leveraged–6 year totals (in millions): $1,617 Private investment leveraged–3 year totals (in millions): $1,475 leveraged (3, 6, and 9 years after award) Jobs created/retained– 9 year totals: 56,058 Jobs created/retained– 6 year totals: 26,416 Jobs created/retained– 3 year totals: 14,842 (3, 6, and 9 years after award) Global Climate Change Mitigation Incentive Fund Supports economic development projects that create jobs through, and increase private capital investment in, efforts to limit the nation’s dependence on fossil fuels, enhance energy efficiency, curb greenhouse gas emissions, and protect natural systems. The program helps to cultivate innovations that can fuel “green growth” in communities suffering from economic distress. 
Private investment leveraged–9 year totals (in millions): $3,960 Private investment leveraged–6 year totals (in millions): $1,617 Private investment leveraged–3 year totals (in millions): $1,475 leveraged (3, 6, and 9 years after award) Global Climate Change Mitigation Incentive Fund (3, 6, and 9 years after award) Program Name and Mission Economic Development/Technical Assistance Provides focused assistance to public and nonprofit leaders to help in economic development decision making (e.g., project planning, impact analyses, feasibility studies). The program also supports the University Center Economic Development Program, which makes the resources of universities available to the economic development community. Economic Development/Support for Planning Organizations Provides planning assistance to provide support to Planning Organizations (as defined in 13 CFR 303.2) for the development, implementation, revision, or replacement of a Comprehensive Economic Development Strategy, short- term planning efforts, and state plans designed to create and retain higher- skill, higher-wage jobs, particularly for the unemployed and underemployed in the nation’s most economically distressed regions. Program Name and Mission exports and thereby create jobs. The program provides technical assistance to U.S. businesses that have lost sales and employment due to increased imports of similar or competitive goods and services. Technical assistance is provided through a nationwide network of eleven Economic Development Administration-funded Trade Adjustment Assistance Centers. Native American Business Enterprise Centers (NABEC) The program promotes the growth and competitiveness of businesses owned by Native Americans and eligible minorities. NABEC operators leverage project staff and professional consultants to provide a wide range of direct business assistance services to Native American tribal entities and eligible minority-owned firms. NABEC services include, but are not limited to, initial consultations and assessments, business technical assistance, and access to federal and nonfederal procurement and financing opportunities. Program Name and Mission consultants to provide a wide range of direct business assistance services to eligible minority-owned firms. Services include initial consultations and assessments, business technical assistance, and access to federal and nonfederal procurement and financing opportunities. MBDA currently funds a network of 30 MBC projects located throughout the United States. Community Development Block Grant (CDBG)/Insular Areas HUD annually allocates $7 million of CDBG funds to the Insular Areas program in proportion to the populations of the eligible territories. The program is administered by HUD’s field offices in Puerto Rico and Hawaii. The CDBG programs allocate annual grants to develop viable communities by providing decent housing, a suitable living environment, and opportunities to expand economic opportunities, principally for low- and moderate-income persons. Program Name and Mission develop viable communities by providing decent housing, a suitable living environment, and opportunities to expand economic opportunities, principally for low- and moderate-income persons. CDBG/States The primary statutory objective of the CDBG States program is to develop viable communities by providing decent housing, a suitable living environment, and opportunities to expand economic opportunities, principally for low- and moderate-income persons. 
The state must ensure that at least 70 percent of its CDBG grant funds are used for activities that benefit low- and moderate- income persons over a 1-, 2-, or 3-year time period selected by the state. CDBG/Non-entitlement CDBG Grants in Hawaii HUD continues to administer the program for the non-entitlement counties in the state of Hawaii because the state has permanently elected not to participate in the State CDBG program. The CDBG programs allocate annual grants to develop viable communities by providing decent housing, a suitable living environment, and opportunities to expand economic opportunities, principally for low- and moderate-income persons. Program Name and Mission CDBG/Section 108 Loan Guarantees Section 108 is the loan guarantee provision of the CDBG program. Section 108 provides communities with a source of financing for economic development, housing rehabilitation, public facilities, and large-scale physical development projects. It allows them to transform a small portion of their CDBG funds into federally guaranteed loans large enough to pursue physical and economic revitalization projects that can renew entire neighborhoods. CDBG/Brownfields Economic Development Initiative (BEDI) The purpose of the BEDI program is to spur the return of brownfields to productive economic use through financial assistance to public entities in the redevelopment of brownfields and enhance the security or improve the viability of a project financed with Section 108-guaranteed loan authority. CDBG Disaster Recovery Grants Grantees may use CDBG Disaster Recovery funds for recovery efforts involving housing, economic development, infrastructure, and prevention of further damage to affected areas, if such use does not duplicate funding available from the Federal Emergency Management Agency, the Small Business Administration, and the U.S. Army Corps of Engineers. The mission and goals of the CDBG Disaster Recovery Grants program may be expanded or limited per the individual appropriation that it receives each year. Permanent jobs created (tracked by low income, moderate income and total) (tracked by low income, moderate income and total) Section 4 Capacity Building for Affordable Housing and Community Development Through funding of national intermediaries, the Section 4 Capacity Building program enhances the capacity and ability of community development corporations and community housing development organizations to carry out community development and affordable housing activities and to attract private investment for housing, economic development, and other community revitalization activities that benefit low-income families. $50,000,000 Number of trainings created and provided to Community Development Corporations (CDC) Program Name and Mission programs. The program is designed to support (1) job creation through business development and expansion, (2) investment in human capital through job training and education; and (3) expanding the supply of affordable housing with access to job centers or transportation. Rural Innovation Fund grantees are selected through a competitive process. Hispanic-Serving Institutions Assisting Communities The Hispanic-Serving Institutions Assisting Communities program helps Hispanic-Serving Institutions expand their role and effectiveness in addressing community development needs in their localities, including revitalization, housing, and economic development, principally for persons of low and moderate income. 
Accredited Hispanic-Serving Institutions of higher education that provide 2- and 4-year degrees are eligible to participate in this program. For an institution to qualify as a Hispanic-Serving Institution, at least 25 percent of the undergraduate enrollment must be Hispanic students. Program Name and Mission Alaska Native/Native Hawaiian Institutions Assisting Communities The Alaska Native/Native Hawaiian Institutions program helps these institutions expand their role and effectiveness in addressing community development needs in their localities, including revitalization, housing, and economic development, principally for persons of low and moderate income. The program encourages colleges and universities to integrate community engagement themes into their curriculum, academic studies, and student activities. Indian CDBG The purpose of the Indian CDBG program is the development of viable Indian and Alaska Native communities, including the creation of decent housing, suitable living environments, and economic opportunities primarily for persons with low and moderate incomes as defined in 24 CFR 1003.4. Funds may be used to improve housing stock, provide community facilities, improve infrastructure, and expand job opportunities by supporting the economic development of the communities in some instances. Program Name and Mission The 7(a) Loan Program is SBA’s primary program for helping start-up and existing small businesses, with financing guaranteed for a variety of general business purposes. 7(a) loans are the most basic and most commonly used type of loans. They are also the most flexible, since financing can be guaranteed for a variety of general business purposes, including working capital, machinery and equipment, furniture and fixtures, land and building (including purchase, renovation and new construction), leasehold improvements, and debt refinancing (under special conditions). Jobs supported Active lending partners Underserved markets– 504 Loan Program The 504 Loan Program provides growing businesses with long-term, fixed-rate financing for major fixed assets, such as land and buildings. A typical 504 project includes a loan secured from a private- sector lender with a senior lien covering up to 50 percent of the project cost, a loan secured from a Certified Development Company (backed by a 100 percent SBA-guaranteed debenture) with a junior lien covering up to 40 percent of the total cost, and a contribution from the borrower of at least 10 percent equity. Microloan Program SBA’s Microloan Program provides small businesses with small, short-term loans for working capital or the purchase of inventory, supplies, furniture, fixtures, machinery or equipment. SBA makes funds available to specially designated intermediary lenders, which are nonprofit organizations with experience in lending and technical assistance. These intermediaries then make loans to eligible borrowers in amounts up to a maximum of $50,000. Surety Bond Guarantee Program SBA provides and manages surety bond guarantees for qualified small and emerging businesses through the Surety Bond Guarantee Program. Participating sureties receive guarantees that SBA will assume a predetermined percentage of loss in the event the contractor should breach the terms of the contract. Program for Investment in Micro- Entrepreneurs (PRIME) PRIME provides assistance to various organizations. 
These organizations help low-income entrepreneurs who lack sufficient training and education to gain access to capital to establish and expand their small businesses. Program Name and Mission Women’s Business Centers (WBC) WBCs provide long-term training as well as counseling and mentoring services. By statute, WBCs fill a gap by focusing on women who are socially and economically disadvantaged. WBCs offer classes during regular working hours as well as during the evenings and weekends to serve clients who work during the day. The WBCs often provide counseling in multiple languages. Women’s Business Centers Women’s Business Centers SCORE SCORE is a nonprofit association comprised of more than 13,000 volunteer business professionals in more than 350 chapters and on-line nationwide, dedicated to educating and assisting entrepreneurs and small business owners in the formation, growth, and expansion of their small businesses through mentoring, business advising and training. Veterans Business Outreach Centers The Veterans Business Outreach program is designed to provide entrepreneurial development services such as business training, counseling and mentoring, and referrals for eligible veterans owning or considering starting a small business. Fiscal year 2011 Actual Performance $65.65 7(j) Technical Assistance The 7(j) program provides qualifying businesses with counseling and training in the areas of financing, business development, management, accounting, bookkeeping, marketing, and other small business operating concerns. $6,502,000 Small businesses assisted 3,550 7(j) Technical Assistance 8(a) Business Development Program The 8(a) Business Development program provides various forms of assistance (management and technical assistance, government contracting assistance, and advocacy support) to foster the growth and development of businesses owned and controlled by socially and economically disadvantaged individuals. SBA assists these businesses, during their nine year tenure in the 8(a) Business Development program, in gaining equal access to the resources necessary to develop their businesses and improve their ability to compete. $58,274,000 Small businesses assisted 9,457 8(a) Business Development Program 8(a) Business Development Program disadvantaged businesses, which includes 8(a) program participants (%) Program Name and Mission small businesses that obtain HUBZone certification in part by employing staff who live in a HUBZone. The company must also maintain a “principal office” in one of these specially designated areas. Procurement Assistance to Small Businesses The program assists small businesses in obtaining federal government contracts and subcontracts. For prime contracting, statutory goal is 23%; for subcontracting, there is no statutory goal, but SBA has set a goal of 35.9%. Small Business Innovation Research Program (SBIR) The SBIR program encourages small businesses to explore their technological potential and provides the incentive to profit from its commercialization. Each year, 11 federal departments and agencies are required by SBIR to reserve a portion of their research and development funds for awards to small businesses. SBA is the coordinating agency for the SBIR program. It directs the agencies’ implementation of SBIR, reviews their progress, and reports annually to Congress on the program’s operation. 
Small Business Technology Transfer Program (STTR). The STTR program encourages small businesses to explore their technological potential and provides the incentive to profit from commercialization. Each year, five federal agencies are required to reserve a portion of their research and development funds for awards to small businesses. SBA is the coordinating agency for the STTR program: it directs the agencies' implementation of STTR, reviews their progress, and reports annually to Congress on its operation. STTR requires cooperation with a university or approved research institution.

Small Business Investment Company (SBIC) Program. The SBIC program aims to increase the availability of venture capital to small businesses. SBICs are privately owned and managed investment funds, licensed and regulated by SBA, that use their own capital plus funds borrowed with an SBA guarantee to make equity and debt investments in qualifying small businesses.

New Markets Venture Capital (NMVC) Program. The purpose of the NMVC program is to promote economic development and the creation of wealth and job opportunities in low-income geographic areas, and among individuals living in such areas, through developmental venture capital investments in smaller enterprises located in those areas. Through public-private partnerships between SBA and businesses, the program is designed to serve the unmet equity needs of local entrepreneurs through developmental venture capital investments, provide technical assistance to small businesses, create quality employment opportunities for low-income area residents, and build wealth within low-income areas.

Federal and State Technology Partnership (FAST) Program. The purpose of the FAST program is to strengthen the technological competitiveness of small business concerns in the United States by improving the participation of small technology firms in the innovation and commercialization of new technology.

International Trade. The International Trade program helps small business exporters by providing loans for a number of activities specifically designed to help them develop or expand their export activities.

Intermediary Relending Program. The purpose of the program is to alleviate poverty and increase economic activity and employment in rural communities. Under the program, loans are provided to local organizations (intermediaries) for the establishment of revolving loan funds, which are used to assist with financing business and economic development activity to create or retain jobs in disadvantaged and remote communities.

Rural Business Enterprise Grants. To assist with business development, the program may fund a broad array of activities, such as activities that support small businesses, help fund business incubators, and help fund employment-related adult education programs.

Rural Business Opportunity Grant Program. The program promotes sustainable economic development in rural communities with exceptional needs through the provision of training and technical assistance for business development, entrepreneurs, and economic development officials, and it assists with economic development planning.

Rural Microentrepreneur Assistance Program. The purpose of the program is to support the development and ongoing success of rural microentrepreneurs and microenterprises. Direct loans and grants are made to selected microenterprise development organizations.
Rural Cooperative Development Grants. The primary objective of this grant program is to improve the economic condition of rural areas through the creation or retention of jobs and the development of new rural cooperatives, value-added processing, and other rural businesses. Grant funds are provided for the establishment and operation of centers that have the expertise, or that can contract out for the expertise, to assist individuals or entities in the start-up, expansion, or operational improvement of rural businesses, especially cooperative or mutually owned businesses.

Business and Industry Guaranteed Loans. The purpose of the program is to improve, develop, or finance business, industry, and employment and to improve the economic and environmental climate in rural communities. This purpose is achieved by bolstering the existing private credit structure through the guarantee of quality loans.

Value Added Producer Grants. The purpose of this program is to assist eligible independent agricultural commodity producers, agriculture producer groups, farmer and rancher cooperatives, and majority-controlled producer-based businesses in developing strategies and business plans to further refine or enhance their products, thereby increasing their value to end users and increasing returns to producers.

Another USDA program's mission includes building upon the professional skills of rural entrepreneurs and providing outreach to promote USDA Rural Development programs in small rural communities with the greatest economic need.

Agriculture Innovation Center. This program awards grants to centers around the country to provide technical and business development assistance to agricultural producers seeking to enter into ventures that add value to commodities or products they produce.

Small Business Innovation Research. This program aims to stimulate technological innovation in the private sector; strengthen the role of small businesses in meeting federal research and development needs; increase private-sector commercialization of innovations derived from USDA-supported research and development efforts; and foster and encourage participation by women-owned and socially disadvantaged small business firms in technological innovation. (Data collection is ongoing because performance data are collected over a 2-year period.)

Biomass Research and Development Initiative Competitive Grants Program. This program awards grants to support the research, development, and demonstration of biofuels and biobased products. It is a joint effort between USDA and the U.S. Department of Energy.

Woody Biomass Utilization Grant Program. This program provides financial grants to businesses and communities that use woody biomass removed from National Forest System hazardous fuel reduction projects. Grants are awarded on a competitive basis.

We reviewed the 2011 Catalog of Federal Domestic Assistance (CFDA) and identified 95 additional federal programs that can support at least one of the nine economic activities identified in appendix II (see table 3). These programs, while not comprehensive, are in addition to the 80 economic development programs administered by Commerce, HUD, SBA, and USDA that we included in previous reports. We identified these programs based on our comparison of CFDA program descriptions with the nine economic activities, as illustrated in appendix II. However, others conducting similar analyses may come to different conclusions about which federal programs support economic development.
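A comparison like this one lends itself to a simple automated first pass before manual review. The sketch below is only an illustration of that screening idea, not GAO's actual methodology; the keyword lists, field names, and sample record are invented for the example.

```python
# Hypothetical first-pass screen: flag CFDA programs whose descriptions
# mention terms tied to any of the nine economic activities. The keyword
# lists, field names, and sample record are invented for illustration and
# are not GAO's actual coding scheme.
ACTIVITY_KEYWORDS = {
    "entrepreneurial assistance": ["entrepreneur", "business counseling", "start-up"],
    "infrastructure": ["infrastructure", "public works", "broadband"],
    # ...keyword lists for the remaining seven activities would go here
}

def match_activities(description: str) -> list[str]:
    """Return the activities whose keywords appear in a program description."""
    text = description.lower()
    return [
        activity
        for activity, keywords in ACTIVITY_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

programs = [
    {"cfda_number": "10.XXX", "description": "Grants for rural business start-up counseling."},
]
for program in programs:
    hits = match_activities(program["description"])
    if hits:  # flagged programs still require manual review of the full entry
        print(program["cfda_number"], "->", hits)
```

Any programs flagged this way would still require a reviewer to read the full CFDA description, since keyword matches alone can produce false positives.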
Additionally, 32 of the 64 federal agencies and departments listed in the CFDA did not provide descriptions for their programs within the 2011 CFDA, which prevented us from assessing whether those programs are related to economic development. Many of the agencies that administer these additional programs have missions that do not directly focus on economic development. For example, a number of the programs listed for the Department of Health and Human Services focus on health-related research but can also support at least one of the economic development activities we identified.

EDA construction program. Purpose of the study: to assess the economic impacts and federal costs of EDA's construction program and to improve upon EDA's prior 1997 study by using a more robust regression model. Data and methods used: data were taken from EDA's Operations Planning and Control System for construction projects' status and funding between fiscal years 1990 and 2005 and from Bureau of Labor Statistics county employment data. The study used ordinary and two-stage least squares regression.

Local Technical Assistance program. Purpose of the study: to evaluate the local Technical Assistance program for fiscal years 1997 and 1998 to determine the extent to which the program achieved its mission of helping communities solve specific problems, respond to economic development opportunities, and build and expand organizational capacity in distressed areas. Data and methods used: the evaluation is based on data collected from project files and data obtained from EDA headquarters and six regional offices, surveys of 121 grant recipients, and two on-site case studies in each EDA region.

University Center program. Purpose of the study: to evaluate the centers' effectiveness in meeting economic development needs, their effectiveness in targeting distressed areas, whether the distribution of centers is optimal under EDA budget constraints, duplication or overlap with other federal programs, and the leveraging of resources. Data and methods used: the study collected data from numerous sources: interviews with EDA national and regional staff; compilation of a database on University Center characteristics and activities from documents such as grant applications; interviews with Center directors; a Center client survey; and site visits.

Economic Development District (EDD) Planning program. Purpose of the study: to evaluate the overall impact of EDA's EDD Planning program, which funds the EDDs; to highlight commonalities and differences among the various EDDs; and to assess whether the program promotes regional cooperation toward making an impact on the economic development goals of the community. Data and methods used: data were gathered in several progressive stages: site visits, a general survey, additional site visits, and a second survey sent to respondents of the first survey. The data were analyzed using statistical techniques such as principal-component analysis.

CDBG programs. Purpose of the study: to find indicators of the effect of CDBG spending and track changes in these indicators; to report on neighborhoods that had received a large amount of CDBG funding; and to identify CDBG investment levels that must be complemented with additional investment to produce significant improvements in neighborhood outcomes. Data and methods used: the study classified cities into two categories: those with more detailed available data and those with less detailed available data. Programs reviewed: CDBG/Entitlement Grants; CDBG/States; CDBG/Section 108 Loan Guarantees; CDBG/Brownfields Economic Development Initiative (BEDI).
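Several of the evaluations above rely on regression methods; the EDA construction study, for example, used ordinary and two-stage least squares on county employment data. As a loose illustration of that estimation strategy only (the variables, instrument, and coefficients below are simulated, not the study's actual specification or data), a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated county-level data (illustrative only): z is an instrument,
# funding is a potentially endogenous grant amount, employment is the outcome.
z = rng.normal(size=n)
funding = 0.8 * z + rng.normal(size=n)                # first-stage relationship
employment = 2.0 + 1.5 * funding + rng.normal(size=n)

def ols(y, x):
    """Ordinary least squares of y on x, with an intercept."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, slope]

# Ordinary least squares: regress the outcome directly on funding.
print("OLS:", ols(employment, funding))

# Two-stage least squares: first regress funding on the instrument, then
# regress the outcome on the fitted (instrumented) funding values.
first_stage = ols(funding, z)
funding_hat = first_stage[0] + first_stage[1] * z
print("2SLS:", ols(employment, funding_hat))
```

Two-stage least squares is used when the funding variable may be endogenous, for example if grants flow toward counties whose employment is already changing; the instrument isolates variation in funding unrelated to the error term.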
CDBG-funded local third-party lending. Purpose of the study: to determine the results of local third-party lending programs in terms of business development and job creation benefits, and to determine whether some kinds of borrowers in certain types of neighborhoods create jobs or leverage private funds at lower cost than others. Data and methods used: telephone interviews with Economic Development directors in 460 of the 972 entitlement communities that used CDBG funds; interviews with 234 of the 750 business borrowers; and a sample of business loans to those areas, matched with Dun and Bradstreet information. The study examines various indicators of program performance, including business survival rates; rates of total and low-income job creation; retention relative to jobs planned at the time of loan origination; public costs of each job created; the amount of private funding induced (or leveraged) by program loans; and the rates at which public loan dollars substitute for private funds that would otherwise have been invested.

Indian CDBG. Purpose of the study: to measure the outcomes of Indian CDBG expenditures, including the amount of leveraged funding obtained by grantees, enhancements of partnering relationships, and the level of economic activity in the communities. Data and methods used: the study had three main data sources: (1) grant file reviews of program data, (2) a telephone survey of grant participants, and (3) case study observations.

Section 4. Purpose of the study: to evaluate the effect of the Section 4 program on improving organizational capacity. The Section 4 program was set up to support training for Community Development Corporations (CDC) and to help CDCs grow and serve their communities. Data and methods used: from 2001 through 2009, data were collected from (1) interviews of key staff at intermediaries, (2) an online survey of 360 CDCs that received Section 4 grants, and (3) interviews with leaders of 34 Section 4-assisted CDCs.

SBA entrepreneurial development programs. Purpose of the study: to assess the impact of SBA's entrepreneurial development programs on small businesses, including businesses' perceptions of the programs and their economic growth as a result of the services provided. Data and methods used: the study included a survey of clients served by SBA's entrepreneurial development programs, with a sample of approximately 6,500 observations across 2007, 2008, and 2010 (with a smaller sample in 2007). The study included a set of descriptive statistics on the rate of growth in the number of Women's Business Center clients, as well as the rates of jobs and profits at those centers, and used a regression to test the association between clients and other outcomes. The study examined the impact on the growth of firms; factors that account for success; a specific program model that predicts success; predictors of positive economic outcomes; and the effect of client demographics on outcomes.

Women's Business Centers. Purpose of the study: to examine the economic impact and effectiveness of Women's Business Centers. Data and methods used: a survey and focus group of 100 Women's Business Centers.

SBA loan guarantees. Purpose of the study: to test whether SBA loan guarantees are associated with positive firm outcomes by addressing the following questions: What happens to sales, employment, and survival before and after firms receive the guarantee? What explains the changes observed? Data and methods used: the study examined, among other things, how other factors (such as business type) affect the change in outcomes.

Customer satisfaction survey. Purpose of the study: to produce a survey intended to provide customer satisfaction indicators for the 7(a), 504, SBIC, and Microloan programs. Data and methods used: beginning from a sample of assisted firms drawn from Dun and Bradstreet, a survey was sent to approximately 3,000 firms that had received their loans 6 or 7 years before the questionnaire.

HUBZone (Historically Underutilized Business Zone). Purpose of the study: to examine the effectiveness of the HUBZone program. Data and methods used: data are from three databases: applications for HUBZone certification, the Central Contractor Registration on small businesses, and the Federal Procurement Data System for information on HUBZone businesses that have won HUBZone contracts. The report primarily used an input-output approach to estimate the impact on the HUBZone areas; in this approach, direct and indirect impacts are measured using the three databases and multipliers from the Bureau of Economic Analysis.
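The multiplier logic behind an input-output approach like the one described for the HUBZone study can be shown in a few lines. The figures below are invented for illustration; actual BEA RIMS II multipliers are specific to an industry and region:

```python
# Hypothetical input-output impact calculation: a regional multiplier
# converts direct contract spending into a total (direct plus indirect)
# impact. The contract amounts and multiplier are invented for illustration;
# BEA's actual RIMS II multipliers are industry- and region-specific.
hubzone_contracts = {
    "area_a": 4_000_000,   # direct HUBZone contract dollars (hypothetical)
    "area_b": 1_500_000,
}
output_multiplier = 1.8    # total output per dollar of direct spending (hypothetical)

for area, direct in hubzone_contracts.items():
    total = direct * output_multiplier
    indirect = total - direct
    print(f"{area}: direct ${direct:,}, indirect ${indirect:,.0f}, total ${total:,.0f}")
```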
Small Business Innovation Research Program (SBIR). The study is based on National Research Council surveys and reviews of agency materials and includes surveys as well as case studies. It assessed the program's performance in stimulating technological innovation; using small businesses to meet federal needs; increasing private-sector commercialization; and encouraging participation of minority and other disadvantaged groups.

Value Added Producer Grants (VAPG). Purpose of the study: to identify the determinants of success among USDA's VAPG recipients. Data and methods used: a survey of 739 VAPG recipients, of which 621 responded. A statistical analysis was conducted using binary logistic regression (logit) and cumulative logit models.

While SBA conducts annual impact surveys of the SBDC, WBC, and SCORE programs, for purposes of this report we focused on the most recent impact study conducted of these programs.

William B. Shear, (202) 512-8678 or [email protected].

In addition to the contact named above, Marshall Hamlett and Triana McNeil (Assistant Directors), Matthew Alemu, Ben Bolitzer, Julianne Dieterich, Cindy Gilbert, Geoffrey King, Terence Lam, Alma Laris, Marc Molino, Alise Nacson, Jennifer Schwartz, and Karen Villafana made key contributions to this report.
Economic development programs that effectively provide assistance to entrepreneurs may help businesses develop and expand. GAO focused on 52 economic development programs, with an estimated $2.0 billion in funding, at Commerce, HUD, SBA, and USDA that support entrepreneurs. In response to a statutory requirement, this report discusses (1) the extent of overlap and fragmentation, the effects on entrepreneurs, and agencies' actions to address them; and (2) the extent of tracked program information and whether these programs have met their performance goals and been evaluated. To address these objectives, GAO analyzed program information and interviewed agency officials in headquarters and selected field offices, entrepreneurs, and third-party entities, such as nonprofits, that use federal grants to provide assistance directly to entrepreneurs.

Federal efforts to support entrepreneurs are fragmented among 52 programs at the Departments of Agriculture (USDA), Commerce, and Housing and Urban Development (HUD) and the Small Business Administration (SBA). All overlap with at least one other program in terms of the type of assistance they are authorized to offer, such as financial (grants and loans) and technical (training and counseling), and the type of entrepreneur they are authorized to serve. Some entrepreneurs struggle to navigate the fragmented programs that provide technical assistance. For example, some entrepreneurs and technical assistance providers GAO spoke with said the system can be confusing and that some entrepreneurs do not know where to go for assistance. Collaboration could reduce some negative effects of overlap and fragmentation, but field staff GAO spoke with did not consistently collaborate to provide training and counseling services to entrepreneurs. The agencies have taken initial steps to improve how they collaborate by entering into formal agreements, but they have not pursued a number of other good collaborative practices GAO has previously identified. For example, USDA and SBA entered into a formal agreement in 2010 to coordinate their efforts to support businesses in rural areas; however, the agencies' programs that can support start-up businesses, such as USDA's Rural Business Enterprise Grant program and SBA's Small Business Development Centers, have yet to determine roles and responsibilities, find ways to leverage each other's resources, or establish compatible policies and procedures. Without enhanced collaboration and coordination, agencies may not be able to make the most effective and efficient use of limited federal resources.

Agencies do not track program information on entrepreneurial assistance activities for many programs, a number of programs have not met their performance goals, and most programs lack evaluations. In particular, the agencies do not generally track information on the specific type of assistance they provide or the entrepreneurs they serve, in part because they do not rely on this information to administer the programs. Rather, agencies may rely, for example, on data summaries in narrative format, which cannot be easily aggregated or analyzed. According to government standards for internal control, this information should be available to help inform management in making decisions and identifying risks and problem areas.
GAO also found that 19 programs failed to meet their annual performance goals related to entrepreneurial assistance, including USDA's Rural Business Opportunity Grants, Commerce's Economic Development/Support for Planning Organizations, HUD's Indian Community Development Block Grants, and SBA's 504 loans to finance commercial real estate. Programs could potentially rely on results from program evaluations to determine the reasons why they have not met their goals, as well as to gauge overall effectiveness. However, the agencies lack program evaluations for 32 of the 52 programs. Therefore, information on program efficiency and effectiveness is limited, and scarce resources may be going toward programs that are less effective. In addition, without more robust program information, agencies may not be able to administer programs in the most effective and efficient manner. GAO recommends that the agencies and the Office of Management and Budget explore opportunities to enhance collaboration among programs, both within and across agencies; track program information; and conduct more program evaluations. Commerce, HUD, and USDA provided written comments, and each neither agreed nor disagreed with the recommendations. However, USDA commented that the recommendations were not explicit. In the report, GAO provides specific actions that agencies can take to address each recommendation.
Radio frequency spectrum is used to provide an array of commercial and governmental services, such as mobile voice and data, air-traffic control, broadcast television and radio, and public safety activities. In the United States, responsibility for spectrum management is divided between two agencies: FCC and the Department of Commerce's National Telecommunications and Information Administration (NTIA). FCC manages spectrum use for nonfederal users, including commercial, private, and state and local government users, under authority provided in the Communications Act. NTIA manages spectrum for federal government users and acts for the President with respect to spectrum management issues. FCC is an independent regulatory agency composed of five commissioners appointed by the President and confirmed by the Senate. The commissioners delegate many of FCC's day-to-day responsibilities, including processing applications for licenses and analyzing consumer complaints, to the agency's 7 bureaus and 10 offices. According to its fiscal year 2014 budget request, FCC has just over 1,700 full-time equivalent staff in Washington, D.C., and other locations, and it requested $359 million for fiscal year 2014. Among other duties, the FCC bureaus responsible for granting spectrum licenses administer service rules that outline technical and operating requirements for spectrum licenses. Service rules may be set at the time FCC allocates spectrum into bands for a specific type of service or group of users. FCC develops rules through a process defined by the Administrative Procedure Act (APA). The APA process requires FCC to provide the public with notice of its proposed and final rules and with an opportunity to comment as the rules are developed. All comments and information gathered by FCC constitute the public record to support rulemakings and are electronically maintained in a docket. FCC maintains the dockets in an electronic system that is available to the public on its website. After spectrum is allocated and service rules are set, depending on the type of service or user, one of four FCC bureaus assigns licenses to users (see table 2). For example, the Wireless Telecommunications Bureau (WTB) develops and executes policies and procedures for the licensing of all wireless services (except wireless public safety services), and the Media Bureau administers television and radio broadcast licenses. Licenses for wireless services are assigned through competitive bidding (auctions) or administrative processes. The assignment process used depends in large part upon whether applications for licenses are mutually exclusive, that is, whether granting a license to one entity would preclude granting a license to another entity for the same portion of the spectrum in the same geographic location. For licenses that are mutually exclusive, FCC typically uses auctions to assign licenses for commercial wireless services. Auctions are a market-based mechanism used to assign a license to the entity that submits the highest bid. In this report, we refer to these licenses as market-based licenses. For licenses that are not mutually exclusive, primarily public-safety and private-wireless licenses, FCC generally assigns licenses through administrative processes. For example, FCC distributes some licenses on a first-come, first-served basis, where licenses are assigned based on when the license applications were submitted. To maximize the number of spectrum users, FCC often requires license applicants to coordinate.
License applicants retain a private third-party firm, known as a frequency coordinator, to select a frequency that minimizes interference to existing licensees. We refer to these licenses as site-based licenses. FCC uses its Universal Licensing System (ULS) database to assign and track licenses for wireless services. ULS operates as a single licensing system used by FCC and licensees to apply for, modify, cancel, and take other actions on licenses for all wireless services in a uniform manner. FCC has established buildout requirements for most wireless services, including paging, cellular, land mobile radio, and wireless communications services (see appendix II). FCC officials said that the Commission makes every effort to ensure efficient use of each spectrum license and, in line with these efforts, uses buildout requirements to help ensure that spectrum is put to use. FCC establishes buildout requirements for wireless services through its rulemaking process, and the buildout requirement for each wireless service is tailored to the particular service. FCC officials said that, when setting buildout requirements, they take into account (1) stakeholders' comments about the proposed requirements in the notice of proposed rulemaking; (2) the characteristics of the relevant spectrum in terms of propagation of the signal through space and its interaction with obstacles, which could affect the infrastructure costs for the intended coverage; and (3) the types of service in adjacent spectrum, including considerations of harmful interference with those services. Buildout requirements have three common features, which vary based on the differences in wireless services.

Type of requirement. This refers to the benchmark or outcome that a licensee must meet. There are three types of requirements. A population or geographic coverage requirement sets the percentage of the license's population or geographic area, respectively, that must be covered by service. A construction requirement requires that the system operate consistent with the rules governing the service, specified in the license, by a specific time. Lastly, a "substantial service" requirement describes the level of service that must be provided in narrative terms rather than in absolute, numeric benchmarks, such as with a coverage requirement. When FCC establishes a substantial service requirement, it sometimes includes "safe harbors" in the rulemaking documents. Safe harbors illustrate specific ways that a licensee could demonstrate substantial service for a particular wireless service, such as constructing a certain number of point-to-point links or serving populations that are outside areas served by other licensees.

Number of benchmarks. This refers to whether the licensee must complete the buildout requirement by one deadline or must complete multiple requirements in stages with corresponding deadlines. When FCC sets more than one benchmark, and thus deadline, for a license, it refers to the requirements as the interim and final requirements or as the first requirement, second requirement, and so forth.

Length of buildout period. This refers to the length of time from the grant of a license to the buildout deadline or deadlines.

We examined five wireless services, which have buildout requirements that vary in each of the features discussed above (see table 3). Licenses in these five services are subject to all three types of requirements described above.
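One way to picture these three features is as fields on a per-license requirement record. The sketch below is a loose illustration only; the types, field names, and example values are invented for exposition and do not reflect ULS's actual data model or any actual FCC rule:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RequirementType(Enum):
    POPULATION_COVERAGE = "population coverage"   # percent of population served
    GEOGRAPHIC_COVERAGE = "geographic coverage"   # percent of area served
    CONSTRUCTION = "construction"                 # system operating per the license
    SUBSTANTIAL_SERVICE = "substantial service"   # narrative standard, no numeric target

@dataclass
class Benchmark:
    deadline: date          # the buildout deadline for this benchmark
    target: float | None    # e.g., 0.33 for one-third coverage; None for narrative standards

@dataclass
class BuildoutRequirement:
    requirement_type: RequirementType
    benchmarks: list[Benchmark]   # one entry per interim/final benchmark

# Illustrative only: a license with interim and final population-coverage
# benchmarks (the dates and targets here are invented).
example = BuildoutRequirement(
    RequirementType.POPULATION_COVERAGE,
    [Benchmark(date(2000, 6, 30), 0.33), Benchmark(date(2005, 6, 30), 0.67)],
)
```

A substantial service requirement would carry a narrative standard rather than a numeric target, which is why the target field is optional in this sketch.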
For example, FCC set a substantial service requirement for 39 GHz licenses. In the rulemaking proceeding for 39 GHz, FCC stated that setting a substantial service requirement would permit flexibility in licensees' system design, as the types of possible service vary tremendously and may develop in unpredictable ways. For some services, FCC gave licensees the option to choose the type of requirement. For Broadband PCS, FCC set a population coverage requirement but also provided the alternative of meeting a substantial service requirement. In terms of the number of benchmarks, FCC set two benchmarks for the Broadband PCS (depending on the license) and 220 MHz services and set a single benchmark for the other three services we selected. The lengths of the buildout period for the five services range from 12 months (for industrial/business private land mobile radio) to 10 years (for all three market-based services). Under some circumstances, a particular license may not be subject to a buildout requirement, even if FCC established buildout requirements in the wireless-service rules for the license. For the three market-based services we examined, for example, a licensee can divide its license into smaller pieces by disaggregating or partitioning, which divides the assigned spectrum into smaller amounts of bandwidth or smaller geographic areas, respectively. In such cases, some of the resultant licenses may not have buildout requirements because the requirements are met by one of the other pieces of the original license. For example, for Broadband PCS licenses, parties seeking to disaggregate a license must decide which party will be responsible for meeting the buildout requirements or agree to share responsibility for meeting the requirements. Additionally, fixed-microwave and private land-mobile-radio licenses that authorize certain temporary or itinerant use of the spectrum, such as construction work or event planning, would not normally include a buildout requirement, since the license does not permit any long-term or ongoing operations. In other circumstances, licenses for industrial/business private land-mobile-radio and fixed-microwave site-based services can authorize the use of multiple frequencies. For industrial/business private land-mobile-radio licenses, for example, the purpose of authorizing multiple frequencies is to improve the efficiency of a multi-user system in which users can use any available channel; it is similar to a multi-lane highway in which cars can use any lane. In such cases, FCC sets a buildout requirement for each frequency authorized by the license. If a licensee fails to construct for a given frequency, FCC automatically terminates the authorization to use that frequency as of the buildout deadline. If all the frequencies for a license are terminated, FCC will terminate the license. To enforce buildout requirements for wireless services, FCC requires licensees to self-certify that they met buildout requirements and automatically terminates licenses that fail to do so, in line with FCC rules. Through ULS, the computerized system FCC and licensees use to process and track licenses for all wireless services, licensees submit notifications to inform FCC that a requirement is met. However, if the licensee does not notify FCC that it met a buildout requirement in a timely manner, FCC takes steps through ULS to terminate the license. Specifically, ULS is programmed to automatically carry out steps to terminate a license. Thirty days after a buildout deadline, ULS puts a license into "termination pending" mode. FCC releases a weekly public notice of all the market-based and site-based licenses that entered termination-pending mode. If a licensee does not file a petition for reconsideration within 30 days of the public notice demonstrating that it timely met the requirement, ULS will automatically terminate the license effective as of the buildout deadline. Once terminated, the license is then made available for re-assignment or re-auction.
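The automated sequence can be summarized as a simple two-step timeline check. The following is a hypothetical sketch of the steps just described, not FCC's actual ULS code; the record fields are invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical license record; the field names are invented for
# illustration and do not reflect ULS's actual schema.
license_record = {
    "call_sign": "WXYZ123",
    "buildout_deadline": date(2013, 1, 15),
    "notification_filed": False,   # licensee certified that it met the requirement
    "public_notice_date": None,    # set when listed on the weekly public notice
    "petition_filed": False,       # petition for reconsideration received
    "status": "active",
}

def update_status(record: dict, today: date) -> None:
    """Apply the two automated termination steps described above."""
    if record["notification_filed"] or record["status"] == "terminated":
        return
    # Step 1: 30 days after a missed deadline, the license enters
    # termination-pending mode and is listed on a weekly public notice.
    if record["status"] == "active" and today >= record["buildout_deadline"] + timedelta(days=30):
        record["status"] = "termination pending"
        record["public_notice_date"] = today
    # Step 2: 30 days after the public notice with no petition for
    # reconsideration, the license terminates, effective as of the deadline.
    if (record["status"] == "termination pending"
            and not record["petition_filed"]
            and today >= record["public_notice_date"] + timedelta(days=30)):
        record["status"] = "terminated"
        record["termination_effective"] = record["buildout_deadline"]

update_status(license_record, date(2013, 2, 20))
print(license_record["status"])  # termination pending
update_status(license_record, date(2013, 3, 25))
print(license_record["status"])  # terminated, effective as of the buildout deadline
```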
Beyond the automated steps involving the termination of licenses discussed above, FCC primarily enforces buildout requirements for wireless services by responding to information provided by licensees. In particular, licensees submit information to FCC in ULS through filings, which FCC responds to through automatic processes or staff reviews, depending on the type of filing. While FCC responds to licensee filings, it does not actively monitor licensee performance on buildout requirements for wireless services; that is, FCC does not send teams out to determine the extent of a licensee's buildout. FCC officials told us that, through ULS, they have the ability to examine outcomes related to the buildout requirements. While FCC enforces buildout requirements for individual licenses, it does not maintain a comprehensive program that monitors overall licensee compliance with buildout requirements within a service or across services.

Automatic processes. For licensee filings that do not require evaluation, FCC automates the responses to be carried out in ULS. In general, two specific filings (the required notification for site-based services and the request to cancel a license) are automatically reviewed in ULS. According to FCC officials, no formal review is needed for required notifications for site-based licenses because the licensee is certifying that it met the conditions laid out in the license. In addition, a licensee may apply to cancel a license at any time, including before or after a buildout requirement is due. Unless a licensee has other pending applications, an application to cancel a license is automatically approved.

Staff reviews. In contrast, some licensee filings require evaluation, so FCC staff must review these notifications and requests. In particular, FCC staff review is required for notifications for market-based licenses (that is, licenses assigned through auctions), as well as all requests for extensions and requests to accept late-filed required notifications.

Required notification. As with site-based services, a market-based licensee must file a required notification to notify FCC that it met its buildout requirement. FCC officials said that, compared with the specific parameters set in site-based licenses, market-based licenses tend to give licensees more flexibility in how to use spectrum or deploy service; therefore, FCC requires additional documentation, such as information on the technology used in a system, to help assess whether a licensee met its buildout requirement. FCC specifies what additional documentation is required in the rules for a wireless service or a public notice. For example, Broadband PCS licensees must submit maps and other supporting documents showing compliance with the 5- and 10-year benchmarks. FCC can also ask a licensee to send additional information if needed to determine whether the licensee met the buildout requirement.

Request for an extension. A licensee can also request an extension of the buildout deadline. A request for an extension must be filed before the licensee reaches the buildout deadline for a license.
The criteria for when extension requests may be granted are laid out in regulation. For example, FCC will not grant an extension request where delay has been caused solely by a failure to obtain financing. In general, the regulation states that FCC may grant an extension request if the licensee shows that its failure to meet a buildout deadline is due to causes beyond its control. For example, extension requests can be granted for issues such as a lack of available equipment for a band or interference problems with other spectrum users. FCC staff review each request to determine whether an extension is justified. If an extension request is granted, FCC changes the buildout deadline for the license in ULS, but if the request is dismissed, the original buildout deadline stands. If a licensee still needs additional time after being granted an extension, it can request an additional extension. Beyond individual licensee requests, FCC can grant blanket extensions when warranted for a wireless service or group of licenses. According to FCC officials, FCC has considered a blanket extension for most or all of the licenses in a service in cases where it has observed a relatively high number of extension requests. A licensee or stakeholder may also ask FCC to consider a blanket extension. FCC granted a blanket extension of the 5-year buildout requirement for 220 MHz phase II licenses. When the buildout requirements for these licenses started to come due in 2004, numerous licensees filed extension requests. Licensees and others said that there was insufficient equipment to provide voice communications in the 220 MHz band, so licensees collectively would not be able to meet their buildout requirements. In the order granting the extension, FCC stated that a 3-year extension was warranted because, among other reasons, it would provide time for the equipment market to develop.

Waiver or petition for reconsideration. Licensees can also file requests to submit late-filed required notifications in limited situations. Specifically, a licensee can file a waiver request or petition for reconsideration if it met a buildout requirement but did not file a required notification on time. For example, if the license has entered termination-pending mode, a licensee can file a petition for reconsideration within 30 days of being listed in the weekly public notice to prevent the license from terminating. In the petition, a licensee must provide the date on which it met the buildout requirement, any supporting documentation required by the rules, and the reason the notification was not filed on time. To be granted a waiver, a licensee must submit any required documentation to demonstrate how it met the buildout requirement and must meet the waiver standard set forth in FCC rules. Figure 1 provides further information on the timing of filings related to buildout requirements.

Industry associations and licensees we interviewed generally thought FCC's enforcement process works well. Many stakeholders we interviewed said that FCC's self-certification process is appropriate. For example, one expert, three licensees, and officials from an industry association indicated that the public, transparent nature of the required notifications makes self-certification an effective way to enforce buildout requirements. One expert said that self-certification is the most efficient method for FCC to collect and manage buildout information, as licensees are in the best position to gather and report this information.
Moreover, a few licensees and industry associations indicated that no other approach would be feasible given the high volume of wireless-service licenses and FCC resource constraints. Furthermore, most of the industry associations and licensees we interviewed said that FCC's ULS system is easy to use. For example, one licensee said that the mechanics of uploading information in ULS for required notifications and requests for extension are straightforward. For the five wireless services we reviewed, buildout requirements were met for many licenses, and when buildout requirements were not met, FCC generally terminated the licenses. Across the five services, we found that buildout requirements were met for 75 percent of licenses (19,582 of 26,217). For 3 of the 5 services we examined, buildout requirements were met for a majority of licenses (see table 4). For the other services, buildout requirements were met for half of the 39 GHz licenses and 19 percent of 220 MHz licenses. When licensees did not meet the buildout requirements, FCC generally proceeded as expected by terminating the licenses. A few of our selected services have a relatively high percentage of licenses terminated because licensees did not meet their buildout requirements due to special circumstances. For example, FCC terminated 21 percent of the fixed-microwave licenses we examined, but we found that a single licensee held nearly all (1,955 of 2,179) of the terminated licenses. However, buildout requirements were not met for some licenses, and the licenses were not terminated; these are the licenses in the "other outcomes" category in table 4. In two services, fixed microwave and private land mobile radio, there were mixed outcomes because licenses can authorize use of multiple frequencies. Such licenses can be terminated in part, with the remainder being active or having no requirements. For example, a licensee could meet the requirement for some frequencies but not others, resulting in the termination of some frequencies but not the license; there were 1,216 fixed-microwave (12 percent) and 13 private land-mobile-radio (less than 1 percent) licenses with this outcome. Also, a licensee could have some frequencies terminated and not have a buildout requirement for other frequencies; 66 fixed-microwave licenses (1 percent) and 17 private land-mobile-radio licenses (less than 1 percent) fit this description. The remaining licenses in the other-outcomes category are instances where a buildout requirement was not met but a license remained active after the buildout deadline, mostly for good reasons. Examining ULS license data and other FCC documents, we found that there were reasonable explanations, such as a licensee canceling a license during the automatic termination process or filing a required notification that FCC has yet to approve or dismiss, for why most of these licenses were not terminated on the buildout deadline. For example, we found that 106 fixed-microwave licenses in the other-outcomes category were canceled during the automatic termination process, that is, within 60 days of the buildout deadline. For the 220 MHz phase II licenses with other outcomes, 110 licenses were canceled during the automatic termination process, and 24 licenses had pending required notifications in ULS.
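Outcome shares like those in table 4 are straightforward to tabulate from a license-level extract. The sketch below is hypothetical; the column names, categories, and rows are invented for illustration and do not reflect ULS's actual export format:

```python
import pandas as pd

# Hypothetical license-level extract; the columns and values are invented.
licenses = pd.DataFrame({
    "service": ["Broadband PCS", "Broadband PCS", "39 GHz", "39 GHz", "220 MHz"],
    "outcome": ["met", "terminated", "met", "terminated", "other"],
})

# Share of each outcome within each service, in percent, as in table 4.
shares = (
    licenses.groupby("service")["outcome"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(shares)
```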
While there were good reasons why most licenses remained active, there were a few instances where ULS contained no explanation. For the Broadband PCS licenses, for example, ULS did not contain information explaining why 16 licenses were not terminated on the buildout deadline. The automatic termination process in ULS was not implemented until 2006, after the buildout deadlines for the 16 Broadband PCS licenses. Therefore, FCC officials said that while these licenses were terminated automatically by rule on the deadline, ULS was not updated to reflect the termination for several months after their buildout deadlines. Lastly, some licenses with buildout requirements did not reach their buildout deadlines; as a result, FCC did not have to enforce the buildout requirements for these licenses. A license could fail to reach its buildout deadline for three reasons. First, the license could be canceled by the licensee on or before the buildout deadline. A license can be canceled by a licensee, for example, if it ceases operations and no longer needs the license. Second, FCC could terminate a license before its buildout deadline if the licensee fails to fulfill a condition of the license or violates a rule. Third, the license could expire on or before the buildout deadline. For example, a buildout deadline could be extended past the license expiration date and the licensee could fail to renew the license. The number of licenses in this category varies by service, though a higher percentage of market-based licenses did not reach their buildout deadline compared to site-based licenses. As discussed above, FCC's enforcement of buildout requirements also involves granting or dismissing extension requests. Across the 5 wireless services we examined, 9 percent of licenses had an extension request. However, this percentage varied substantially across the services (see table 5). Extensions were requested for all of the 39 GHz licenses and half of the 220 MHz phase II licenses. Our analysis of FCC and licensee documents for both services indicated that buildout was largely impeded by a lack of available equipment. As shown in table 4 above, both these services also had a relatively high percentage of licenses terminated for not meeting buildout requirements. By contrast, less than 1 percent of fixed-microwave and private land-mobile-radio licenses had extension requests. One fixed-microwave licensee we interviewed said that it typically constructs the infrastructure for a new license within 2 to 4 months, so it has not needed to request an extension of the 18-month buildout requirement. We found that FCC granted most extension requests it received for the five wireless services we examined, as shown in table 6. FCC officials said that the Commission seeks to be aggressive but pragmatic in its enforcement of buildout requirements and is flexible on deadlines when it needs to be. FCC officials said that the high grant rate is due in part to high-quality extension requests. More specifically, they said a licensee typically takes steps before submitting a request, through both internal work and informal discussions with FCC staff, to determine whether it is likely to receive an extension and thus whether seeking one is worth the resources. Due to this upfront work, FCC officials said that licensees are likely to submit high-quality extension requests and refrain from submitting unjustified ones, which leads to a high percentage of granted requests. Not surprisingly, we also found that buildout requirements were more likely to be met when all extension requests for a license were granted (see table 7).
For Broadband PCS, buildout requirements were met for 84 percent of licenses with granted extension requests, compared with 40 percent of licenses with dismissed extension requests. For 39 GHz, the difference is starker: buildout requirements were met for 63 percent of licenses with granted extension requests and were not met for any of the licenses with both granted and dismissed extension requests. Two of the 39 GHz licensees we interviewed said they benefited from being granted an extension, as the additional time enabled both licensees to meet the buildout requirements for many of their licenses. Many experts, licensees, and industry associations we interviewed said that extensions can be beneficial, but some concerns were raised. Some licensees and industry associations we interviewed said that extensions of buildout requirements can provide needed flexibility and be in the public interest. Officials from one industry association said that licensees sometimes encounter an unexpected problem, such as interference with other licensees, and need more time to complete the buildout. Officials from a few industry associations said that extensions provide flexibility when a company has a problem that calls for an extension, especially if a large amount of capital has been invested in the buildout. However, a few licensees and industry associations said that FCC can be inconsistent in granting extensions or that the threshold used to grant extensions was unclear. One licensee and officials from two industry associations, for example, said that FCC has granted many extensions in the past but is now less willing to do so. With respect to the guidance on extensions, officials from one industry association said that FCC's process provides no certainty as to whether a licensee will get an extension, as they felt that the threshold FCC uses to grant extensions is not clear. FCC officials stated that they review requests for extensions on a case-by-case basis and analyze whether each request meets the legal standard necessary to receive an extension. They added that because the facts are different for each case, an outside party or licensee might observe that the outcomes for extensions were inconsistent, even when the criteria are consistent. Moreover, a few experts had negative opinions on the frequency with which extensions are granted. One expert said that FCC's extensions have set a precedent of extending buildout requirements, which has created an impression that the requirements are not necessarily enforced. Similarly, according to another expert, while the extensions that FCC grants seem reasonable, granting extensions can undermine the purpose of the buildout requirements. Though infrequent, instances involving FCC delays in reviewing filings, both required notifications and extension requests, can pose problems for licensees. As noted above, we found that 24 of the 220 MHz phase II licenses had required notifications that were waiting to be approved or dismissed by FCC. Nineteen of these pending required notifications were filed 4 or more years ago. A few licensees we interviewed said that such delays in processing required notifications for market-based wireless services can create uncertainty. For example, two licensees said that FCC delays in approving a required notification could cause problems for, or delay, the selling or leasing of a license.
For extension requests, another licensee said that delays in FCC's response create risks for licensees and can cause a licensee to expedite construction and spend additional money in case the extension is not granted and the original buildout deadline stands. FCC officials said that they aim to complete reviews as quickly as possible but do not have a target time for completing reviews. For required notifications, they added that the time needed to complete a review varies depending on the volume of licenses in a service (as many required notifications could be submitted at the same time), the staff resources available, whether a notification contains sufficient information, and what other priorities face FCC or WTB at a given time. For requests for extensions, FCC officials said reviews can take more time compared to other filings, as FCC must determine whether the request meets the criteria for an extension and often has to ask a licensee for additional information to better understand the request, among other things. For the 19 filings that have been pending for 4 or more years mentioned above, FCC noted that all these filings aim to demonstrate that the licensees are providing substantial service, and the filings remain pending due to resource limitations, workload priorities, and the novel policy, legal, and technical issues the filings present. Nearly all licensees and industry associations we interviewed said that they support FCC's having buildout requirements for wireless services because the requirements help ensure that spectrum will be put to use. In particular, all 10 licensees we spoke with said that they support having buildout requirements for spectrum licenses. Licensees mostly said that the buildout requirements are effective in preventing spectrum warehousing by making licensees accountable for putting the spectrum to use within a specified time frame. For example, one licensee elaborated that there needs to be some kind of buildout requirement in place or the potential for companies to hold spectrum without providing service would increase. Similarly, the majority of industry associations that we spoke with (6 of 9) support buildout requirements for the same reasons that the licensees cited. However, one association that opposed buildout requirements said the requirements are cumbersome for licensees. Officials at another industry association were ambivalent, saying that the effectiveness of buildout requirements depends on the type of wireless service, as the requirements make more sense for site-based wireless services than market-based wireless services. In contrast, spectrum policy experts we spoke with were more mixed in their opinions, with most experts being either ambivalent about or unsupportive of buildout requirements. Two experts who opposed buildout requirements said that there are better alternatives for promoting spectrum efficiency, such as spectrum sharing and encouraging more industry competition. Five experts said that they were ambivalent about the requirements for several reasons, including that the requirements are too weak or are undermined by FCC's granting of extensions; as previously mentioned, extensions were requested for 9 percent of licenses we examined, and FCC granted 74 percent of these extension requests. In addition, three experts said that the presence of buildout requirements can lower the auction revenues collected by FCC.
According to one expert, buildout requirements could force a licensee to deploy a network that might not be the most efficient, which could lower the licensee’s expected profits and thus willingness to pay for the license. Only one expert specifically supported buildout requirements without qualification, stating that the requirements make licensees consider whether they will put the wireless license to use before they decide to acquire it. Beyond these broader stakeholder opinions on buildout requirements, stakeholder opinions on the effectiveness of buildout requirements in meeting commonly cited goals for the requirements were more varied. Of four goals commonly cited for buildout requirements, stakeholders tended to report that buildout requirements are effective in meeting two of these goals: encouraging licensees to provide service in a timely manner and preventing warehousing of spectrum. The stakeholders had mixed views on the effectiveness of buildout requirements in meeting the other two goals: promoting innovative services and promoting services to rural areas. Encouraging licensees to provide service in a timely manner. Many of the stakeholders whom we interviewed said that the buildout requirements were effective in meeting this goal. More specifically, 9 of 10 licensees and 7 of 9 industry associations said that the requirements were effective in meeting this goal because, for example, they impose construction deadlines that require licensees to put the spectrum to use or surrender the license. In contrast, experts were mixed in their opinions, with 4 of 9 experts saying the requirements were ineffective. For example, one expert said that buildout requirements are not effective in encouraging timely service because FCC does not set buildout requirements that are overly onerous in terms of how long licensees have to meet benchmarks. Preventing the warehousing of spectrum. Many stakeholders said that the buildout requirements were effective in meeting this goal. In particular, 7 of 10 licensees and 6 of 9 industry associations said that buildout requirements are effective, while experts were mixed. One licensee we interviewed said that buildout requirements can create legitimate pressure for licensees to use the spectrum or offer it for lease or sale in the secondary market, through which FCC enables licensees to lease or sell portions of the licensed spectrum rights to others. In contrast, 4 of 9 experts said that buildout requirements are ineffective or neither effective nor ineffective in helping FCC meet this goal. For example, one expert said that despite having buildout requirements, FCC’s enforcement gives licensees an opportunity to take their time in putting the licensed spectrum to use because the licensees can apply for waivers and extensions and the buildout requirements themselves are not very strict. However, licensees and experts we interviewed generally said they did not believe that spectrum warehousing is a major problem, in their experience. One licensee, for example, said that it does not have an incentive to warehouse spectrum because of high consumer demand for its services. Promoting the provision of innovative services throughout the license areas. All three groups of stakeholders were mixed in their views on the effectiveness of buildout requirements in promoting innovative services. 
Licensees and industry associations mostly reported that they thought buildout requirements were neither effective nor ineffective in meeting this goal, and a majority of the experts said that buildout requirements are ineffective in promoting innovative services. For example, three licensees said that innovative services are not directly related to buildout requirements because market forces, such as consumer demand and competition, are what drive innovation. Moreover, two licensees and three experts said that buildout requirements could actually be counterproductive by causing licensees to use older or less innovative technologies to deploy service more quickly.

Encouraging the provision of services to rural areas. Stakeholders were mixed in their views about whether buildout requirements help promote services in rural areas. For example, four licensees said that the buildout requirements were effective while four said they were ineffective, and five experts said that the requirements were effective while two experts said they were ineffective. The licensees and industry association representatives noted that building out to rural areas is difficult and expensive, and the high costs associated with construction in these areas are rarely recovered by providing service to sparsely populated areas with few customers. A few stakeholders across all three groups added that geographic coverage requirements are more effective in promoting rural service than population coverage requirements. For example, one industry association said that geographic coverage requirements better promote rural buildout, particularly for licenses covering large geographic areas; if a large geographic-area license has a population coverage requirement, a licensee might be able to meet the requirement by serving the relatively densely populated areas and leaving the rural areas unserved.

While buildout requirements are generally supported, some stakeholders we spoke with said that the requirements were not effective in meeting some of the commonly cited goals, in particular promoting innovative services and services to rural areas, as discussed above. Therefore, 22 of 28 stakeholders we spoke with identified changes that they said could improve the effectiveness of buildout requirements for wireless services in meeting the goals identified above, in particular for market-based services. The most frequently mentioned changes or enhancements include the following:

More clarity. Four licensees, two industry associations, and three experts said additional clarity could make buildout requirements more effective. The four licensees, for example, reported that more clarity in the service rules could allow both FCC and licensees to better meet goals by removing uncertainty. Specifically, stakeholders said more clarity could be provided through greater detail about what could constitute substantial service or about the engineering parameters licensees should use in their required notifications. According to one licensee, any clarification of a required process or rule is helpful, and for buildout requirements, more specific guidance might help eliminate some of the back and forth needed for FCC to approve a required notification. FCC officials said that the Commission sets specific requirements for waivers and extension requests, as well as specific buildout requirements, and that it reviews licensee notifications and requests on a case-by-case basis.

More robust and transparent enforcement.
Three industry associations, two experts, and a licensee said that the self-certification process, while efficient and appropriate, could be bolstered by more visible enforcement, such as using spot checks to verify licensees' required notifications. For example, one expert said that to ensure the effectiveness of the buildout requirements, FCC could conduct random spot checks to see that licensees are providing services upon meeting the buildout requirement. Similarly, to increase transparency, an industry association said that FCC could better educate licensees about the administrative aspects of filing requests or notifications and then conduct spot checks and issue fines if licensees are not providing services. These licensees and industry associations also said that more consistent enforcement entails more transparency and consistency in FCC's processes for granting extensions and waivers.

Different penalties. Some stakeholders (two licensees and two experts) said that FCC could change the penalty for not meeting a buildout requirement. Many licensees, industry associations, and experts said that the penalty of license termination was too strict. Specifically, officials from one industry association said that, with termination, licensees face the loss of all their investment in constructed infrastructure if they have to surrender the license for not meeting buildout requirements. Some of these stakeholders favored a use-it-or-share-it approach, whereby a licensee would have to make spectrum for which it did not meet a buildout requirement available to others through leasing or sharing. According to one expert, use it or share it could create opportunities for others to benefit more immediately from spectrum that is lying fallow, even if only temporarily, or could provide a stronger incentive for a licensee to make secondary market arrangements to put spectrum to use.

More opportunities to align licensees' goals with buildout requirements. Two licensees and one industry association noted that FCC's buildout requirements do not necessarily align with a licensee's business plans, particularly for market-based services. Two licensees said FCC could provide more upfront feedback to licensees on whether a licensee's plan to meet a buildout requirement would be accepted by the Commission. These licensees said that through such early interaction with FCC, licensees might invest in building their systems and meet buildout requirements in tandem, rather than potentially having to invest in additional infrastructure simply to meet a buildout requirement to save a license. For example, one licensee said it currently has to consider two parallel tracks (its business plan and FCC buildout requirements) when building a system, a situation that can increase costs and make buildout less efficient. The licensee believed it could construct its system more efficiently if these two tracks could be brought closer together early in the license term. FCC officials told us that licensees can engage with FCC to obtain informal guidance before filing notifications to discuss the sufficiency of their plans and avoid potential problems. One industry association also said that FCC could provide licensees with additional ways to demonstrate meeting a buildout requirement, beyond a single requirement or safe harbor for substantial service, to distinguish licensees that are warehousing spectrum from those that are working to put it to use.
In addition to changes to buildout requirements, stakeholders from each of the three groups we spoke with identified alternatives to buildout requirements that they said could better meet the four commonly cited goals, including provision of innovative services and service to rural areas. We also identified additional support for stakeholder-identified tools and other tools through a review of our previous reports on spectrum management and comments filed in response to FCC proceedings related to buildout requirements for spectrum licenses. Some alternative tools could be used in place of buildout requirements, and others could complement the buildout requirements with the intent to better meet the goals and promote efficient use of spectrum. Table 8 summarizes the alternatives to buildout requirements that stakeholders identified as tools that could better meet each of the four commonly cited goals.

Secondary markets. FCC enables licensees to make transactions through secondary markets, such as leasing spectrum rights to other licensees; this process facilitates licensees' selling or leasing unneeded spectrum rights on terms they negotiate themselves. Three licensees and two experts said that these transactions could promote the provision of timely service by allowing for accelerated transactions to a licensee that wants to deploy wireless services sooner, without the additional time needed for FCC review of the transaction. Some of these licensees and experts also said that this alternative may help better meet the goal of preventing spectrum warehousing by allowing a licensee that does not want to deploy service in the spectrum in the near future to recover costs by leasing or selling the spectrum rights to others that want to put the spectrum to use more immediately.

Reliance on market forces. A few licensees and experts said that relying more on market forces could help spur competition and ultimately encourage licensees to provide timely and innovative services. FCC already relies on market forces to some degree by, for example, auctioning licenses. Three licensees and two experts we spoke with said that FCC could further implement or bolster policies to promote competition that would better motivate licensees to build or expand their networks and provide services more quickly than buildout requirements alone. Furthermore, one expert said that FCC already promotes competition among existing licensees, as well as encouraging entry by others through the auctioning process, so buildout requirements are not needed in settings where there is sufficient competition to encourage licensees to acquire and put spectrum to use.

Flexible-use licenses. With traditional licenses, the use or service is limited to the specific terms of the license (e.g., broadcast a television signal in a specific geographic market), but flexible-use licenses allow for a wider array of uses without the licensee having to seek additional FCC authorization. Therefore, several stakeholders we spoke with said that FCC could do more to allow flexible-use licenses and that this might speed up wireless service deployment and help meet the goal of promoting timely service. FCC officials said that the Commission does propose to issue flexible-use licenses when circumstances permit but that flexible-use licenses are not appropriate for allowing certain services in specific bands, such as broadcast services in a mobile wireless band, or when technical limitations of a band limit flexibility.
According to stakeholders, flexible-use licenses could also help promote innovative services. One licensee, for example, said that flexible-use licenses allow it to update its networks and technology without changing bands or asking FCC to modify licenses.

Spectrum sharing. Through our interviews and review of two FCC proceedings, three experts and one licensee reported that enabling spectrum sharing may encourage licensees to put licensed spectrum to use while allowing them to increase efficiency in their business plans. This cooperative use of spectrum, with multiple users agreeing to access the same spectrum at different times or locations, could allow licensees to provide service more quickly and help prevent warehousing of spectrum. For example, one licensee said that having a spectrum-sharing policy would be a good supplement to buildout requirements; specifically, if a licensee does not meet its buildout requirements, FCC could require that the licensee negotiate sharing or leasing for the unused part of the license to help put it to use in a timely manner. Similarly, one expert said sharing could enable others to put spectrum to use in cases where the licensee is not ready to use the spectrum, thus putting the spectrum to use more quickly, and might help discourage licensees from warehousing spectrum.

Smaller license areas. Through our interviews and review of FCC proceedings, two licensees and two experts we spoke with said smaller geographic-area licenses could better encourage service to rural areas. One licensee said that for market-based services, auctioning smaller-sized licenses could allow entities, such as rural wireless licensees, to bid on the specific areas they want to serve. One industry association commented that a larger inventory of smaller, and likely more affordable, licenses might attract the small and rural providers that best know and can best serve rural areas. Also, as previously discussed, another industry association commented that licensing exclusively by larger blocks could disfavor competition and discourage deployment of services in rural and less densely populated areas.

Subsidies. Through interviews, two licensees and two experts said that using subsidies would be a more effective way to help promote service to rural areas. One example of a subsidy is the Universal Service Fund, through which FCC establishes programmatic goals and distributes a subsidy to assist in efforts to increase nationwide access to advanced wireless services. Two experts said that the buildout requirements are vague and represent a crude and untargeted approach to addressing a specific goal like promoting services to underserved and unserved areas in rural locations. For example, one expert said that while a specially designed buildout requirement may be effective in prompting a licensee to provide service to rural areas, using a subsidy to attract an entity that is willing to provide service would be a more direct and effective way to meet the goal. One licensee said that buildout requirements have a limited ability to promote service to rural areas because it is often not economical for a licensee to build a system in sparsely populated areas; in these cases, subsidies can better encourage licensees to serve these areas.

Spectrum usage fees. A few licensees we interviewed said spectrum usage fees may be a good alternative to buildout requirements to help prevent spectrum warehousing, particularly for licenses not obtained through auctions.
Usage fees could help encourage licensees to use spectrum more efficiently or pursue sharing opportunities once they bear the opportunity cost of letting licensed spectrum sit idle. For example, one licensee said that if spectrum is made available for free, a licensee may have less incentive to put it to use or use it efficiently compared to a licensee that bought its spectrum at auction. For this reason, a few licensees noted that when FCC began using auctions to assign spectrum, there was a debate about whether buildout requirements were needed for auctioned licenses, since those who paid to acquire spectrum have demonstrated their commitment to use the spectrum by paying for it.

We provided a draft of this report to FCC for review and comment. FCC provided technical comments that we incorporated throughout the report as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman of the Federal Communications Commission and the appropriate congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This report examines Federal Communications Commission (FCC) buildout requirements for wireless services and the efficient use of spectrum. In particular, this report provides information on (1) the buildout requirements established by FCC for spectrum licenses for wireless services, (2) the extent that FCC follows its process to enforce buildout requirements for wireless services, and (3) stakeholder opinions on the extent that goals for buildout requirements have been met.

To describe FCC buildout requirements for wireless services, we reviewed FCC regulations and guidance on buildout requirements for services that use spectrum. We also interviewed FCC officials to understand which services have buildout requirements, the general process used to set buildout requirements for a service, and what factors FCC considers when setting buildout requirements for a service. According to FCC, the Wireless Telecommunications Bureau (WTB) is responsible for granting and monitoring licenses for wireless services that use spectrum. For this review, we focused on guidance and processes related to buildout requirements for licenses for wireless services, which amount to nearly 2 million licenses.

To describe FCC's enforcement process, we reviewed FCC regulations and guidance to determine the steps FCC takes to monitor and enforce buildout requirements for wireless services. In addition, we interviewed FCC officials to learn about different parts of the enforcement process to determine licensee responsibilities and actions, FCC responsibilities and actions, and which FCC actions are automated in the Universal Licensing System (ULS) licensing database. To examine FCC's enforcement of buildout requirements for wireless services, we selected five wireless services and analyzed data on them from FCC's ULS database.
We selected the five services to ensure variety in type of service or use, type of buildout requirement (e.g., population coverage or substantial service), how licenses were assigned (e.g., auctions), and the number of licenses in the service. We also considered recommendations from FCC officials and other interviewees when selecting among wireless services. As a result, we selected three market-based services (broadband personal communications service (PCS), 220 megahertz (MHz) phase II, and 39 gigahertz (GHz)) and two site-based services (industrial/business private land mobile radio below 700 MHz and fixed microwave). Tables 1 and 3 in the report provide information on how each selected service aligns with the criteria we used to select wireless services. The results of the data analysis for these five services are not generalizable to other wireless services.

For the five selected wireless services, we analyzed data for licenses that would have buildout requirements due on or before December 31, 2012. We chose December 31, 2012, to allow sufficient time after the buildout deadline for any licensee or FCC action, such as FCC review of a licensee's notification that it met the requirement, to occur and be entered in ULS. For the market-based services, we included all licenses that would have a buildout requirement on or before December 31, 2012, based on the auction dates for the licenses and the length of the buildout requirement in regulation. For example, we included broadband PCS licenses awarded at auctions between 1995 and 2007; since these licenses have a 5-year buildout requirement, the buildout deadlines fell between 2000 and 2012.

For site-based services, we similarly sought to include licenses that would have buildout requirements due on or before December 31, 2012. However, due to the high volume of licenses in these two services, we limited our analysis to new licenses that would have a buildout requirement during calendar year 2012; that is, we did not include modifications to existing licenses, for which FCC also includes buildout requirements. Private land-mobile-radio licenses below 700 MHz, for example, have a 12-month buildout requirement, so we included new licenses granted during calendar year 2011 that would have a buildout deadline during calendar year 2012. Appendix III contains additional information on the number and type of licenses in each wireless service included in our analysis, such as the auction numbers and dates for market-based services.

For each service, we analyzed license data to determine the outcomes of buildout requirements and examine FCC's enforcement of buildout requirements. We used license and application data from the ULS public access downloads as of September 1, 2013. In particular, we examined (1) the number of licenses that did and did not have buildout requirements; (2) the outcomes for licenses that had buildout requirements; (3) the number of licenses with requests for extensions; and (4) for licenses with requests, whether the request was granted or dismissed.
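As an illustration of the license-selection rule just described, the short Python sketch below computes a license's first buildout deadline from its auction date and tests it against the December 31, 2012, cutoff. The sketch is purely illustrative: the function and field names are ours, and only the 5-year broadband PCS term is stated in this report; terms for other services would be filled in from the applicable FCC service rules.

```python
from datetime import date

# Buildout term, in years, by service. Only the broadband PCS term (5 years)
# is stated in this report; other services' terms would come from FCC rules.
BUILDOUT_TERM_YEARS = {"broadband PCS": 5}

# Latest buildout deadline included in the analysis.
CUTOFF = date(2012, 12, 31)

def buildout_deadline(auction_date: date, service: str) -> date:
    """Return the first buildout deadline: auction date plus the service's term.

    Simple year arithmetic; a February 29 auction date would need special
    handling, which this sketch ignores.
    """
    term = BUILDOUT_TERM_YEARS[service]
    return auction_date.replace(year=auction_date.year + term)

def include_license(auction_date: date, service: str) -> bool:
    """Include the license if its deadline falls on or before the cutoff."""
    return buildout_deadline(auction_date, service) <= CUTOFF

# Example: a broadband PCS license auctioned in mid-2007 has a 2012 deadline
# and is included; one auctioned in 2008 would fall outside the window.
assert include_license(date(2007, 6, 1), "broadband PCS")
assert not include_license(date(2008, 3, 1), "broadband PCS")
```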
With respect to the outcomes for licenses that had buildout requirements, we examined the number of licenses in each wireless service that:
● met the requirement, including whether the requirement was met on time or late;
● did not meet the requirement and was terminated;
● did not meet the requirement and remained active after the buildout deadline; or
● did not reach the buildout deadline, meaning that the license was (1) canceled on or before the buildout deadline, (2) otherwise terminated before the buildout deadline, or (3) expired on or before the buildout deadline.

For licenses with outcomes that did not appear to align with FCC's enforcement process, we conducted additional research to understand the circumstances for these licenses. Specifically, these were (1) licenses that met the buildout requirement late and (2) licenses that did not meet the requirement and were not terminated on the buildout deadline. For these licenses, we reviewed additional information using ULS's online license search to determine whether FCC followed its enforcement processes. We also asked FCC officials about the general circumstances surrounding these licenses. Overall, a small percentage of licenses had one of these outcomes.

Since some licenses had more than one buildout requirement and thus could have more than one outcome, we developed rules to classify these licenses. For market-based licenses with more than one buildout requirement on or before December 31, 2012, we generally classified the license's buildout outcome by the outcome for the second buildout requirement. For example, if a first buildout requirement was met but the license was canceled before the second buildout requirement, we classified the outcome as "canceled on or before buildout requirement." However, to be classified as "met," both the first and second requirements had to be met for a license.

For the site-based services, a license can authorize multiple frequencies, and each frequency could have a buildout requirement. For each site-based license, we assessed the outcomes for all frequencies and used this information to report an outcome for the license. For example, if the buildout requirements were met for all frequencies, we classified the outcome as "met." If the buildout requirements were met for some frequencies but not for others (meaning that those frequencies were terminated), we classified the license as "some met/some not met."

When examining extension requests for licenses in all five services, we assessed whether any extension request was filed for the license. Based on interviews with FCC officials, as well as our review of system documentation and electronic data testing, we determined that these data were sufficiently reliable for our purposes. Appendix III contains detailed results of the analysis of ULS data for each of the five selected wireless services.

To assess the effectiveness of FCC's enforcement, we also selected and interviewed a sample of industry associations and licensees. We conducted semi-structured interviews with industry associations and licensees to gather their opinions on FCC's enforcement process, including the clarity of FCC guidance on buildout requirements, the timeliness of FCC responses to licensee requests and other applications, and their experiences with ULS. We selected both industry associations and licensees to cover the five selected wireless services, and we further selected licensees to ensure variety in licensee type and size. The opinions from these industry associations and licensees are not generalizable.
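For reference, the classification rules described above for licenses with multiple buildout requirements reduce to a small amount of logic. The Python sketch below is illustrative only: the outcome labels are shorthand, and the handling of the edge case the rules leave implicit (a second requirement met after a first was not) is our assumption, not a GAO or FCC definition.

```python
def classify_market_based(first: str, second: str) -> str:
    """Classify a market-based license with two buildout requirements.

    Per the rules above, the license takes the outcome of its second
    requirement, except that it counts as "met" only if both were met.
    """
    if first == "met" and second == "met":
        return "met"
    if second == "met":
        # Edge case not spelled out in the rules; label is our placeholder.
        return "first requirement not met"
    # e.g., "terminated" or "canceled on or before buildout requirement"
    return second

def classify_site_based(frequency_outcomes: list[str]) -> str:
    """Roll per-frequency outcomes up to one license-level outcome."""
    if all(o == "met" for o in frequency_outcomes):
        return "met"
    if any(o == "met" for o in frequency_outcomes):
        return "some met/some not met"
    # All frequencies unmet (terminated); label is our shorthand.
    return "not met"

# Examples:
outcome = classify_market_based("met", "canceled on or before buildout requirement")
assert outcome == "canceled on or before buildout requirement"
assert classify_site_based(["met", "terminated"]) == "some met/some not met"
```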
To gather stakeholder opinions on the effectiveness of buildout requirements, we selected and interviewed a sample of industry stakeholders, including spectrum policy experts, industry associations, and licensees. To determine the goals for buildout requirements, we reviewed relevant statutes, FCC documents from recent rulemakings, and other FCC budget and performance documents to identify frequently cited goals for buildout requirements. In addition to the general goal of promoting efficient or productive use of spectrum, we identified the following four goals:
● encouraging licensees to provide service in a timely manner,
● promoting the provision of innovative services throughout the license areas,
● encouraging provision of services to rural areas, and
● preventing the warehousing of spectrum.

We conducted semi-structured interviews with stakeholders to gather their opinions on the extent that buildout requirements meet each of the four goals, as well as reasons or examples to support these opinions (see table 9). We also asked what changes, if any, could make buildout requirements more effective and what alternative tools FCC could use to more directly or better meet the four goals. To select experts, we included individuals based on participation in recent GAO reviews on spectrum policy, publications on spectrum policy, and recommendations from other interviewees. In particular, we sought to interview individuals who appeared at least twice across the criteria or participated in at least two recent GAO reviews. We selected industry associations and licensees as described above. The views of the selected stakeholders we interviewed are not generalizable.

To supplement these interviews, we reviewed our previous reports on spectrum management and filings in two recent FCC proceedings that sought comments on buildout requirements for wireless services. We identified and reviewed filings in the dockets for the following two proceedings: 07-293, In the Matter of Establishment of rules and policies for the Digital Audio Radio Satellite Service in the 2310-2360 MHz Frequency Band, and 12-268, In the Matter of Expanding the Economic and Innovation Opportunities of Spectrum Through Incentive Auctions. For each proceeding, we reviewed filings made by licensees, industry associations, and other companies and associations and summarized opinions on whether buildout requirements are effective and any changes that could be made to improve buildout requirements.

Tables 24 to 37 provide results from our analysis of FCC's ULS data for licenses for five selected wireless services. Among other things, these tables tabulate the number of licenses by auction, buildout requirement outcomes (e.g., whether a license met the buildout requirement), and extension request outcomes. Appendix I contains information on the scope and methodology of this analysis.

Mark L. Goldstein, (202) 512-2834 or [email protected]. In addition to the contact person named above, Michael Clements, Assistant Director; Richard Brown; Stephen Brown; Andy Clinton; Leia Dickerson; Mya Dinh; Bert Japikse; Joanie Lofgren; Joshua Ormond; Amy Rosewarne; Hai Tran; and Elizabeth Wood made key contributions to this report.
Radio frequency spectrum is a natural resource used to provide a variety of communication services, such as mobile voice and data. The popularity of smart phones, tablets, and other wireless devices among consumers, businesses, and government users has increased the demand for spectrum. FCC takes a number of steps to promote efficient and effective use of spectrum. One such step is to establish buildout requirements, which specify that an entity granted a license must begin using the assigned spectrum within a specified amount of time or face penalties, such as loss of the license.

GAO was asked to review buildout requirements and the efficient use of spectrum. This report (1) describes the buildout requirements FCC established for wireless services, (2) assesses the extent to which FCC follows its process to enforce buildout requirements, and (3) examines stakeholder opinions on the extent that commonly cited goals for buildout requirements have been met. GAO reviewed FCC regulations and guidance on buildout requirements and examined FCC license data on outcomes of buildout requirements for 5 out of about 45 wireless services, selected to ensure variety in type of use and buildout requirement, among other criteria. GAO also interviewed FCC officials, commercial spectrum licensees, industry associations, and spectrum policy experts.

GAO is making no recommendations in this report. FCC reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate.

The Federal Communications Commission (FCC) has established buildout requirements, which require a licensee to build the necessary infrastructure and put the assigned spectrum to use within a set amount of time, for most wireless services, including cellular and personal communication services. FCC tailors the buildout requirements it sets for a wireless service based on the physical characteristics of the relevant spectrum and comments of stakeholders, among other factors. Therefore, buildout requirements vary across wireless services. For example, a buildout requirement can set the percentage of a license's population or geographic area that must be covered by service or can describe the required level of service in narrative terms rather than numeric benchmarks. Buildout requirements also vary by how much time a licensee has to meet a requirement and whether it has to meet one requirement or multiple requirements in stages.

FCC's enforcement process for wireless-service licenses with buildout requirements primarily relies on information provided by licensees, and FCC followed its process for the five wireless services GAO reviewed. Specifically, FCC requires licensees to self-certify that they have met buildout requirements. If a licensee does not do so, FCC automatically terminates the license. Some stakeholders GAO interviewed said that self-certification is an effective way for FCC to enforce buildout requirements because it is public and transparent. GAO examined FCC license data for five wireless services and found that buildout requirements were met for 75 percent of those licenses; FCC generally terminated the licenses whose requirements were not met. As part of enforcement, FCC also grants or dismisses licensees' requests to extend the deadline for meeting a requirement. FCC may grant an extension if the licensee shows that it cannot meet a deadline due to causes beyond its control, such as a lack of available equipment.
For the five wireless services examined, GAO found that extensions were requested for 9 percent of licenses and that FCC granted 74 percent of these requests. FCC officials said that the Commission seeks to be aggressive but pragmatic when enforcing buildout requirements, including being flexible on deadlines when needed. Some licensees and industry associations GAO interviewed said that extensions can provide needed flexibility when unexpected problems occur. Some concerns were raised, however, that granting extensions can undermine buildout requirements by creating an impression that they will not be strictly enforced.

Stakeholders GAO interviewed generally said that buildout requirements are effective in meeting two of four goals commonly cited in FCC documents and statute: encouraging licensees to provide services in a timely manner and preventing the warehousing of spectrum. Stakeholders had mixed views on the effectiveness of buildout requirements in meeting the two other goals, promoting innovative services and promoting services to rural areas, largely because they believed that other tools could better address these goals. Other tools stakeholders mentioned include greater use of spectrum licenses that allow a wider array of uses and providing licensees with subsidies to serve rural areas.

Nearly all the licensees and industry associations GAO interviewed said they support FCC having buildout requirements, while spectrum policy experts GAO interviewed were mixed in their support of the requirements. Experts who did not support buildout requirements said that the requirements as set are too weak or that other tools could better meet FCC goals, among other reasons.
The Secretary of the Treasury delegated overall authority for enforcement of, and compliance with, BSA and its implementing regulations to the Director of FinCEN. FinCEN develops policy and provides guidance to other agencies, analyzes BSA data for trends and patterns, and pursues enforcement actions when warranted. It also relies on other agencies in implementing the BSA framework. These activities include (1) ensuring compliance with BSA requirements to report suspicious activity, (2) collecting and storing reported information, and (3) taking enforcement actions or conducting investigations of criminal financial activity.

The Secretary of the Treasury delegated BSA examination authority for depository institutions to five banking regulators: the Federal Reserve, OCC, OTS, FDIC, and NCUA. The regulators conduct periodic on-site safety and soundness and compliance examinations to assess an institution's financial condition, policies and procedures, adherence to BSA regulations (for example, filing of SARs and other BSA-related reports), and compliance with other laws and regulations.

Financial institutions must report any suspicious transaction relevant to a possible violation of a law. In 1996, FinCEN required banks and other depository institutions to report, on a SAR form, certain suspicious transactions involving possible violations of law or regulation, including money laundering. In the same year, federal banking regulators required depository institutions to report suspected money laundering and other suspicious activities using the SAR form. IRS's Enterprise Computing Center–Detroit serves as the central point of collection and storage of these data. Figure 1 summarizes the process for filing and accessing SARs.

Federal regulators and FinCEN can bring formal enforcement actions, including civil money penalties, against institutions for violations of BSA. Formal enforcement actions generally are used to address cases involving systemic, repeated noncompliance; failure to respond to supervisory warnings; and other violations. However, most cases of BSA noncompliance are corrected within the examination framework through supervisory actions or letters that document the institution's commitment to take corrective action. In addition, DOJ may bring criminal actions against individuals and corporations, including depository and other financial institutions, for money laundering offenses and certain BSA violations. The actions may result in criminal fines, imprisonment, and forfeiture actions. Institutions and individuals willfully violating BSA and its implementing regulations, or structuring transactions to evade BSA reporting requirements, are subject to criminal fines, prison, or both.

Law enforcement agencies housed in DOJ and the Department of Homeland Security use SARs for investigations of money laundering, terrorist financing, and other financial crimes. Agencies in DOJ involved in efforts to combat money laundering and terrorist financing include FBI; DEA; the Department's Criminal and National Security Divisions; the Bureau of Alcohol, Tobacco, Firearms, and Explosives; the Executive Office for U.S. Attorneys; and U.S. Attorneys Offices. The Secret Service and ICE (in Homeland Security) also investigate cases involving money laundering and terrorist activities. IRS-CI uses BSA information to investigate possible cases of money laundering and terrorist financing activities.
Federal and multiagency law enforcement teams, which may include state and local law enforcement representatives, also use SAR data to provide additional information about subjects during ongoing investigations.

From 2000 through 2007, depository institutions filed an increasing number of SARs each year, and representatives from federal regulators, law enforcement, and depository institutions with whom we spoke attributed the increase to a number of factors. According to FinCEN data, SAR filings by depository institutions increased from approximately 163,000 in 2000 to more than 732,000 in 2008. In our report, our analysis of SAR and banking data from 2004 through 2007 indicates that the growth rates in SAR filings varied over time among depository institutions of different asset sizes. For example, the greatest increase in SARs filed during this period by the largest depository institutions occurred from 2004 to 2005, and SARs filed by small credit unions nearly doubled from 2005 to 2006.

Representatives of federal banking regulators, law enforcement agencies, and depository institutions most frequently attributed the increase to two factors: technological advances and the effect of public enforcement actions on institutions. According to the representatives, automated transaction monitoring systems can flag multiple indicators of suspicious activity and identify much more unusual activity than could be identified manually. At the largest depository institutions, these systems conduct complex analyses incorporating customer profiles. The representatives also said that the issuance of several public enforcement actions in 2004 and 2005, with civil money penalties and forfeitures of up to $40 million against a few depository institutions, prompted many institutions to file more SARs. FinCEN and the federal banking regulators took the actions because of systemic BSA program noncompliance, which included failures to meet SAR filing requirements. More recently, in March 2010, government actions taken against one depository institution for BSA violations, including SAR violations, included $160 million in penalties and fines.

Depository institution representatives with whom we spoke cited a third factor for the increases: concerns that they would receive criticism during examinations about decisions not to file SARs. To avoid such criticism, they said, their institutions filed SARs even when they thought them unnecessary, a practice sometimes called "defensive SAR filing." However, according to the federal regulators and some law enforcement officials with whom we spoke, there is no means of determining what portion, if any, of the increase in filings could be attributed to defensive filing. The representatives suggested additional factors as contributing to the increase, including greater awareness of BSA requirements after September 11, 2001; more regulator guidance for BSA examinations; and more BSA-related training at the institutions.

FinCEN and law enforcement agencies have taken multiple actions to educate filers about SARs' usefulness and improve the quality of SAR filings. Since 2000, FinCEN has issued written products with the purpose of educating filers and making filings more useful to law enforcement. These include (1) a regularly issued publication that gives tips on topics such as the preparation of SARs and (2) guidance for depository institutions and other SAR filers.
For example, in its SAR Activity Review: Trends, Tips and Issues, FinCEN regularly provides information on suspicious activity reporting, trends and data analyses, law enforcement cases assisted by BSA data, and other issues. In 2008 and 2009, the publication included information on suspicious activity reviews by a state banking regulator and securities regulators, respectively. In 2009, FinCEN issued guidance on filing SARs for mortgage loan modification and foreclosure rescue scams, and in 2010 it began an effort to promote electronic filing of BSA forms targeted at current paper filers. FinCEN representatives regularly participate in outreach events on BSA and anti-money laundering issues, including events on SARs. FinCEN also chairs the Bank Secrecy Act Advisory Group, a forum for federal agencies and financial industry representatives to discuss BSA administration, including SAR-related issues.

Federal law enforcement agency representatives said they improved SARs' usefulness by conducting outreach events and establishing relationships with depository institutions in their local areas to communicate with staff about crafting useful SAR narratives. Representatives from some multiagency law enforcement teams told us that they subsequently noticed improved SAR narratives from local depository institutions.

FinCEN, law enforcement agencies, and banking regulators use SARs in investigations and depository institution examinations and took steps in recent years to make better use of them. FinCEN uses SARs to provide a number of public and nonpublic analytical products to law enforcement agencies and depository institution regulators. In 2004 and 2005, several federal law enforcement agencies signed memorandums of understanding with FinCEN to receive bulk BSA data, including SARs. They combined these data with information from their law enforcement databases to facilitate more complex and comprehensive analyses. Different team structures have been established to better analyze SARs. For example, in 2000 and again in 2003, DOJ issued guidance that encouraged the formation of SAR review teams with federal, state, and local representation. Each month, these teams review SARs filed in their areas to determine which merit additional investigation. In 2006, DOJ and IRS-CI collaborated on a pilot to create task forces and augment SAR review teams with federal prosecutors in selected districts. These task forces specifically investigate possible BSA violations with potential for seizures or forfeitures. The regulators also use SARs for scoping their depository institution examinations and review SARs relating to known or suspected unlawful activities by current and former institution-affiliated parties, including officers, directors, and employees.

Although law enforcement agency representatives generally were satisfied with their ability to access BSA data, various agencies and multiagency teams we interviewed said that formatting and other issues related to the data system slowed their downloads and reviews. In 2009, FinCEN officials described how features of FinCEN's planned modernization effort for information technology could address these issues. FinCEN and IRS officials said that, when budgetary resources were available, these and other data management challenges would be addressed as part of FinCEN's modernization plan, developed in collaboration with IRS.
FinCEN officials recently told us that they have begun the first phase of the information technology modernization, which they anticipate will last through fiscal year 2014.

We reported in 2009 that FinCEN encountered a number of problems in its 2006 revision of the SAR form and that, in 2008, it developed a new process for form revisions. However, the available information on the process was limited and did not fully indicate how FinCEN would avoid or address some of the problems previously encountered. In 2006, FinCEN and the federal banking regulators issued proposed substantive and formatting revisions to the SAR form. The revisions were finalized, but because of technology limitations with IRS's data management system, the revised form has not been implemented. Law enforcement agency officials we interviewed had mixed views on the proposed revisions. They generally supported most of the proposed revisions, but some felt they had been insufficiently consulted and also expressed concerns that some revisions could affect their work negatively. For example, one change would replace the name and title of a person with personal knowledge about the suspicious activity reported on the form with a contact office, possibly increasing the time it would take law enforcement investigators to reach a person knowledgeable about the activity. However, banking regulators supported this change because of concerns that a SAR listing a named contact could jeopardize the safety and privacy of that person if it were inappropriately disclosed.

In 2008, FinCEN developed a new process that it planned to use in future revisions of BSA forms, including SARs. Early documentation for the process suggested somewhat greater stakeholder involvement at early stages, but subsequent documentation we reviewed did not indicate that FinCEN fully incorporated certain GAO-identified practices that can enhance and sustain collaboration among federal agencies. Such practices include defining a common outcome; agreeing on respective roles and responsibilities, including how the collaborative effort will be led; and creating the means to collect information on, monitor, evaluate, and report efforts. In our 2009 report, we determined that if FinCEN more fully incorporated some of these practices, it might achieve some potential benefits, such as greater consensus from all stakeholders on proposed SAR form revisions. We recommended that the Secretary of the Treasury direct the Director of FinCEN to further develop and document its strategy to fully incorporate certain of these practices into the revision process and distribute that documentation to all stakeholders. In written comments on the report, the FinCEN Director generally agreed with our recommendation and noted that FinCEN recognized the need to work with a diverse range of stakeholders to revise BSA forms.

Recent implementation of FinCEN's process suggests greater collaboration with stakeholders on defining a common outcome and establishing roles, responsibilities, and planned steps, which could result in more sustained collaboration. According to FinCEN officials, FinCEN's implementation of the process generally would involve three phases. The initial phase has involved collaboration with a wider range of stakeholders than in the past.
For example, in addition to the collaboration with IRS information technology staff that we previously identified, current documentation indicates that FinCEN has collaborated in more detail with federal law enforcement agency representatives, federal financial regulators, representatives from SAR review teams and other multiagency law enforcement teams, and prosecutors to determine the content of a revised SAR form. FinCEN also obtained and adopted input from other stakeholders, such as banking industry representatives, in the Bank Secrecy Act Advisory Group. FinCEN officials plan to obtain and adopt input from FinCEN's Data Management Council (DMC) after providing its members the opportunity to consult with colleagues at their respective agencies. They also plan to conduct a focus group of DMC members to obtain feedback on how the new forms revision process is working and use that feedback to modify the process. However, because FinCEN has not yet completed implementation of its form revision process, it is too soon to determine the effectiveness of the process.

Mr. Chairman and Members of the subcommittee, I appreciate this opportunity to discuss this important issue and would be happy to answer any questions you might have.

For further information regarding this testimony, contact Richard J. Hillman at (202) 512-8678. Contact points at our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making major contributions to this statement included Toni Gillich, Kay Kuhlman, Linda Rego, and Barbara Roesmann.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To assist law enforcement agencies in their efforts to combat money laundering, terrorist financing, and other financial crimes, the Bank Secrecy Act (BSA) requires financial institutions to file suspicious activity reports (SAR) to inform the federal government of transactions related to possible violations of law or regulation. Depository institutions have been concerned about the resources required to file SARs and the extent to which SARs are used.

The Subcommittee asked GAO to discuss our February 2009 report on suspicious activity reporting. Specifically, this testimony discusses (1) factors affecting the number of SARs filed, (2) actions agencies have taken to improve the usefulness of SARs, (3) federal agencies' use of SARs, and (4) the effectiveness of the process used to revise SAR forms. To respond to the request, GAO relied primarily on the February 2009 report titled Bank Secrecy Act: Suspicious Activity Report Use Is Increasing, but FinCEN Needs to Further Develop and Document Its Form Revision Process (GAO-09-226) and updated it with additional information provided by FinCEN. In that report, GAO recommended that FinCEN work to further develop a strategy that fully incorporates certain GAO-identified practices to enhance and sustain collaboration among federal agencies into the forms-change process.

From 2000 through 2008, total SAR filings by depository institutions increased from about 163,000 to 732,000 per year; representatives from federal regulators, law enforcement, and depository institutions with whom GAO spoke attributed the increase mainly to two factors. First, automated monitoring systems can flag multiple indicators of suspicious activities and identify significantly more unusual activity than manual monitoring. Second, several public enforcement actions against a few depository institutions prompted other institutions to look more closely at client and account activities. Other factors include institutions' greater awareness of and training on BSA requirements after September 11, 2001, and more regulator guidance for BSA examinations.

FinCEN and law enforcement agencies have taken actions to improve the quality of SAR filings and educate filers about their usefulness. Since 2000, FinCEN has issued written products with the purpose of making SAR filings more useful to law enforcement. FinCEN and federal law enforcement agency representatives regularly participate in outreach on BSA/anti-money laundering, including events focused on SARs. Law enforcement agency representatives said they also establish relationships with depository institutions to communicate with staff about crafting useful SAR narratives.

FinCEN, law enforcement agencies, and financial regulators use SARs in investigations and financial institution examinations and have taken steps in recent years to make better use of them. FinCEN uses SARs to provide public and nonpublic analytical products to law enforcement agencies and depository institution regulators. Some federal law enforcement agencies have facilitated complex analyses by using SAR data with their own data sets. Federal, state, and local law enforcement agencies collaborate to review and start investigations based on SARs filed in their areas. Regulators use SARs in their examination process to assess compliance and take action against abuse by depository institution insiders.
After revising a SAR form in 2006 that could not be used because of information technology limitations, FinCEN in 2008 developed a new process for revising BSA forms, including SARs, that may increase collaboration with some stakeholders, including some law enforcement groups concerned that certain of the 2006 revisions could be detrimental to investigations. Available documentation on the process did not detail the degree to which the new process would incorporate GAO-identified best practices for enhancing and sustaining federal agency collaboration. For example, it did not specify roles and responsibilities for stakeholders or depict monitoring, evaluating, and reporting mechanisms. According to FinCEN officials, the agency is taking some additional steps toward greater collaboration with law enforcement agency representatives, prosecutors, multiagency law enforcement teams, and others to determine the contents of the form, but it is too soon to determine the effectiveness of the process.
Collecting information is one way that federal agencies carry out their missions. For example, IRS needs to collect information from taxpayers and their employers to know the correct amount of taxes owed. The U.S. Census Bureau collects information used to apportion congressional representation and for many other purposes. When new circumstances or needs arise, agencies may need to collect new information. We recognize, therefore, that a large portion of federal paperwork is necessary and often serves a useful purpose. Nonetheless, besides ensuring that information collections have public benefit and utility, federal agencies are required by the PRA to minimize the paperwork burden that the collection of information imposes. Among the provisions of the act aimed at this purpose are requirements for the review of information collections by OMB and by agency CIOs.

Under the PRA, federal agencies may not conduct or sponsor the collection of information unless it is approved by OMB; information collections for which OMB approval has expired or is missing are considered violations of the PRA. Before approving collections, OMB is required to determine that the agency's collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility. Consistent with the act's requirements, OMB has established a process to review all proposals by executive branch agencies (including independent regulatory agencies) to collect information from 10 or more persons, whether the collections are voluntary or mandatory.

In addition, the act as amended in 1995 requires every agency to establish a process, under the official responsible for the act's implementation (now the agency's CIO), to review program offices' proposed collections. This official is to be sufficiently independent of program responsibility to evaluate fairly whether information collections should be approved. Under the law, the CIO is to review each collection of information before submission to OMB, including reviewing the program office's evaluation of the need for the collection and its plan for the efficient and effective management and use of the information to be collected, including necessary resources. As part of that review, the agency CIO must ensure that each information collection instrument (form, survey, or questionnaire) complies with the act, certify that the collection meets 10 standards (see table 1), and provide support for these certifications.

In addition, the original PRA of 1980 (section 3514(a)) requires OMB to keep Congress "fully and currently informed" of the major activities under the act and to submit a report to Congress at least annually on those activities. Under the 1995 amendments, this report must include, among other things, a list of any increases in burden. To satisfy this requirement, OMB prepares the annual PRA report, which covers agency actions during the previous fiscal year, including changes in agencies' burden-hour estimates as well as violations of the PRA.
The 1995 PRA amendments also required OMB to set specific goals for reducing burden from the level it had reached in 1995: at least a 10 percent reduction in the governmentwide burden-hour estimate for each of fiscal years 1996 and 1997, a 5 percent governmentwide burden reduction goal in each of the next 4 fiscal years, and annual agency goals that reduce burden to the "maximum practicable opportunity." At the end of fiscal year 1995, federal agencies estimated that their information collections imposed about 7 billion burden hours on the public. Thus, for these reduction goals to be met, the burden-hour estimate would have had to decrease by about 35 percent, to about 4.6 billion hours, by September 30, 2001 (compounding the annual goals, 7 billion × 0.90 × 0.90 × 0.95⁴ is roughly 4.6 billion hours, a cumulative reduction of about 34 percent). In fact, on that date, the federal paperwork estimate had increased by about 9 percent, to 7.6 billion burden hours.

Over the years, we have reported on the implementation of PRA many times. In a succession of reports and testimonies, we noted that federal paperwork burden estimates generally continued to increase, rather than decrease as envisioned by the burden reduction goals in PRA. Further, we reported that some burden reduction claims were overstated. For example, although some reported paperwork reductions reflected substantive program changes, others were revisions to agencies' previous burden estimates and, therefore, would have no effect on the paperwork burden felt by the public. In our previous work, we also repeatedly pointed out ways that OMB and agencies could do more to ensure compliance with PRA. In particular, we have often recommended that OMB and agencies take actions to improve the paperwork clearance process.

After 2 years of slight declines, OMB reports that burden hours increased in fiscal year 2005 and are expected to increase again in fiscal year 2006. According to OMB's most recent PRA report to Congress, the estimated total burden hours imposed by government information collections in fiscal year 2005 was 8.4 billion hours; this is an increase of 441 million burden hours (5.5 percent) from the previous year's total of 8.0 billion hours. It is also almost a billion and a half hours larger than the 1995 estimate and 3.8 billion hours larger than the PRA target for the end of fiscal year 2001 (4.6 billion burden hours). OMB's report also states that burden will increase in fiscal year 2006 by an estimated 303 million hours, to about 8.7 billion hours; however, according to OMB, most of this projected increase (250 million hours, or 83 percent) is attributable to a new method of estimating burden that is being implemented by IRS, rather than to any increase in the actual burden. Finally, according to OMB, fewer violations of the act were reported than in previous years.

Changes in paperwork burden estimates result from several causes, which OMB assigns to two main categories. OMB classifies all changes (either increases or decreases) in agencies' burden-hour estimates as either program changes or adjustments.
● Program changes are the result of deliberate federal government action (e.g., the addition or deletion of questions on a form); these can occur as a result of
  ● new statutory requirements,
  ● agency-initiated actions, or
  ● the expiration or reinstatement of OMB-approved collections.
● Adjustments do not result from federal activities but from external factors.
For example:
● an agency may reestimate the burden associated with a collection of information, or
● the population responding to a requirement may change; for instance, if the economy declines and more people complete applications for food stamps, the resulting increase in the Department of Agriculture's paperwork estimate is considered an adjustment because it is not the result of deliberate federal action.

As shown above, within the category of program changes, OMB distinguishes between changes due to new statutes and changes due to agency action, which it also refers to as agency discretionary actions. However, this term should not be taken to imply that agencies have no discretion in how they implement new statutes. A major goal of the PRA is to ensure that agencies consider how to make the burden of information collections, whether old or newly established, as small as possible. In the second part of my statement, I will address one of the ways set forth in the PRA to help achieve this goal.

Table 2 shows the changes in reported burden totals from fiscal year 2004 to fiscal year 2005. As the table shows, the change due to new statutes was by far the largest factor in the increase for fiscal year 2005. OMB reports that the statute having the largest impact on burden was the statute establishing voluntary prescription drug coverage under Medicare; implementing the program mandated by this statute required the collection of significant amounts of information, leading to an increase in burden of 224 million hours. An additional significant increase, about 116 million hours, resulted from the implementation by the Federal Communications Commission (FCC) of the CAN-SPAM Act, which requires disclosure of certain information contained in unsolicited commercial e-mails.

In contrast to changes due to new statutes, changes due to agency action did not contribute significantly to the overall change in burden this year, adding 180,000 hours out of the total rise of 441 million. Although the overall result was a slight increase, agencies did take many actions that decreased burden; without these actions, the governmentwide increase would have been greater. The annual report does not list all these actions, but it does highlight actions that led to significant paperwork reductions and increases. (These include increases and decreases in burden from statutory requirements and miscellaneous agency actions, as well as burden reductions from changing regulations, cutting redundancy, changing forms, and using information technology.)

From both an individual agency perspective and a governmentwide perspective, the relatively small increase due to agency action is the result of large increases and decreases that mostly offset each other:
● From an individual agency perspective, the net change in an agency's burden estimate is generally the result of disparate actions, some of which reduce burden and some of which increase it. An example is the IRS, which as an agency was responsible for a net decrease of about 3 million hours. Among the burden reductions that the annual report highlights are two IRS actions to change forms, both of which reduced burden by simplification and streamlining, for a reduction of about 19 million hours. The Information Collection Budget (ICB) also reports that in January 2006 IRS completed an initiative to simplify the process of applying for an extension to file an income tax return, which is associated with a burden reduction of 8 million hours.
Elsewhere, on the other hand, the report highlights five IRS actions that together resulted in an increase of about 24 million hours. IRS's reasons for these actions included increasing accuracy and improving the agency's ability to monitor compliance with the law.
● Similarly, from a governmentwide perspective, the overall change is the result of some agencies whose actions produced a net decrease and others whose actions produced a net increase. In fiscal year 2005, agencies with net decreases produced a reduction of about 14.02 million hours. This reduction was more than offset, however, by agencies with net increases, which totaled about 14.20 million hours, leaving the net increase of about 180,000 hours noted above.

Compared to program changes as a whole, adjustments to the estimates were a relatively small factor (as table 2 also shows), accounting for a net increase in burden of about 19 million hours. In previous years, adjustments have had a much greater impact and have tended to decrease overall burden estimates, thus masking the effect of increases from program changes. In fiscal years 2003 and 2004, the impact of adjustments was large enough to lead to overall burden estimates that were lower than those of the year before. In fiscal year 2004, OMB reported a decrease of about 156 million hours in adjustments versus an increase of about 29 million hours in program changes; the result was a lower overall burden estimate than for the previous year. Similarly, overall burden in fiscal year 2003 was slightly less than in fiscal year 2002, also as a result of a decrease in adjustments (about 182 million hours) that more than offset an increase in program changes (about 72 million hours). Besides these large decreases due to adjustments, another reason for the slight decreases in total burden in fiscal years 2003 and 2004 was that increases due to program changes were relatively small, as shown in table 3. This year, both program changes and adjustments went up, so adjustments did not have the effect of masking increases in program changes. As the table also shows, fiscal year 2005 saw the largest net increase from program changes since 1998.

In fiscal year 2005, IRS accounted for about 76 percent of the governmentwide paperwork burden: about 6.4 billion hours. As shown in figure 1, no other agency's estimate approaches this level. Six agencies had burden-hour estimates of 100 million hours or more (the Departments of Health and Human Services, Labor, and Transportation; EPA; FCC; and the Securities and Exchange Commission). Thus, as we have previously reported, changes in the paperwork burden experienced by the federal government have been largely attributable to changes associated with IRS.

OMB reports that starting in fiscal year 2006, IRS began using a new methodology based on a statistical model, the Individual Taxpayer Burden Model, to estimate the reporting burden imposed on individual taxpayers. Among other things, this new model, which was developed to improve the accuracy and transparency of taxpayer burden estimates, reflects the major changes over the past two decades in the way that taxpayers prepare and file their returns, including the use of electronic preparation methods.
According to OMB, rather than estimating burden on a form-by-form basis, the new methodology takes into account broader and more comprehensive taxpayer characteristics and activities, considering how the taxpayer prepares the return (e.g., with or without software or a paid preparer) as well as the taxpayer's activities, such as gathering tax materials, completing forms, recordkeeping, and tax planning. In contrast, the previous methodology focused primarily on the length and complexity of each tax form. OMB states that this new model will make it possible to estimate the burden implications of new legislative and administrative tax proposals.

OMB projects that these changes will create a one-time increase of about 250 million hours in the estimate of IRS burden levels in fiscal year 2006. This increase represents most (83 percent) of the total projected governmentwide increase for fiscal year 2006 of 303 million hours. However, according to OMB, this increase does not reflect any change in the actual burden experienced by taxpayers, but rather a change in the way the burden is measured.

In the past, we reported that IRS's previous estimation model ignored important components of burden and had limited capabilities for analyzing the determinants of burden. The new model is the result of work that IRS has performed over the past several years to improve its model and address these and other limitations. At this time, we have not analyzed IRS's new model to determine the extent to which it improves the accuracy of burden estimates, and we have not assessed the accuracy of the new model's estimates. However, IRS's efforts to increase the accuracy of its model appear to be an important step toward addressing the previous model's shortcomings. More generally, burden-hour figures are estimates rather than precise measures (see GAO, EPA Paperwork: Burden Estimate Increasing Despite Reduction Claims, GAO/GGD-00-59 (Washington, D.C.: Mar. 16, 2000), for a discussion of how one agency estimates paperwork burden); as long as this limitation is understood, these estimates can be useful as the best indicators of paperwork burden available.

OMB reports reductions in PRA violations for fiscal year 2005 compared to previous years. The PRA prohibits an agency from conducting or sponsoring a collection of information unless (1) the agency has submitted the proposed collection to OMB, (2) OMB has approved the proposed collection, and (3) the agency displays an OMB control number on the collection. According to OMB's annual report, agencies have made great progress in recent years in reducing the number of violations of these conditions and in resolving them more promptly. OMB attributed this reduction to several initiatives it had taken, including meeting with agency officials to discuss ways to reduce violations and adding reporting requirements.

According to OMB, during fiscal year 2005, agencies reported a total of 97 violations: 60 information collections that expired during the year and another 37 that had expired before October 1, 2004, and were not reinstated until fiscal year 2005. Of the 27 agencies included in the annual report, the three with the greatest number of violations were the Departments of the Treasury and Homeland Security and the Small Business Administration. In addition, OMB reported no unresolved violations at the end of fiscal year 2005 and only 6 violations during the first 8 months of fiscal year 2006. The 97 violations reported in fiscal year 2005 are far fewer than the 164 reported in fiscal year 2004 and the 223 reported in fiscal year 2003.
Although the reduction in violations is a positive trend, we should note that the reported violations may not be comprehensive; they include only those that agencies identified and reported to OMB. As a result, the statistics omit violations of which agencies were unaware. In our May 2005 review, we examined forms posted on the Web sites of four agencies (VA, HUD, Labor, and IRS) and found examples of violations among these forms of which the agencies were generally unaware. Based on our examination, we projected that the four agencies overall had an estimated 69 violations: 61 collections in use without OMB approval and 8 expired collections. For example, we estimated 16 violations at VA; at that time, OMB's report reflected VA's belief that it had no violations. Based on these results, we recommended that the four agencies periodically review their Web sites to ensure that all forms comply with PRA requirements; we also recommended that OMB alter its guidance so that all federal agencies would be required to conduct such periodic reviews. Since then, VA has reported to us that it removed forms from its Web site that were in violation of PRA. However, OMB has not yet issued governmentwide guidance directing these types of reviews, so it is possible that some PRA violations remain undetected.

Among the PRA provisions intended to help achieve the goals of minimizing burden while maximizing utility are the requirements for CIO review and certification of information collections. The 1995 amendments required agencies to establish centralized processes within the CIO's office for reviewing proposed information collections. Among other things, the CIO's office is to certify, for each collection, that the 10 standards in the act have been met, and the CIO is to provide a record supporting these certifications.

The four agencies that we reviewed for our May 2005 report all had written directives that implemented the review requirements in the act, including the requirement for CIOs to certify that the 10 standards in the act were met. However, in the 12 case studies that we reviewed, CIO certification occurred despite a lack of rigorous support that all standards were met. Specifically, the support for certification was missing or partial for 65 percent (66 of 101) of the certifications. Table 4 shows the results of our analysis of the case studies.

One of the act's standards, for example, requires that collections not be unnecessarily duplicative of information otherwise accessible to the agency. In our case studies, the support for this certification was often limited to an assertion such as "We have attempted to eliminate duplication within the agency wherever possible." Such an assertion provides no information on what efforts were made to identify duplication or perspective on why similar information, if any, could not be used. Further, the files contained no evidence that the CIO reviewers challenged the adequacy of this support or provided support of their own to justify their certification.

A second standard mandated by the act is that each information collection should reduce burden on the public, including small entities, to the extent practicable and appropriate. OMB guidance emphasizes that agencies are to demonstrate that they have taken every reasonable step to ensure that a given collection of information is the least burdensome necessary for the proper performance of agency functions.
In addition, OMB instructions and guidance direct agencies to provide specific information and justifications: (1) estimates of the hour and cost burden of the collections and (2) justifications for any collection that requires respondents to report more often than quarterly, respond in fewer than 30 days, or provide more than an original and two copies of documentation. With regard to small entities, OMB guidance states that the standard emphasizes such entities because they often have limited resources with which to comply with information collections. The act and OMB guidance describe various techniques for reducing burden on these small entities.

Our review of the case examples found that, for the certification on reducing burden on the public, the files generally contained the specific information and justifications called for in the guidance. However, none of the case examples contained support that addressed how the agency ensured that the collection was the least burdensome necessary. According to agency CIO officials, the primary cause for this absence of support is that OMB instructions and guidance do not explicitly direct agencies to provide this information as part of the approval package.

In addition, four of our case studies did not provide complete information that would support certification that the collection specifically addressed reducing burden for small entities. Specifically, 7 of the 12 case studies involved collections that were reported to affect businesses or other for-profit entities, but the files for 4 of these 7 did not explain either
● why small businesses were not affected, or
● why, for the small businesses that were affected, burden could or could not be reduced.
Instead, the files included statements such as "not applicable," which do not inform the reviewer whether any effort was made to reduce burden on small entities. When we asked the agencies about these four cases, they indicated that the collections did, in fact, affect small businesses.

OMB's instructions to agencies on minimizing burden on small entities require agencies to describe any methods used to reduce burden only if the collection of information has a "significant economic impact on a substantial number of small entities." This does not appropriately reflect the act's requirements concerning small business: the act requires the CIO to certify that the information collection reduces burden on small entities in general, to the extent practicable and appropriate, and it provides no thresholds for the level of economic impact or the number of small entities affected. OMB officials acknowledged that their instruction is an "artifact" from a previous form and more properly focuses on rulemaking than on the information collection process.

The lack of support for the 10 certifications required by the act appeared to be influenced by a variety of factors. In some cases, as described above, OMB guidance and instructions were not comprehensive or entirely accurate. In the case of the duplication standard specifically, IRS officials said that the agency did not need to further justify that its collections are not duplicative because (1) tax data are not collected by other agencies, so there is no need for the agency to contact them about proposed collections, and (2) IRS has an effective internal process for coordinating proposed forms among the agency's various organizations that may have similar information.
Nonetheless, the law and instructions require support for these assertions, and that support was not provided. Further, agency reviewers told us that management assigns a relatively low priority and few resources to reviewing information collections, and program offices have little knowledge of and appreciation for the requirements of the PRA. As a result of these conditions and a lack of detailed program knowledge, reviewers often have insufficient leverage to encourage program offices to improve their justifications. When support for the PRA certifications is missing or inadequate, OMB, the agency, and the public have reduced assurance that the standards in the act, such as those on avoiding duplication and minimizing burden, have been consistently met.

IRS and EPA have supplemented the standard PRA review process with additional processes aimed at reducing burden while maximizing the public benefit and utility of the information collected. Both agencies' missions require them to deal extensively with information collections, and their management has made reduction of burden a priority.

In January 2002, the IRS Commissioner established an Office of Taxpayer Burden Reduction, which includes both permanently assigned staff and staff temporarily detailed from the program offices responsible for particular information collections. This office chooses a few forms each year that are judged to have the greatest potential for burden reduction (these forms have already been reviewed and approved through the CIO process). The office evaluates and prioritizes burden reduction initiatives by
● determining the number of taxpayers affected;
● quantifying the total time and out-of-pocket savings for taxpayers;
● evaluating any adverse impact on IRS's voluntary compliance;
● assessing the feasibility of the initiative, given IRS resource limitations; and
● tying the initiative into IRS objectives.

Once the forms are chosen, the office performs highly detailed, in-depth analyses, including extensive outreach to the affected public, to users of the information within and outside the agency, and to other stakeholders. This analysis includes an examination of the need for each data element requested. In addition, the office thoroughly reviews form design.

The office's director heads a Taxpayer Burden Reduction Council, which serves as a forum for achieving taxpayer burden reduction throughout IRS. IRS reports that as many as 100 staff from across IRS, as well as participants from other federal agencies, state agencies, tax practitioner groups, taxpayer advocacy panels, and groups representing the small business community, can be involved in burden reduction initiatives. The council directs its efforts in five major areas:
● simplifying forms and publications;
● streamlining internal policies, processes, and procedures;
● promoting consideration of burden reduction in rulings, regulations, and laws;
● assisting in the development of burden reduction measurement; and
● partnering with internal and external stakeholders to identify areas of potential burden reduction.

According to IRS, this targeted, resource-intensive process has achieved significant reductions in burden. For example, it reported that about 95 million hours of taxpayer burden were eliminated through increases in the income reporting thresholds on various IRS schedules.
Another example, mentioned earlier, was given in OMB's latest annual PRA report: in January 2006, IRS completed an initiative to simplify the process of applying for an extension to file an income tax return, which is associated with a burden reduction of 8 million hours. Another example from the annual PRA report is a reduction of about 19 million hours from a redesign of IRS form 1041 to streamline the requirements and make the form easier to read and file.

Similarly, EPA officials stated that they have established processes for reviewing information collections that supplement the standard PRA review process. These processes are highly detailed and evaluative, with a focus on reducing burden, avoiding duplication, and ensuring compliance with PRA. According to EPA officials, the impetus for establishing these processes was the high visibility of the agency's information collections and the recognition, among other things, that the success of EPA's enforcement mission depended on information collections being properly justified and approved: in the words of one official, information collections are the "life blood" of the agency.

According to these officials, the CIO staff are not generally closely involved in burden reduction initiatives, because they do not have sufficient technical program expertise and cannot devote the extensive time required. Instead, these officials said that the CIO staff focus on fostering high awareness within the agency of the requirements associated with information collections; educating and training program office staff on the need to minimize burden and the impact on respondents; providing an agencywide perspective on information collections to help avoid duplication; managing the clearance process for agency information collections; and acting as liaison between program offices and OMB during the clearance process. To help program offices consider PRA requirements such as burden reduction and avoiding duplication as they develop new information collections or work on reauthorizing existing ones, the CIO staff also developed a handbook to help program staff understand what they need to do to comply with PRA and gain OMB approval.

In addition, program offices at EPA have taken on burden reduction initiatives that are highly detailed and lengthy (sometimes lasting years) and that involve extensive consultation with stakeholders (including entities that supply the information, citizens groups, information users and technical experts in the agency and elsewhere, and state and local governments). For example, EPA reported that it amended its regulations to reduce the paperwork burden imposed under the Resource Conservation and Recovery Act. One burden reduction method EPA used was to raise the thresholds at which small businesses must report information required under the act. EPA estimated that the initiative will reduce burden by 350,000 hours and save $22 million annually. Another example is an ongoing EPA initiative reported in this year's PRA report, the Central Data Exchange, an e-government initiative designed to enable fast, efficient, and more accurate environmental data submissions and exchanges from state and local governments, industry, and tribes through electronic reporting. The estimated reduction for this initiative, which is expected to be complete in 2008, is 166,000 hours.
Overall, EPA and IRS reported that they have produced significant reductions in paperwork burden by making a commitment to this goal and dedicating resources to it. In contrast, for the 12 information collections we examined, the CIO review process resulted in no reduction in burden. Further, the Department of Labor reported that its PRA reviews of 175 proposed collections over nearly 2 years did not reduce burden. Similarly, both IRS and EPA addressed information collections that had undergone CIO review and received OMB approval and nonetheless found significant opportunities to reduce paperwork burden.

In our 2005 report, we concluded that the CIO review process was not working as Congress intended: it did not result in a rigorous examination of the burden imposed by information collections, and it did not lead to reductions in burden. In light of these findings, we suggested options that Congress might want to consider when it next reauthorizes the act, including mandating pilot projects to test and review alternative approaches to achieving PRA goals. Such pilot projects could build on the lessons learned at IRS and EPA, which have used a variety of approaches to reducing burden: sharing information (for example, by facilitating cross-agency information exchanges); standardizing data for multiple uses; integrating data to avoid duplication; and re-engineering work flows. Pilot projects would be most appropriate for agencies for which information collections are a significant aspect of the mission.

In addition, we recommended (among other things) that agencies strengthen the support provided for CIO certifications and that OMB update its guidance to clarify and emphasize this requirement (including that agencies provide support showing that they have taken steps to reduce burden, determined whether small entities are affected and reduced reporting burden on them, and established a plan to manage and use the information to be collected, including the identification of necessary resources). OMB and the agencies agreed with most of the recommendations, although they disagreed with aspects of our characterization of agencies' compliance with the act's requirements.

Since our report was issued, the four agencies have reported taking steps to strengthen their support for CIO certifications:
● According to the HUD CIO, the department established a senior-level PRA compliance officer in each major program office, and it revised its certification process to require that, before collections are submitted for review, they be approved at a higher management level within program offices.
● The Treasury CIO established an Information Management Sub-Council under the Treasury CIO Council and added resources to the review process.
● According to VA's 2007 budget submission, the department obtained additional resources to help review and analyze its information collection requests.
● According to the Office of the CIO at the Department of Labor, the department intends to provide guidance to components on the need to provide strong support for clearance requests and has met with component staff to discuss these issues.

OMB has updated parts of its guidance and plans to incorporate other guidance into an automated system to be used by agencies submitting information collections for clearance. In January 2006, OMB revised its guidance to agencies on surveys and statistical information collections.
This guidance, among other things, is aimed at strengthening the support that agencies must provide for certifying collections, as we recommended. For example, the guidance requires agencies submitting requests for approval to include context and detail that will allow OMB to evaluate the practical utility of the information to be collected. However, this guidance does not apply to all information collections. Rather, it applies only to surveys that are used for general-purpose statistics or as part of program evaluations or research studies. In addition, it does not provide clear guidance on one of the topics mentioned in our recommendation: determining whether small entities are affected by a collection and reducing the reporting burden on these entities.

OMB also reported that its guidance to agencies will be updated through a planned automated system that is to begin operating this month. According to the former acting head of OMB's Office of Information and Regulatory Affairs, the new system will permit agencies to submit clearance requests electronically, and its instructions will provide clear guidance on the requirements for these submissions, including the support required. This official stated that OMB has worked with agency representatives with direct knowledge of the PRA clearance process to ensure that the system and its instructions clearly reflect the requirements of the process. If this system is implemented as described and OMB withholds clearance from submissions that lack adequate support, it could lead agencies to strengthen the support provided for their certifications.

In conclusion, Madam Chairman, the PRA puts in place mechanisms to focus agency attention on the need to minimize the burden that information collections impose, while maximizing their public benefit and utility, but these mechanisms have not succeeded in achieving the ambitious reduction goals set forth in the 1995 amendments. Achieving real reductions in paperwork burden is an elusive goal, as attested by years of OMB's annual PRA reports, including the latest. That report shows the largest rise in estimated burden in several years, mostly due to new statutory requirements and how they have been implemented. As we have seen, the tendency is for burden to rise unless agencies take active steps to reduce it. Agencies have taken such actions, by cutting redundancy, changing forms, and using information technology, among other things, but these have not been enough to make up for the increases.

Besides demonstrating once again how challenging it is for the government to achieve true burden reduction, this year's results highlight the need to look for new ways to achieve this and the other goals of the PRA. Among the mechanisms already in place is the CIO review and certification process. However, as implemented at the time of our review, this process had limited effect on the quality of support provided for information collections, and it appeared to have no appreciable impact on burden. The targeted approaches to burden reduction used by IRS and EPA appear promising, but the experience of these agencies suggests that success requires top-level executive commitment, extensive involvement of program office staff with appropriate expertise, and aggressive outreach to stakeholders.
However, such an approach would probably also be more resource-intensive than the CIO certification process, and thus it may not be warranted at agencies where paperwork issues do not rise to the level of those at IRS and similar agencies. Consequently, it is critical that efforts to expand the use of the IRS and EPA models take these factors into consideration.

Madam Chairman, this completes my prepared statement. I would be pleased to answer any questions.

For further information regarding this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6420 or [email protected]. Other individuals who made key contributions to this testimony were Barbara Collier, Nancy Glover, and Alan Stapleton.

Paperwork Reduction Act: New Approaches Can Strengthen Information Collection and Reduce Burden. GAO-06-477T. Washington, D.C.: March 8, 2006.
Paperwork Reduction Act: Subcommittee Questions Concerning the Act's Information Collection Provisions. GAO-05-909R. Washington, D.C.: July 19, 2005.
Paperwork Reduction Act: Burden Reduction May Require a New Approach. GAO-05-778T. Washington, D.C.: June 14, 2005.
Paperwork Reduction Act: New Approach May Be Needed to Reduce Government Burden on Public. GAO-05-424. Washington, D.C.: May 20, 2005.
Paperwork Reduction Act: Agencies' Paperwork Burden Estimates Due to Federal Actions Continue to Increase. GAO-04-676T. Washington, D.C.: April 20, 2004.
Paperwork Reduction Act: Record Increase in Agencies' Burden Estimates. GAO-03-619T. Washington, D.C.: April 11, 2003.
Paperwork Reduction Act: Changes Needed to Annual Report. GAO-02-651R. Washington, D.C.: April 29, 2002.
Paperwork Reduction Act: Burden Increases and Violations Persist. GAO-02-598T. Washington, D.C.: April 11, 2002.
Information Resources Management: Comprehensive Strategic Plan Needed to Address Mounting Challenges. GAO-02-292. Washington, D.C.: February 22, 2002.
Paperwork Reduction Act: Burden Estimates Continue to Increase. GAO-01-648T. Washington, D.C.: April 24, 2001.
Electronic Government: Government Paperwork Elimination Act Presents Challenges for Agencies. GAO/AIMD-00-282. Washington, D.C.: September 15, 2000.
Tax Administration: IRS Is Working to Improve Its Estimates of Compliance Burden. GAO/GGD-00-11. Washington, D.C.: May 22, 2000.
Paperwork Reduction Act: Burden Increases at IRS and Other Agencies. GAO/T-GGD-00-114. Washington, D.C.: April 12, 2000.
EPA Paperwork: Burden Estimate Increasing Despite Reduction Claims. GAO/GGD-00-59. Washington, D.C.: March 16, 2000.
Federal Paperwork: General Purpose Statistics and Research Surveys of Businesses. GAO/GGD-99-169. Washington, D.C.: September 20, 1999.
Paperwork Reduction Act: Burden Increases and Unauthorized Information Collections. GAO/T-GGD-99-78. Washington, D.C.: April 15, 1999.
Paperwork Reduction Act: Implementation at IRS. GAO/GGD-99-4. Washington, D.C.: November 16, 1998.
Regulatory Management: Implementation of Selected OMB Responsibilities Under the Paperwork Reduction Act. GAO/GGD-98-120. Washington, D.C.: July 9, 1998.
Paperwork Reduction: Information on OMB's and Agencies' Actions. GAO/GGD-97-143R. Washington, D.C.: June 25, 1997.
Paperwork Reduction: Governmentwide Goals Unlikely to Be Met. GAO/T-GGD-97-114. Washington, D.C.: June 4, 1997.
Paperwork Reduction: Burden Reduction Goal Unlikely to Be Met. GAO/T-GGD/RCED-96-186. Washington, D.C.: June 5, 1996.
Environmental Protection: Assessing EPA's Progress in Paperwork Reduction. GAO/T-RCED-96-107. Washington, D.C.: March 21, 1996.
Paperwork Reduction: Burden Hour Increases Reflect New Estimates, Not Actual Changes. GAO/PEMD-94-3. Washington, D.C.: December 6, 1993. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Americans spend billions of hours each year providing information to federal agencies by filling out information collections (forms, surveys, or questionnaires). A major aim of the Paperwork Reduction Act (PRA) is to minimize the burden that responding to these collections imposes on the public, while maximizing their public benefit. Under the act, the Office of Management and Budget (OMB) is to approve all such collections and to report annually on the agencies' estimates of the associated burden. In addition, agency chief information officers (CIO) are to review information collections before submitting them to OMB for approval and certify that the collections meet certain standards set forth in the act.

GAO was asked to testify on OMB's burden report for 2005 and on a previous study of PRA implementation (GAO-05-424), which focused on the CIO review and certification processes and described alternative processes that two agencies have used to minimize paperwork burden. To prepare this testimony, GAO reviewed the current burden report and its past work in this area. For its 2005 study, GAO reviewed a governmentwide sample of collections, reviewed processes and collections at four agencies that account for a large proportion of burden, and performed case studies of 12 approved collections at the four agencies.

After 2 years of slight declines, OMB reports that paperwork burden grew in fiscal year 2005 and is expected to increase further in fiscal year 2006. Estimates in OMB's annual report to Congress show that the total paperwork burden imposed by federal information collections increased last year to about 8.4 billion hours, an increase of 5.5 percent from the previous year's total of about 8.0 billion hours. Nearly all of this increase resulted from the implementation of new laws (for example, about 224 million hours were due to the implementation of voluntary prescription drug coverage under Medicare). The rest of the increase came mostly from adjustments to the estimates due to such factors as changes in estimation methods and in the numbers of respondents. Looking ahead to fiscal year 2006, OMB expects an increase of about 250 million hours because of a new model for estimating burden being implemented by the Internal Revenue Service (IRS). According to OMB, this expected rise does not reflect any real change in the burden on taxpayers, but only in how IRS estimates it.

The PRA requires that CIOs review information collections and certify that they meet standards to minimize burden and maximize utility; however, these reviews were not always rigorous, reducing assurance that these standards were met. In 12 case studies at four agencies, GAO determined that CIOs certified collections proposed by program offices despite missing or inadequate support. Providing support for certifications is a CIO responsibility under the PRA, but agency files contained little evidence that CIO reviewers had made efforts to improve the support offered by program offices. Numerous factors contributed to these problems, including a lack of management attention and weaknesses in OMB guidance. Based on its review, GAO recommended (among other things) that agencies strengthen the support provided for certifications and that OMB update its guidance to clarify and emphasize this requirement.
Since GAO's study was issued, the four agencies have reported taking steps to strengthen their support for CIO certifications, such as providing additional resources and guidance for the process, and OMB has updated parts of its guidance.

In contrast to the CIO review process, which did not lead to reduced paperwork burden in GAO's 12 case studies, IRS and the Environmental Protection Agency (EPA) have set up alternative processes specifically focused on reducing burden. These agencies, whose missions involve numerous information collections, have devoted significant resources to targeted burden reduction efforts that involve extensive outreach to stakeholders. According to the two agencies, these efforts have led to significant reductions in paperwork burden on the public. In light of these promising results, the weaknesses in the current CIO review process, and the persistent increases in burden, a new approach to burden reduction appears warranted. GAO suggested that Congress consider mandating pilot projects to target some collections for rigorous analysis along the lines of the IRS and EPA approaches.
The General Services Administration (GSA) administers the federal government's contracts in support of agencies' purchase card programs. GSA contracts with commercial banks to issue purchase cards to federal employees to make official government purchases. The Bank of America issues purchase cards to USDA agencies, including the Forest Service. The purchase card, unless otherwise directed by regulation, is intended to be the primary purchasing method for purchases from vendors that accept purchase cards for payment. This payment method is intended to streamline procurement and payment procedures by reducing the number of procurement requests, purchase orders, and vendor payments issued. USDA's purchase card program, including the Forest Service's, also includes the use of convenience checks to pay vendors that do not accept purchase cards as payment. In fiscal year 2001, the Forest Service used purchase cards and convenience checks to make 1.1 million purchases totaling $320 million.

The USDA procurement process is subject to the Federal Acquisition Regulation (FAR), the primary set of regulations governing acquisition of supplies and services by federal executive agencies with appropriated funds. The FAR also incorporates the U.S. Department of the Treasury's Treasury Financial Manual (TFM) requirements for the governmentwide purchase card program. To implement and supplement these regulations, USDA issues the Agriculture Acquisition Regulations (AGAR), which prescribe USDA procurement policies and procedures. To implement and supplement the AGAR, the Forest Service issues directives, which contain Forest Service procurement policies and procedures. The Forest Service Handbook, FSH 6309.32 Part 4G13 Simplified Acquisition Procedures, provides specific guidance on procurement for the Forest Service, including the use of the government purchase card. The handbook contains policies and procedures that define the responsibilities of regional and local program coordinators for managing the purchase card program, including establishing cardholder data in the Purchase Card Management System (PCMS) and monitoring activities for the purchase card program.

GSA and Bank of America also provide purchase card guidance, and GSA provides training to cardholders and program coordinators. For example, GSA's Blueprint for Success: Purchase Card Oversight was prepared by a working group of agency program coordinators (APC) and provides general program guidance to APCs in performing their responsibilities. Beginning in fiscal year 2003, GSA made available to APCs a Web-based online training course covering such topics as APC responsibilities, reporting tools, and preventive measures to use in monitoring the purchase card program.

According to USDA policy, APCs and local area program coordinators (LAPC) are appointed by the head of the agency contracting office. APCs are primarily responsible for managing the purchase card program in their agency. In addition, they establish agency-unique purchase card policies and procedures, provide training and guidance to LAPCs, and conduct agencywide oversight of the purchase card program. LAPCs are responsible for the day-to-day operations of the purchase card program within their respective locations. In addition, LAPCs are responsible for updating cardholder information in PCMS, providing training to cardholders, and monitoring purchases and reporting fraud, waste, and abuse in accordance with agency procedures. Currently, there are 137 Forest Service LAPCs.
In the Forest Service, cardholders are responsible for understanding and complying with purchasing policies and procedures; maintaining records and receipts of all purchases; validating their purchases against PCMS online data; disputing unauthorized charges; and obtaining all necessary prepurchase approvals for certain items, such as information technology (IT) purchases costing $1,000 or more and other purchases costing $2,501 or more. For all other purchases, that is, those costing $2,500 or less, the Forest Service cardholder is not required to obtain pre- or post-approval. During fiscal year 2001, approximately 14,000 of the approximately 30,000 employees, or over one-third of the Forest Service workforce, had purchase cards, and most of them had a single purchase limit ranging from $2,500 to $25,000. The single transaction limit applies to both the purchase card and any convenience checks issued to the cardholder.

In 1995, the Forest Service's use of the purchase card was limited to procurement personnel. However, with implementation of the President's National Performance Review recommendations, the Forest Service reduced its procurement staff by 27 percent by 1998. At the same time, USDA put together a task force to look at the procurement process and make recommendations to improve it. The task force recommended increasing the use of purchase cards within USDA, including the Forest Service, to streamline the procurement process. USDA rapidly expanded purchase card use, authorizing operations personnel as well as procurement personnel to use the cards.

USDA's Office of Procurement and Property Management (OPPM) and the National Finance Center (NFC) developed PCMS in 1995 to reduce administrative costs and to allow agencies faster procurement of goods and services. The system allowed USDA, including the Forest Service, to track, reconcile, and monitor purchases made using USDA purchase cards and convenience checks. PCMS is used by program coordinators to establish and manage cardholder accounts and by cardholders to reconcile and dispute their transactions from their desktop computers.

In 1998, USDA switched card issuers and issued a task order under the GSA contract to Bank of America. The Bank of America purchase card system, developed under the GSA contract and called the Electronic Access Government Ledger System (EAGLS), includes various tools for managing purchase card transactions. EAGLS is able to generate account activity reports, which identify trends such as purchases from merchants that would not be expected to be traditional suppliers or unusually high spending patterns; dispute reports, which identify cardholders with excessive disputes that may indicate cardholder misuse or fraudulent activity; and various other exception reports. Bank of America recommended that USDA also use EAGLS to manage its purchase card program. However, because PCMS was developed by USDA prior to its changeover to Bank of America, USDA officials chose to continue using PCMS, which they believed offered functionality similar to that of EAGLS. Bank of America processes purchase card transaction data received from vendors using EAGLS, which records the data and then sends it electronically to NFC; NFC uploads the data into PCMS and processes payments.

In August 2001, the IG issued a report on its review of PCMS, which identified several internal control weaknesses.
The report noted (1) a lack of supervisory review and approval of cardholder transactions, (2) untimely validation of purchases against PCMS data, and (3) inadequate monitoring by agency management. In addition, a private firm was contracted to perform an Independent Verification and Validation (IV&V) assessment of PCMS, which also reported weaknesses in accounting process controls and internal controls over purchase card transactions. Both the IG and contractor reports noted that cardholders were authorized to buy a majority of items they wanted at any time. The IG made several recommendations, which included
● instituting a requirement that supervisors periodically review and approve their subordinates' purchase card transactions to confirm that they are appropriate, are for official purposes, and are validated against PCMS data in a timely manner;
● developing and implementing appropriate internal control procedures over the custody, control, accountability, and issuance operations for convenience checks to ensure they are not misused; and
● instructing USDA agencies to review their controls for ensuring that they always properly record property purchases valued at $5,000 or more (called accountable purchases) in the Office of the Chief Financial Officer (OCFO)/NFC Property Management Information System.

To obtain an understanding of the Forest Service's purchase card and convenience check policies and procedures, and the related internal controls, we
● reviewed USDA and Forest Service procurement policy, USDA PCMS guidance, Forest Service regional purchase card program policy, U.S. Department of the Treasury purchase card program policy, and previous GAO reports, as well as reports issued by USDA's IG and an independent contractor; and
● observed and documented purchase card procedures and conducted telephone interviews with USDA and Forest Service management and staff to identify key purchase card, convenience check, and accountable property policies, procedures, and initiatives.

Because of known weaknesses in the design of internal controls at the Forest Service, we did not perform detailed tests to assess the effectiveness of these controls. However, we reviewed the internal control findings reported by the IG and the contractor in reports issued on the purchase card program and PCMS. In addition, we assessed the adequacy of the internal controls as designed, using our Standards for Internal Control in the Federal Government, Internal Control Management and Evaluation Tool, Guide for Evaluating and Testing Controls Over Sensitive Payments, and Executive Guide: Strategies to Manage Improper Payments.

To determine whether the Forest Service's fiscal year 2001 purchase card transactions were made in accordance with established policies and procedures, were reasonable, and reflected a legitimate government need, we selected transactions using three different methods. For each method, we provided the Forest Service with the transactions selected and obtained and reviewed related supporting documentation. The three methods are as follows:

Data mining. We performed data mining on Bank of America's database of the Forest Service's fiscal year 2001 purchase card and convenience check transactions for indicators of potential noncompliance with established policies and procedures.
Specifically, we looked for
● transactions that exceeded cardholder or convenience check spending limits,
● potential split purchases or duplicate transactions,
● cardholders with multiple cards,
● transactions on purchase card accounts after the separation dates of the employees, and
● cardholders who wrote convenience checks to themselves or for cash.
Except for potential split and duplicate transactions, we forwarded all selected transactions to the Forest Service APC to request supporting documentation from cardholders, which we used to assess whether these were in fact violations of policy. For split and duplicate transactions, we selected a statistical sample of transactions, as discussed below.

Statistical sampling. To test for split transactions, we first performed data mining to identify possible split transactions from the population of purchase card transactions paid from October 1, 2000, through September 30, 2001. We then selected a stratified random (statistical) sample of 213 of the 1,854 potential split transactions totaling $3.5 million. Similarly, to test for duplicate transactions, we first performed data mining to identify possible duplicates from the same population and then selected a stratified random (statistical) sample of 230 of the 8,659 possible duplicate transactions totaling $1.6 million. We requested supporting documentation for these transactions from the APC. Findings from both statistical samples were projected separately to total fiscal year 2001 Forest Service purchase card and convenience check transactions.

Nonstatistical sampling. We selected transactions nonstatistically to allow us to identify those that appeared to carry a higher risk of fraud, waste, or abuse, although the results cannot be projected to the overall population of purchases. We identified merchant category codes (MCC) or vendor names that appeared more likely to represent unauthorized or personal use items, and we chose a nonstatistical sample of high-risk transactions from the total population of transactions identified for each vendor or MCC selected. We then requested supporting documentation from the APC for over 5,000 transactions totaling $8.7 million that met these criteria, to test for improper purchases. In addition, we requested the records for more than 1,000 transactions totaling over $690,000 that were disputed by cardholders during fiscal year 2001. We reviewed these transactions to determine whether the cardholders properly complied with applicable purchasing policies and procedures for disputed transactions.

To determine whether controls over purchase card and convenience check equipment acquisitions were adequate to properly record and safeguard assets, we
● reviewed policies and procedures over the management and control of accountable property and sensitive items; and
● tested accountable property selected in the nonstatistical sample discussed above to determine whether these assets had been recorded in the Forest Service's property management system prior to our review.

While we identified some improper purchases, our work was not designed to identify all fraudulent or otherwise improper purchases made by the Forest Service. We conducted our review from April 2002 through March 2003 at the Forest Service Washington Office in Rosslyn, Virginia, and USDA headquarters in Washington, D.C., in accordance with generally accepted government auditing standards.
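The split-purchase and duplicate screens described above lend themselves to straightforward automation. The following Python sketch illustrates the general logic; the record layout, field names, and data are illustrative assumptions, not the actual queries run against the Bank of America database.

```python
from collections import defaultdict

# Illustrative transaction records; all field names are assumptions.
transactions = [
    {"id": 1, "cardholder": "A", "vendor": "Acme Supply", "date": "2001-03-05", "amount": 2400.00, "limit": 2500.00},
    {"id": 2, "cardholder": "A", "vendor": "Acme Supply", "date": "2001-03-05", "amount": 1800.00, "limit": 2500.00},
    {"id": 3, "cardholder": "B", "vendor": "Widget Co",   "date": "2001-04-10", "amount": 150.00,  "limit": 2500.00},
    {"id": 4, "cardholder": "B", "vendor": "Widget Co",   "date": "2001-04-10", "amount": 150.00,  "limit": 2500.00},
]

def potential_splits(txns):
    """Flag same-day, same-vendor purchases by one cardholder that are each
    within the single purchase limit but together exceed it, the classic
    pattern of a purchase split to evade the limit."""
    groups = defaultdict(list)
    for t in txns:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t)
    flagged = []
    for group in groups.values():
        total = sum(t["amount"] for t in group)
        limit = group[0]["limit"]
        if len(group) > 1 and total > limit and all(t["amount"] <= limit for t in group):
            flagged.append(group)
    return flagged

def potential_duplicates(txns):
    """Flag same-day, same-vendor, same-amount pairs, which may be duplicate
    billings (or legitimate repeat purchases, hence only 'potential')."""
    seen, flagged = {}, []
    for t in txns:
        key = (t["cardholder"], t["vendor"], t["date"], t["amount"])
        if key in seen:
            flagged.append((seen[key], t))
        else:
            seen[key] = t
    return flagged

print(len(potential_splits(transactions)), "potential split group(s)")
print(len(potential_duplicates(transactions)), "potential duplicate pair(s)")
```

As the methodology above makes clear, transactions flagged by such screens are only indicators: each flagged item still had to be assessed against the cardholder's supporting documentation before being counted as a violation.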
We requested written comments on a draft of this report from the Chief of the Forest Service. The Chief's written comments are reprinted in appendix I.

The Forest Service's internal controls did not provide reasonable assurance that improper purchase card and convenience check purchases would be prevented or would be detected in the normal course of business. Effective internal controls are the first line of defense in safeguarding assets and preventing and detecting fraud. In addition, they help to ensure that actions are taken to address risks, and they are an integral part of an entity's accountability for stewardship of government resources. Our Standards for Internal Control in the Federal Government contains the specific internal control standards to be followed. Among other things, these standards require that (1) key duties and responsibilities be divided or segregated among different people to reduce the risk of error or fraud, (2) transactions and other significant events be authorized and executed only by persons acting within the scope of their authority, (3) internal control monitoring be performed to assess the quality of performance over time and to ensure that audit findings are promptly resolved, and (4) physical control be established to secure and safeguard assets vulnerable to risk of loss or unauthorized use.

The IG report, issued in August 2001, covered purchases made during fiscal years 1999 and 2000. It noted several internal control weaknesses in USDA's purchase card program, including the Forest Service's: (1) a lack of supervisory review of purchase card transactions, (2) untimely reconciliation of purchases, and (3) inadequate monitoring by agency management. Because the IG report addressed significant internal control weaknesses and made several recommendations to address them, we did not conduct detailed tests of internal controls. However, through discussions with USDA and Forest Service officials and our reviews of purchase card policies and procedures, we confirmed that the Forest Service still did not have an adequate supervisory review process or sufficient program monitoring activities. Most importantly, we determined that the Forest Service continued to lack adequate segregation of duties and that property susceptible to theft or misuse was not adequately safeguarded. Our data mining of specific purchase card and convenience check transactions revealed numerous improper and wasteful purchases that could have been prevented or detected had these basic internal controls been in place. Without effective internal controls, the Forest Service does not have reasonable assurance that purchases are proper or that items purchased are safeguarded against loss or theft.

Our Standards for Internal Control in the Federal Government requires that key duties and responsibilities, including authorizing, processing, recording, and reviewing transactions and handling related assets, be divided or segregated among different people in order to reduce the risk of error or fraud. Simply put, no one individual should control all the key aspects of a transaction or event. The processing and recording duties for purchase card and convenience check transactions were automated and not performed by the cardholder. However, under Forest Service regulations, the majority of purchase card transactions do not require segregation of duties.
Cardholders are allowed to perform the key duties of authorizing purchases, receiving the related assets, and validating the purchases subsequent to payment. Although Forest Service guidance required that a requisition be prepared for all procurements as a method of establishing that the requestor had the authority for the purchase, a procurement request was not required for acquisitions below $2,500 made with a purchase card or convenience check. In fiscal year 2001, 96 percent of purchase card transactions were for amounts less than $2,500. Further, because purchase card purchases usually involved face-to-face transactions between the cardholder and the vendor, the cardholder received the assets. Lastly, Forest Service guidance required that cardholders reconcile their transactions in PCMS at least once a month using the documentation retained from each transaction; after reconciling a transaction, cardholders validate it by marking an "approved" cell in PCMS. Therefore, for the majority of Forest Service purchase card transactions made by individual cardholders, there is no separate authorization of the purchase, no separate custody of the items, and no independent validation of the transaction.

In discussions with USDA OPPM management and in our review of purchase card policies and procedures, we noted that segregation of duties was generally not adequately considered in the implementation of the purchase card program. When USDA, including the Forest Service, adopted PCMS, the revised procurement process gave a much larger population of Forest Service employees the authority to make purchasing decisions for goods and services, as well as the responsibility for validating these purchases, up to their single transaction limit. This new process did away with procurement requests and approving officials for the majority of purchase transactions below $2,500 initiated by Forest Service employees. As noted above, Forest Service purchase card program policies and procedures were written to support the increased authority given to cardholders in purchase decisions. The lack of proper segregation of duties increased the Forest Service's vulnerability to theft or misuse, since there was limited oversight or control to ensure that purchased items or services were for a legitimate government need and were being used for official purposes.

Supervisory approval of transactions is a principal means of assuring that only valid transactions are initiated or entered into by persons acting within the scope of their authority. A supervisory review of purchase transactions is particularly important where there is a lack of segregation of duties, because a supervisor or approving official may be the only person other than the purchaser who is in a position to identify an inappropriate purchase. Therefore, the supervisor or approving official's review is a critical internal control for ensuring that purchases are appropriate and comply with agency regulations. The August 2001 IG report recommended that USDA institute a requirement that supervisors periodically review and approve their subordinates' purchase card transactions to confirm that they are appropriate, and that USDA revise departmental regulations and purchase card program instructions accordingly. USDA did not concur with this recommendation because the IG audit had not found any material problems with the purchase card transactions it tested.
Further, in commenting on the report, USDA management stated that they believed the existing management structure was effective in ensuring that purchase card transactions are appropriate. During our review of the Forest Service purchase card program for fiscal year 2001, we also noted that supervisory review and approval of purchase card transactions was inadequate. The Forest Service did not require approval of purchase transactions under $2,500, except for certain IT items such as computer hardware, software, and cellular phones; it trusts cardholders to make appropriate purchasing decisions for transactions under this amount. During fiscal year 2001, purchases of less than $2,500 totaled $226 million and accounted for 96 percent of all purchase card transactions in the Forest Service.

While Forest Service guidance requires prior approval for all purchase transactions exceeding $2,500 and for specific IT items, we noted that this requirement was not consistently followed. We identified 11 transactions totaling $25,452 that required prior approval but were initiated and completed by cardholders without it. For example, we identified a $1,260 purchase of a printer from ComputerLand Center for which prior approval was not obtained as required by Forest Service policy. The cardholder stated that she did not obtain the proper approval because she was unaware of the requirement.

USDA issued its revised procurement regulation, Use of the Purchase Card and Convenience Check (DR-5013-6), in February 2003. The revised regulation added the cardholder's supervisor to the list of responsible persons in the purchase card program, describing supervisors as the first line of control over the purchasing activity of cardholders in their units. In addition, it states that supervisors will require cardholders to generate periodic reports of purchase card and convenience check transactions and that supervisors will review these reports at least quarterly, or more often if agency procedures require. However, OPPM officials told us that this new regulation does not require supervisors to review each and every transaction, nor does it require them to review supporting documentation. Rather, these reviews are completed using data that have been entered into PCMS and do not require cardholders to submit the original documentation for their purchases.

Both USDA and Forest Service officials told us that supervisory review of all transactions is not practical because of the Forest Service's decentralized organization. However, this very decentralization makes it even more imperative that a supervisor or other approving official validate purchases. Without an independent validation of transactions through supervisory review of supporting documentation, the Forest Service is at significant risk of misappropriation of funds due to fraudulent or improper charges. For example, as mentioned earlier, cardholders are required to "reconcile," or validate, their transactions at least once a month. During this process, cardholders view each individual transaction on their account in PCMS and agree the vendor name, transaction date, and transaction amount to the original documentation they maintain. In addition, cardholders enter the description of the items purchased, because this information is not initially included in PCMS.
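To make concrete why this self-reconciliation is a weak control, consider a minimal sketch of the check a cardholder performs. The record layouts and field names below are assumptions for illustration, not PCMS's actual schema.

```python
def reconcile(pcms_record, receipt):
    """Agree the vendor name, date, and amount recorded in PCMS to the
    cardholder's own receipt. Returns the fields that do not match."""
    return [field for field in ("vendor", "date", "amount")
            if pcms_record[field] != receipt[field]]

# The item description is keyed in by the cardholder and checked by no one
# else, so a purchase can "reconcile" cleanly even if the description bears
# no relation to what the receipt shows was actually bought.
pcms_record = {"vendor": "Department Store", "date": "2001-06-12",
               "amount": 344.00, "description": "office supplies"}
receipt = {"vendor": "Department Store", "date": "2001-06-12",
           "amount": 344.00, "items": ["personal merchandise"]}

print(reconcile(pcms_record, receipt) or "vendor, date, and amount all agree")
```

Because the same person makes the purchase, holds the receipt, and performs this check, the reconciliation confirms only that the bank billed what the cardholder spent, not that the spending was appropriate. That is the gap an independent reviewer comparing receipts to the PCMS descriptions would close, as the following case illustrates.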
While following up on questionable purchases that we identified, our investigators learned that three of these purchases, for $1,031 in jewelry and china from Meier & Frank, had been made by a Forest Service employee who had been under investigation by the IG since January 2002. In reviewing the IG Report of Investigation, we noted that the cardholder, when reconciling her purchases, had entered fictitious items into the item description field in PCMS. For example, the purchases from Meier & Frank were described as nonmonetary awards and length-of-service awards. In addition, purchases of CD players, computers, computer games, and other miscellaneous items at one vendor were entered into the PCMS description field as cartridges, chair mats, folders, binders, paper, pencils, and other supplies. A comparison of the receipts to the information in the PCMS database would have detected these purchases as potentially fraudulent. The Forest Service did not adequately monitor its purchase card program during fiscal year 2001 to ensure that Forest Service employees were following established policies and procedures. Program oversight through monitoring activities is important even when strong preventive controls are in place, and it is especially critical in the Forest Service's case, where there is a lack of supervisory review and segregation of duties. USDA regulations in place during fiscal year 2001 required that APCs and LAPCs monitor purchase card transactions through PCMS's alert subsystem, statistical sampling, and query tool software. In August 2001, the IG reported deficiencies in USDA's (including the Forest Service's) use of oversight tools for monitoring purchase card usage during fiscal years 1999 and 2000. Specifically, the IG reported that the department had not effectively implemented the alert subsystem of PCMS or implemented reviews of statistically sampled transactions, as required by USDA regulations. During our review of the Forest Service program for fiscal year 2001, we noted that it was still not using these tools to monitor transactions for compliance with program requirements or for improper purchases. In our discussions with OPPM officials in May 2002, the officials stated that they were not using the alert subsystem because it was generating too many alerts that did not represent true errors or abuse. They expected to correct the alert process by June 30, 2003, 6 months ahead of the original implementation date included in their corrective action plan to address the IG's findings. In addition, they informed us that they had not begun performing reviews of statistically selected samples during fiscal year 2001 but that they had begun performing these reviews during fiscal year 2002, distributing the results to the specific agencies, including the Forest Service, for follow-up on the identified transactions. The Forest Service APC confirmed that she received the transactions from OPPM and had distributed them to the specific field offices for investigation. Lack of timely and consistent monitoring activities increases the risk that inappropriate purchase card transactions and improper cardholder activities will go undetected. In addition, without adequate monitoring activities, systemic problems will not be identified and addressed.
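As one illustration of the statistically sampled reviews that USDA regulations required, the following minimal sketch pulls a simple random sample of transactions for examination. The sampling design OPPM actually used is not described here, so this sketch rests on assumptions and is not its methodology.

import random

def select_review_sample(transactions, sample_size, seed=2001):
    # A fixed seed makes the draw repeatable, so a reviewer can
    # re-create the same sample when following up on results.
    rng = random.Random(seed)
    return rng.sample(transactions, min(sample_size, len(transactions)))

# Example: select 250 of a year's transactions for detailed review.
transactions = [{"txn_id": i, "amount": 50.0 + i} for i in range(10000)]
print(len(select_review_sample(transactions, 250)))  # 250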
In our review of support for transactions identified using data mining techniques, we found that local coordinators were not always (1) canceling the accounts of permanent and temporary employees when they left the Forest Service, (2) being informed by cardholders when cards had been lost or stolen, or (3) monitoring disputed transactions to ensure that they were completely resolved and to identify unauthorized activity. Canceling purchase card accounts. Purchase card accounts were not consistently being canceled when cards were reported as lost or stolen or when a cardholder left the Forest Service. Forest Service guidance requires, in the case of a lost or stolen card, that the cardholder contact Bank of America to have a block placed on the account and a new card issued. The guidance also requires that cardholders, prior to leaving the Forest Service, surrender their cards and, if issued, unused convenience checks to the LAPC, who will destroy them and close the account. We reviewed employee separation procedures at 16 Forest Service regional and field offices and noted that the written procedures at 3 offices did not include steps to physically collect the purchase card from the employee. Further, Forest Service program officials told us that the personnel forms used in this process are out of date at one office, differ from region to region, and are inconsistently filled out. In addition, we noted that purchase cards were issued to temporary employees hired during fire season, when there is an increased need for manpower. According to the Forest Service APC, these cards are collected when the temporary employees leave the Forest Service. However, one of the officials we spoke with stated that the cards are not always retrieved. Instead, the purchase limits are reduced to $1 at that time. Our internal control standards require that an agency establish physical control to secure and safeguard assets that might be vulnerable to risk of loss or unauthorized use. Failure to collect purchase cards due to outdated and inconsistently applied policies and procedures creates a significant risk of unauthorized use of purchase cards. USDA purchase card policy states that cardholders are required to inform their LAPC immediately of lost or stolen purchase cards or convenience checks and to contact the card issuer in order to have the accounts blocked. However, we found instances in which LAPCs had not been notified that cards had been lost or stolen, and the cards had not been canceled. For example, we identified three instances where cardholders lost their cards but did not inform their LAPCs. Instead, in each case, the cardholder canceled the lost card and ordered a new card through Bank of America without the LAPC's knowledge. Monitoring disputed transactions. We found that cardholders were not always disputing transactions within 60 days of the transaction dates, as required by Forest Service policy and the GSA contract, and that disputes were not being monitored to ensure they were completely investigated and resolved. Forest Service policy requires that cardholders reconcile their purchase card transactions in PCMS every 30 days to ensure the recorded charges are appropriate and correct, and that they dispute any charges identified as inappropriate or erroneous. GSA's Blueprint for Success: Purchase Card Oversight states that agency officials should consistently monitor disputes filed by cardholders and watch for unusual trends, such as a high number of disputes for specific merchants.
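A minimal sketch of the merchant-level trend monitoring the Blueprint describes follows; the alert threshold and field names are illustrative assumptions.

from collections import Counter

def dispute_trends(disputes, threshold=5):
    # Count disputes per merchant and surface any merchant with an
    # unusually high number, a possible sign of a compromised account
    # or an abusive merchant.
    counts = Counter(d["merchant"] for d in disputes)
    return [(merchant, n) for merchant, n in counts.most_common()
            if n >= threshold]

# Example: the 22 Productivity Plus disputes discussed below would
# stand out immediately in such a report.
disputes = ([{"merchant": "Productivity Plus"}] * 22
            + [{"merchant": "Other Vendor"}] * 2)
print(dispute_trends(disputes))  # [('Productivity Plus', 22)]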
In addition, the GSA master contract with card issuers of government purchase cards states that charges disputed within 60 days of the transaction date will be investigated by the card issuer and appropriate credits issued. After 60 days, the cardholder is responsible for investigating the disputed charge. In fiscal year 2001, Forest Service cardholders disputed over 1,000 transactions totaling $690,157. Of these, we noted 62 transactions totaling over $51,000 that had not been disputed by cardholders within 60 days of the transaction date. Forest Service regulations do not require that cardholders inform their LAPC of transactions to be disputed before they are submitted to Bank of America. As a result, LAPCs and other management officials may be unaware of disputed transactions that may indicate potentially fraudulent or improper purchase card use, and therefore are not ensuring that unauthorized activity is identified, compromised accounts are canceled, and appropriate credits are issued. We noted that 76 transactions had been identified by 51 cardholders as potentially fraudulent but that the transactions still had not been resolved (i.e., credits issued) and the accounts were still open as of the end of our fieldwork. For example, a cardholder disputed a $600 charge at Dillards department store, stating that this was one of several charges against his account for this vendor and that none of the charges were legitimate. In addition, cardholders disputed a total of 22 charges, totaling $2,791, for a vendor named Productivity Plus. In all but 2 of these 22 transactions, the cardholders stated that they had attempted to reach the vendor but were unable to do so. However, no explanation was given as to why the cardholder accounts had not been closed. The Forest Service issued revised policies and procedures for monitoring purchase card usage in June 2001. The revised guidance required that for each Forest Service region, field office, and the Washington office, the Chief of Contracting Office (COCO) and LAPC perform monthly, quarterly, and annual reviews of cardholder purchases. The guidance also gave the COCOs the authority to revoke cardholder purchase card and convenience check privileges for inappropriate use. However, the revised regulations do not address the need for monitoring disputed transactions to help ensure that purchase cards that have been lost, stolen, or otherwise compromised are canceled and that disputed transactions are resolved. Inadequate monitoring is yet another gap in internal controls that leaves the Forest Service purchase card program open to waste, fraud, and abuse. Since 1999, we have designated financial management at the Forest Service as high risk on the basis of serious financial and accounting weaknesses. An area of particular concern has been the Forest Service's internal controls related to property. Internal controls are essential to safeguarding assets vulnerable to risk of loss or unauthorized use. However, we found that the Forest Service did not adequately track property bought with purchase cards. Forest Service policy requires that property costing more than $5,000 be entered into its personal property management system. Such property is also referred to as accountable property. USDA's property management regulations state that all accountable property acquired by purchase, transfer, construction, manufacture, or donation will be recorded in the property records at the time it is accepted by the receiving agency.
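One way to test compliance with this recording requirement is to match purchase card transactions for accountable property against property system entries and flag items recorded late or never recorded. The following minimal sketch assumes an illustrative data layout, not the format of the USDA property system; the 60-day tolerance mirrors the test we applied in the review described below.

from datetime import date

def recording_exceptions(purchases, property_records, max_lag_days=60):
    recorded = {r["item_id"]: r["recorded_on"] for r in property_records}
    late, missing = [], []
    for p in purchases:
        entered = recorded.get(p["item_id"])
        if entered is None:
            missing.append(p)                       # never recorded
        elif (entered - p["purchased_on"]).days > max_lag_days:
            late.append(p)                          # recorded late
    return late, missing

purchases = [{"item_id": "ATV-01", "purchased_on": date(2001, 4, 2)},
             {"item_id": "COPIER-7", "purchased_on": date(2001, 5, 9)}]
property_records = [{"item_id": "ATV-01", "recorded_on": date(2002, 6, 1)}]
late, missing = recording_exceptions(purchases, property_records)
print(len(late), len(missing))  # 1 1 -- one late entry, one never entered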
We reviewed supporting documentation for 108 accountable property items purchased in 64 separate transactions during fiscal year 2001, selected on a nonstatistical basis. In our review, we noted that 54 of these items were entered in the property system more than 60 days after the purchase transaction or not at all. Specifically, 34 of the items, totaling $266,074, or approximately 31 percent, had been recorded in the USDA property system more than 60 days after the purchase transaction. In many of those cases, it was several months before the property was recorded. For example, 8 of the items were not entered into the system for more than a year. In addition, we noted 20 property items, totaling $166,803, for which the Forest Service could not determine whether they had ever been entered into the property system. These items included 10 all-terrain vehicles, 3 copiers, 2 projectors, 2 generators, and 3 plasma monitors. This lack of accountability makes these assets particularly susceptible to loss or theft without detection. The Forest Service does not require that property costing under $5,000 be tracked unless the items are designated as "sensitive." Each USDA agency defines its own list of sensitive property and is responsible for providing this list to the cardholders. The Forest Service designates all firearms, frequency modulated land-mobile radios, precise positioning service Global Positioning System (GPS) receivers, IT equipment, and radiological equipment having a radioactive source as sensitive property, agencywide. Further, Forest Service guidance allows each of its regions and field offices to designate other items as sensitive. While the Forest Service allows field offices to categorize items under $5,000 as sensitive and thereby track them in inventory, there is no consistent definition of sensitive property across regions. For example, one Forest Service region considers VCRs, TVs, and CD players costing more than $500 to be sensitive property. Another regional office designates video and audio equipment costing more than $100 as sensitive. That particular region also considers survival equipment and clothing sensitive, while other regions do not. USDA regulations state that agencies shall be responsible for maintaining reasonable controls over their nonaccountable property to safeguard it against improper use, theft, and undue deterioration. In our review, we identified transactions totaling $439,789 for items that were not recorded in the Forest Service's inventory and that, while not specifically designated as sensitive, appear to meet both USDA's and the Forest Service's overall definitions of sensitive property. The cost of many of these items fell just under the $5,000 accountable property threshold. As shown in table 1, these items included all-terrain vehicles, cameras, GPS units, snowmobiles, and night-vision goggles. Without proper recording and accounting for these vulnerable assets, there is an increased risk of misappropriation of these items. For example, without tracking of these items, a supervisor may be unaware that a cardholder leaving the Forest Service had purchased one of these items and therefore could not ensure that the item remained in the possession of the Forest Service. In some Forest Service regions, employee checkout procedures attempt to mitigate this risk by requiring that an official certify that the employee has accounted for all property.
An inventory listing of these items would enable the supervisor to ensure that all vulnerable assets are properly accounted for when employees leave. USDA's revised regulations, issued in June 2001, prohibit the purchase of accountable and sensitive property except by warranted cardholders. However, the revised regulations did not mitigate the issues we identified regarding proper accounting for vulnerable assets. Therefore, these items continue to be at an increased risk of misappropriation. Table 2 summarizes the actions USDA and the Forest Service have taken to address many of the internal control weaknesses identified by the IG and/or us. We did not test the effectiveness of these actions because they were implemented subsequent to our review time frame. On reviewing the proposed actions, however, we found that in certain cases, even if properly implemented, they will still not fully remedy known vulnerabilities in internal controls. These cases are noted in the table. The lack of adequate internal controls resulted in violations of numerous federal acquisition requirements and USDA/Forest Service purchase card policies that we classified as improper purchases. These included (1) purchases that were split into two or more transactions to circumvent single transaction limits, (2) purchase transactions that were paid for twice, (3) purchases of unauthorized items, (4) purchases that exceeded single purchase limits, (5) unapproved information technology (IT) purchases, (6) transactions charged to purchase card accounts of former employees, and (7) convenience checks written by cardholders to reimburse themselves. Table 3 shows the total dollar amounts for exceptions we identified for each category. While the total amount of improper purchases we identified is relatively small compared to the more than $320 million in annual purchase card and convenience check transactions, it demonstrates vulnerabilities from weak controls that could easily be exploited to a greater extent. The above policy violations are discussed in more detail below. Split purchases. Using data mining techniques, we identified purchases that appeared to have been split into two or more transactions by cardholders to circumvent their single transaction limit. We requested supporting documentation for a statistically determined sample of 213 out of 1,854 potentially split purchases we identified. Of these 213, we confirmed 29 actual split purchases through the documentation we received and examined. Based on these results, we estimate that almost $1.3 million of the total fiscal year 2001 purchase card transactions were split transactions. For example, a cardholder with a single purchase limit of $2,500 purchased 13 toner cartridges totaling $3,918. The cardholder had the vendor split the purchase between two invoices to avoid exceeding her single purchase limit. In another example, a cardholder purchased $36,984 of safety equipment for rescue workers. The cardholder had the vendor separate the total charge into three charges to circumvent her single transaction limit of $25,000. The projected amount of split transactions may have been higher had we received all documentation requested. However, for 59 of the 213 sampled transactions, we could not determine whether they were split transactions because cardholders did not provide documentation through the APC to enable us to assess them.
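The screen for potential splits can be illustrated with a simple grouping routine: collect same-day charges by cardholder and vendor, then flag groups whose combined amount exceeds the cardholder's single transaction limit. This minimal sketch uses illustrative field names and a same-day window; the actual data mining criteria are not detailed in this report, and every hit still required examination of the underlying invoices.

from collections import defaultdict

def potential_splits(transactions, single_limits):
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t)
    return [group for (holder, _, _), group in groups.items()
            if len(group) > 1
            and sum(t["amount"] for t in group) > single_limits[holder]]

# The toner example above: two same-day invoices totaling $3,918
# against a $2,500 single purchase limit.
txns = [{"cardholder": "A", "vendor": "Toner Co", "date": "2001-02-05",
         "amount": 1959.00},
        {"cardholder": "A", "vendor": "Toner Co", "date": "2001-02-05",
         "amount": 1959.00}]
print(len(potential_splits(txns, {"A": 2500.00})))  # 1 flagged group

An analogous grouping, keyed on identical cardholder, vendor, amount, and date, surfaces the potentially duplicate charges discussed below.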
The purpose of the single purchase limit is to require that purchases above established limits be subject to additional controls to ensure that they are properly reviewed and approved before the agency obligates funds. If transactions are not monitored, these limits may be circumvented, and the Forest Service will have less control over the expenditure of its resources. Duplicate transactions. Using data mining techniques, we identified individual purchases that appeared to have been charged twice to cardholders' accounts. We requested supporting documentation for a statistically determined sample of 230 of the 8,659 potentially duplicate transactions we identified. Of these 230, we identified 6 actual duplicate transactions. Based on these results, we estimate that $177,187 of the total fiscal year 2001 purchase card transactions were duplicate transactions. The projected amount of duplicate transactions may have been higher had we received all documentation requested. However, for 30 of the 230 sampled transactions, we could not determine whether they were duplicate transactions because cardholders did not provide documentation through the APC to enable us to assess them. Supervisory review of the documentation supporting cardholders' transactions reduces the risk that duplicate charges would go undetected and result in financial losses to the government. In addition, an effective monitoring program at the APC/LAPC level would help flag these types of improper transactions. Purchases of unauthorized items. USDA purchase card policy states that the purchase card and convenience checks will not be used for the purchase of hazardous items such as firearms, ammunition, explosives, or hazardous biological and radioactive substances. However, we identified 10 transactions totaling $53,324 for purchases of ammunition, rifles, and explosives. For example, we identified two transactions for the purchase of rifles, which are used for animal control and other Forest Service activities. When we informed the cardholders that these transactions were improper, one cardholder stated that he was unaware that purchase card policy prohibited this purchase. The other cardholder stated that, as a warranted cardholder, she was allowed to purchase the rifle. This is not the case under Forest Service policy, and when we brought this to the attention of the Forest Service APC, she contacted the employee to inform her that the purchase was improper. Further, we identified a $500 purchase of ammunition, which was given to the local sheriff's department under a cooperative agreement. Under the agreement, the sheriff's department would patrol campgrounds because of manpower shortages within the Forest Service. The cardholder told our investigator that it is a common occurrence in his region to have cooperative agreements with local law enforcement agencies. When we discussed this transaction with the Forest Service APC, she expressed some concern as to whether the intent of the cooperative agreement program was being properly administered in the cardholder's region. Purchases that exceeded single transaction limits established by USDA policy. Through our data mining efforts, we identified 12 purchases totaling $41,445 that exceeded the cardholders' single transaction limits by 10 percent or more. Of the purchases we identified, we noted that none were made using the purchase card; instead, they were made using convenience checks that had been issued to the cardholders.
According to the Forest Service APC, when an individual cardholder uses a purchase card and the amount of the purchase is in excess of the limit, electronic controls established by Bank of America deny the transaction, and it cannot be completed. However, these controls do not exist for convenience checks. The cardholder's single transaction limit is printed on the face of his or her convenience checks. Yet when a cardholder writes a check in excess of that limit, only scrutiny by the vendor would identify this. According to the Forest Service APC, Bank of America honors all convenience checks. Therefore, when vendors submit checks written for amounts in excess of the cardholder's limit to Bank of America, the checks are accepted and processed for payment. This lack of control allows a cardholder to circumvent the single transaction limit, increasing the risk of unauthorized or improper purchases, even in cases where the Forest Service has reduced a cardholder's single transaction limit to $0 or $1 because of abuse of the purchase card program or separation from the agency. No preapproval of IT purchases. While the Forest Service generally does not require preapproval of purchases under $2,500, there are some specific categories of items for which prior independent approval must be obtained. According to Forest Service policy, cell phones, fax machines, scanners, and other IT equipment are not to be purchased without first obtaining approval from the appropriate IT personnel. However, we found 11 transactions totaling $25,452 for equipment, including cell phones, scanners, printers, and fax machines, that did not have this required preapproval. Transactions by cardholders separated from the Forest Service. Using data mining techniques, we identified purchase card accounts with charges totaling approximately $43,625 whose transaction dates appeared to be after the cardholders had left the Forest Service. In discussions with us, the Forest Service APC agreed that, based on the available data, $4,385 of these transactions could be confirmed as improper, having been made after the employees had left the Forest Service. For example, one former employee left the Forest Service on November 4, 2000, but PCMS records indicated that six purchases, totaling $1,632, were charged to the employee's purchase card account over the next 2 months at Ames and Kmart department stores. The Forest Service was unable to provide documentation to support the appropriateness of the remaining transactions, totaling $39,240. Cardholders wrote convenience checks to themselves. We found 26 instances, totaling $2,014, where cardholders wrote checks to themselves, contrary to Forest Service policies that prohibit this practice. Writing checks for cash is also an unauthorized transaction, according to USDA's micropurchase guide. In addition, the guide states that cardholders may issue checks to reimburse other employees for local travel expenses, such as mileage, parking, and taxis, authorized by their agency while on official business, or for miscellaneous expenditures (e.g., supplies, services, registration fees, and telephone use for official business) that were cleared with the cardholder before the purchase was made. However, the proper documentation must be completed, and the expenditures must be approved by an authorized official other than the cardholder.
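Screens for the convenience check and separated-cardholder exceptions described above are straightforward to automate. The following is a minimal sketch; the field names and data layout are illustrative assumptions rather than Bank of America's or PCMS's formats.

from datetime import date

def check_exceptions(checks, single_limits, separation_dates):
    over_limit, self_payee, post_separation = [], [], []
    for c in checks:
        holder = c["cardholder"]
        if c["amount"] > single_limits.get(holder, 0.0):
            over_limit.append(c)        # no electronic control blocks this
        if c["payee"] == holder:
            self_payee.append(c)        # prohibited by policy
        separated = separation_dates.get(holder)
        if separated and c["date"] > separated:
            post_separation.append(c)   # account should have been closed
    return over_limit, self_payee, post_separation

checks = [{"cardholder": "B", "payee": "B", "amount": 150.00,
           "date": date(2001, 1, 10)}]
flags = check_exceptions(checks, {"B": 2500.00}, {"B": date(2000, 11, 4)})
print([len(f) for f in flags])  # [0, 1, 1]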
In most of the cases in which cardholders wrote checks to themselves, the cardholders stated that they were unaware of the prohibition against writing checks to themselves for cash. The remaining cardholders stated that they were aware of the restriction but did so anyway to expedite their reimbursement because no other cardholders with checks were available at the time or, in one case, because the employee who reimbursed himself was the only one with checks at that location. The above examples illustrate not only a lack of adequate oversight but also the need for better training. According to USDA and Forest Service regulations, each cardholder is required to obtain some type of training before being issued a card. Each agency within USDA is responsible for training participants in accordance with USDA or agency-specific regulations and is allowed to determine the method of certification. The inadequacies and ineffectiveness of internal controls were also evident in the 779 wasteful and questionable transactions, totaling over $1 million, that we identified. Transactions we classified as wasteful were for items or services that were (1) excessive in cost compared with other available alternatives, (2) for a questionable government need, or both. We also identified other transactions that we classified as questionable because there was insufficient documentation to determine what was purchased. Lacking key purchase documentation, we could not determine what was actually purchased, how many items were purchased, the cost of each item purchased, or whether there was a legitimate government need for such items. Table 4 indicates the number of transactions and dollar amounts that we determined to be wasteful or questionable. These transactions are indicative of what can occur when purchase card use is not properly controlled. We tested only a portion of the transactions we identified as appearing to have a higher risk of fraud, waste, or abuse; there may be other improper, wasteful, and questionable purchases among the remaining untested transactions. We identified 135 purchases totaling $212,104 that we determined to be wasteful because they were excessive in cost relative to available alternatives, of questionable government need, or both. We considered items to be excessive in cost when less expensive alternatives would meet the same basic needs. We defined items as being of questionable government need when they appeared to be a matter of personal preference or personal convenience, were not reasonably required as part of the usual and necessary equipment for the work the employees were engaged in, or did not appear to be for the principal benefit of the government. Specifically, we identified 93 purchases totaling $127,319 that we considered excessive in cost, including purchases for digital cameras, premium satellite and cable TV packages, awards and gifts, and cancellation fees. In addition, we identified 42 purchases totaling $84,785 for which we questioned the government need. Such purchases included specialty costumes, PDAs, and PDA accessories. Forest Service policy requires that purchasers buy equipment, supplies, or materials that economically meet the needs of the government, avoid deluxe items when the requirements are satisfactorily met by less costly standard articles, and take into account the perspective of the user of the product.
When we reviewed the supporting documentation for many of the purchases we identified, we noted that the cardholders frequently did not document their determination, based on an evaluation of price and other factors, that the item purchased economically met the needs of the government and avoided deluxe items, as required by Forest Service purchasing policy. When we requested additional information from cardholders, they either did not provide the requested information or provided documentation that was inadequate to support that the specific purchase was in compliance with this policy. Items purchased at a price higher than that of available alternatives that would have met the same basic needs included: Digital cameras. During our detailed testing, we identified 66 digital cameras and accessories, purchased in 37 separate transactions totaling $61,243, that appeared to have been selected based on the personal preference of the cardholder rather than on the minimum specifications needed to support the anticipated use. Digital cameras are available at many price levels, with the price usually reflecting the technical specifications of the cameras and the options included. The Forest Service uses digital cameras for various purposes in its operations, such as taking digital images of nursery and reforestation activities throughout the nation. These images are used for publications, presentations, and workshops and are placed on the RNGR Web site as part of the technology transfer and technical assistance missions of certain teams. Depending on the intended use of the images, cameras must have certain capabilities, which the users should know or, at a minimum, have readily available guidance on. This helps to ensure that the cameras purchased meet but do not exceed the needs of the user. In our review of the supporting documentation and cardholder statements, we noted that some of the cardholders knew how the cameras were going to be used. However, they did not know what minimum technical capabilities the cameras had to have. Cardholders purchased cameras with data resolutions ranging from 1 megapixel to 4 megapixels and prices ranging from around $350 to over $1,900; the choices appeared to be based on personal preference, not Forest Service need. Forest Service policy states that the requirements of an item must be taken into account in its purchase to ensure that it economically meets the needs of the government and to prevent the purchase of deluxe items when the requirements are satisfactorily met by less costly standard articles. However, the Forest Service had not developed guidelines on the purchase of high-tech items such as digital cameras. In addition, the individual transactions by cardholders at various vendors usually involved the purchase of only one or two cameras at a time. This does not allow the Forest Service to achieve possible economies of scale by purchasing cameras from a single vendor at a discount. Premium satellite and cable TV packages. We identified 21 transactions totaling $4,843 for monthly satellite television programming. The Forest Service is authorized to provide minimum recreation facilities and opportunities for its employees consistent with the degree of isolation and permanence of the individual work centers. However, in each television entertainment purchase we identified, the cardholder had contracted not for the basic service offered by the vendor but for a premium package, such as HBO, Cinemax, NFL or NBA games, or pay-per-view movies.
In one instance, the invoice included charges for pornographic movies. In addition, we noted one $833 transaction for Direct TV service that the cardholder stated was needed to allow the office to track weather conditions in that part of the country. We questioned the need for this capability, given that detailed weather tracking is accessible on the Internet, which, according to a Forest Service telecommunications manager, is available in all offices. In addition, in reviewing the invoice supporting this transaction, we noted that the cardholder had also subscribed through the vendor to 2001 NFL Sunday Ticket ($199), a subscription for viewing NFL games on Sundays during the NFL season. Awards and gifts. We noted several purchases of awards and retirement and farewell gifts for which adequate supporting documentation was not provided or for which the award items purchased were not in compliance with USDA policy. USDA policy provides for a number of performance award categories and criteria for each, and it requires that the purpose and type of award given be documented. Nonmonetary awards, according to USDA policy, are time-off awards, keepsakes, letters of appreciation, and honorary awards. We identified purchases for which the Forest Service was unable to identify the purpose of the award or provide supporting documentation. For example, we identified eight transactions, totaling $13,694 in award purchases, for items including hats, mugs, backpacks, and blankets purchased from vendors such as Warner Bros., Eddie Bauer, and Mori Luggage and Gifts, in which the cardholder gave either no justification or inadequate justification for the purchases. In addition, USDA's regulation on career service recognition states that awards are intended to recognize employees for their special efforts and to motivate others who witness the presentation. According to the regulation, employees should not be recognized monetarily when they leave USDA, either through retirement or separation. However, USDA agencies may consider providing some form of honorary or nonmonetary recognition of an employee's efforts in support of USDA's mission. Items such as plaques or pins are considered appropriate and may be presented. However, we identified one transaction for the purchase of a golf bag as a farewell gift and another for the purchase of a rifle as a retirement gift for an employee. Cancellation fees. We found two transactions totaling $34,950 for cancellation fees for rooms not used by Forest Service employees: one for a conference and one for housing for a seasonal work crew. Specifically, the Forest Service paid a $30,000 cancellation fee to the Doubletree Hotel in Denver, Colorado. The contracting officer did not recall the specific facts related to this transaction, except that the program office rescheduled the conference several times with the hotel and then finally canceled, but not in time to avoid the fee. The Forest Service also paid a $4,950 cancellation fee to the Rain Country Bed and Breakfast for late cancellation of its reservation to house seasonal workers. We also found government expenditures that appeared to be for items that were a matter of personal preference or convenience, were not reasonably required as part of the usual and necessary equipment for the work the employees were engaged in, or did not appear to be for the principal benefit of the government. These included the following. Specialty costumes and decorative tent.
Forest Service policy provides for purchases to promote programs related to Smokey Bear and Woodsy Owl. We noted three transactions totaling $8,750 for costumes not related to these two programs. For example, the Forest Service purchased two fish costumes, Frank and Franny Fish, from the Carol Flemming Costume Design Studio at $2,500 each, to be used for aquatic education in the Pacific forest regions. The cardholder explained that these characters are to the fisheries program what Smokey Bear is to the fire program. We also identified a transaction totaling $3,750 for 39 "web of life" costumes, including animal and nature themes, to be used in education programs. However, Forest Service policy does not support the purchase of these costumes, and the cardholder's statement does not establish sufficient government need for the costumes to support regional programs. In addition, we identified a $7,500 purchase of a hand-stitched "salmon tent" from Evelyn Roth Festival Arts. The supporting documentation did not provide a purpose for the purchase of the tent, only a note that the Forest Service had purchased several of these tents over the last 5 years or so. Personal digital assistants (PDAs). During our review, we identified 11 transactions for the purchase of 14 PDAs and accessories totaling $8,768 from vendors such as Palm Computing, Casio, Amazon.com, and Best Buy. The Forest Service does not have a policy on the purchase of PDAs, handheld electronic devices that function as calendars, address books, and other personal administrative aids. By comparison, paper calendars and daily planners cost from $6 to $56, with refills for the daily planner costing about $20. We noted that some cardholders had purchased high-end items such as the Palm V and Palm M505, with costs ranging from $350 to $450. Alternatives such as Palm's M105 model retailed for approximately $200 at the time of these transactions. In our review of the supporting documentation for these purchases, we found nothing to show how the cardholders determined that the PDAs were necessary to fulfill a valid government need rather than the personal preference of the cardholders. For example, one cardholder purchased a single IBM WorkPad with a HotSync cradle for $829 to use as a calendar and address book and to check e-mail messages. Another cardholder purchased six new PDAs from Casio Electronics by trading in six Forest Service PDAs plus $199 each. When asked to explain the need for the newer PDAs, he responded that the newer ones were faster and had more memory to support e-mail. Lastly, the Forest Service incurred other expenses for items to support the PDAs, such as keyboards and carrying cases. In one instance, we identified a purchase of PDA keyboards, totaling $374, that, according to the cardholder, would be used for taking notes in meetings. Cordless phones and headsets. We noted several purchases of cordless phones or headsets for Forest Service employees for which cardholders were unable to provide documentation supporting the necessity of the item for performing their duties. Instead, the purchases have the appearance of having been made for personal convenience. For example, a Forest Service cardholder purchased cordless phones and handsets totaling $2,242. When we asked why the phones were needed, the cardholder responded that they were purchased for ease of use and to enhance the workplace for certain employees.
We also identified numerous other individual purchases that we considered to be wasteful due to excessive cost or questionable government need. Such purchases included $9,219 for six pairs of night-vision goggles ($1,536 each, on average), which we found available at prices ranging from $379 to $1,949; $2,701 for sound-masking equipment, which the cardholder stated was needed to reduce the level of noise coming from the cubicles in the regional office where she worked; $2,929 paid to Hair of the Dog for an aquarium in an Alaska regional visitor information center; $2,295 paid to Quality Billiards for a billiard table for a Forest Service bunkhouse; $2,204 paid to Best Buy for TV/VCR combinations and their installation in Forest Service vehicles; $589 paid to Ultimate Electronics for a DVD player to be used by employees to watch exercise videos in the fitness room; and $200 for a leather briefcase. Until the Forest Service provides adequate management oversight of its purchase card program, including more thorough, systematic review and monitoring of expenditures with appropriate disciplinary action when warranted, the types of wasteful and abusive purchases we identified are likely to continue. Forest Service policy requires that cardholders maintain adequate documentation of all purchase card and convenience check transactions. As discussed earlier in this report, we requested supporting documentation for a nonstatistical sample of over 5,000 transactions. Of these, we identified 644 transactions totaling $869,825 that appeared to be improper, wasteful, or potentially fraudulent but for which the Forest Service provided either insufficient documentation or no documentation to determine the propriety of the transactions. For 104 transactions, totaling $184,682, that appeared to be either improper or wasteful, the documentation we received was inadequate or was not the correct supporting documentation, and we were unable to make a determination of the propriety of the transactions. For example, we requested supporting documentation for a $2,315 transaction charged by Unisys Corporation. Supporting documentation was not provided to us; the Forest Service explained that the employee knowledgeable about this charge had left the Forest Service and that the documentation related to the purchase could not be located. The remaining transactions represented purchases made at various vendors, such as $5,803 at Have Party Will Travel; $4,940 at Spencer's TV & Appliance; $2,400 at Grand Home Furnishings; $2,828 at Lowder's Home Entertainment; $1,729 at Mick's Scuba Inc.; and $3,430 at Samson Tours. We also identified 213 transactions, totaling $68,706, that appeared to be unauthorized and for personal use, made using compromised accounts, or charged by merchants without authorization, but for which adequate documentation was unavailable to allow us to determine the propriety of the purchases. We were subsequently able to determine that several of the transactions were in fact fraudulent. These fraudulent and potentially fraudulent transactions included the following. Transactions made by cardholders that appeared to be unauthorized and intended for personal use. For example, we identified three transactions totaling $1,031 in jewelry and china for one cardholder that appeared to be unauthorized or for personal use.
In the course of our follow-up inquiries, we found that the cardholder had been under investigation by the IG since January 2002, when an employee at a local vendor expressed concerns to a Forest Service employee about some purchases by another Forest Service employee. In our review of the USDA IG Report of Investigation on this cardholder, we noted that the fraudulent activity identified by the IG spanned from May 1999 through January 2002. During this period the cardholder purchased five digital cameras totaling $2,960, six computers totaling $6,019, three Palm Pilots totaling $736, jewelry totaling $1,967, and various other items, including cordless telephones, figurines, and Sony PlayStations, totaling $6,101. On December 2, 2002, the employee pleaded guilty to one felony count of theft of government money and property in the amount of $31,342. In addition, we identified one transaction, totaling $511, at a tribal bingo casino for another cardholder who, according to the IG, was also under investigation at the time of our review. Transactions made using compromised accounts, in which a purchase card or account number was stolen and used to make unauthorized purchases. For example, we identified unauthorized transactions for $692 at Kmart, Circuit City, and other vendors by a person other than the cardholder using the cardholder's account number. The cardholder contacted one of the merchants about the charges and was told that the merchant's security personnel had requested personal identification from the individual after the purchase, but the individual left the store and did not return. The cardholder's account was canceled. In addition, we identified a transaction that had been disputed by a cardholder; upon investigation, the cardholder determined that the charge had been incurred by an employee of a local vendor for calls made to a phone-sex line. Unauthorized transactions charged by merchants to cardholder accounts. For example, we identified 20 disputed transactions, totaling over $2,700, for one merchant, Productivity Plus. On the basis of cardholder explanations we reviewed in PCMS's dispute screen, it appeared that the merchant had obtained several cardholder account numbers and charged amounts to them without the authorization of the cardholders. For the remaining 327 transactions, totaling $616,437, the cardholders provided no documentation to the APC. Lacking key purchase documentation, we could not determine what was actually purchased, how many items were purchased, the cost of individual items purchased, or whether there was a legitimate government need for the items. Based on the vendor names and merchant category codes (MCCs), which identified the types of products or services sold by these vendors, we believe at least some of these purchases might have been determined to be improper or wasteful had the documentation been provided or available. These transactions included $2,178 in purchases from Best Buy, $2,500 from BUY.COM, $6,840 at HPSHOPPING.com, $4,100 from Party Time Inc., and $3,185 from USA Tours. Most of these were single transactions by individual cardholders. However, we noted several cardholders with multiple transactions who did not provide us with supporting documentation for their purchases. For example, one cardholder in the Pacific Southwest Region did not provide documentation for five transactions of electronics purchases totaling $3,349 that appeared to be either improper or wasteful.
Another cardholder in the Pacific Southwest Region did not provide documentation for six transactions, totaling $11,267, for online services, electronics, and one payment by convenience check. The Forest Service lacks certain basic internal controls over its purchase card program and thus is susceptible to waste, fraud, and abuse. The IG, in its August 2001 report, also identified many of the same control weaknesses that we did. The Forest Service took several steps to address these problems when it issued revised purchase card regulations in June 2001, December 2002, and most recently, in conjunction with USDA, in February 2003. However, the revised regulations did not fully address the critical issues reported by the IG and confirmed by us as continuing weaknesses during our audit, such as supervisory review, effective monitoring of purchase card transactions, and property accountability. Until these weaknesses in fundamental internal controls are addressed, the types of improper, wasteful, and potentially fraudulent purchases we identified are likely to continue, and certain assets will remain vulnerable to theft. The Forest Service will have to thoroughly reassess and strengthen its current policies and procedures to address the weaknesses identified, develop a strong commitment at all levels of the agency to carry out these policies and procedures, and implement appropriate oversight to continually assess their effectiveness. We recommend that the Chief of the Forest Service take the following actions to strengthen internal controls and compliance in its purchase card program, decrease improper and wasteful purchases, and improve the accountability over assets. With regard to improving the Forest Service's internal controls over purchasing, we recommend that the Chief of the Forest Service do the following. Establish policies and procedures that segregate duties for at least some phases of the purchasing process when using the purchase card. The Forest Service program should ensure that no one individual is able to take all the steps needed to request, purchase, receive, maintain, and validate goods and services. Establish policies and procedures requiring that supervisors review and validate all of their subordinates' purchase card transactions, including reviewing original supporting documentation, to confirm that the transactions are appropriate, for official purposes, and reconciled in a timely manner. Strengthen policies and procedures to ensure that the appropriate LAPC is notified, and that the LAPC cancels cardholder accounts immediately, when a purchase card is lost or stolen or a cardholder leaves Forest Service employment. Establish a systematic process that the APC can use to track and monitor training for cardholders and program coordinators to help ensure that they receive (1) training before being granted purchase cards or approval authority and (2) timely, periodic refresher training in areas such as proper segregation of duties, purchasing policies and procedures, supervisor and program coordinator responsibilities for reviewing and approving individual purchases, and reporting of potential purchase card fraud and abuse. Revise and strengthen policies and procedures for cardholders who have had their purchase card use suspended or limited to ensure that similar action is taken on the use of convenience checks. Revise and strengthen policies and procedures over disputed transactions to ensure that all disputed transactions are identified in a timely manner and completely resolved.
Establish policies and procedures to ensure that original documentation is maintained in central locations, such as regional offices, so that it is readily available for periodic monitoring reviews by supervisors, LAPCs, and COCOs. Revise and strengthen policies and procedures for designating property costing under $5,000 as "sensitive" to include all equipment susceptible to theft. Also, ensure that the revised policies and procedures are applied consistently across all Forest Service regions. Establish policies and procedures to ensure that all sensitive and accountable personal property used in Forest Service operations is promptly entered into the PROP system or another comparable system and that a periodic inventory of the items is taken. With regard to improving and enforcing compliance with purchasing requirements at the Forest Service, we recommend that the Chief of the Forest Service do the following. Implement monitoring techniques to identify improper transactions, such as split purchases, checks written by cardholders payable to themselves, purchases exceeding established dollar thresholds, and purchases of unauthorized items. Revoke or suspend the purchasing authority of cardholders who are found to be frequently or flagrantly noncompliant with policies and procedures. With regard to purchases that may be at an excessive cost or for questionable government need, we recommend that the Chief of the Forest Service do the following. Require purchases of certain assets, such as computer equipment, PDAs, and other electronics, to be coordinated centrally to take advantage of economies of scale, standardize the types of equipment purchased, and better ensure a bona fide government need for each purchase. Develop and implement purchasing guidelines, based on specific Forest Service uses, for equipment such as digital cameras and projectors. Require that cardholders document their determination that purchased items economically meet the needs of the government, based on an evaluation of price, consideration of the item's expected use, and other factors. Follow up on the transactions we identified for which no supporting documentation was provided to determine whether the items purchased were for a legitimate government need, and take appropriate disciplinary or corrective action as warranted. The Forest Service provided written comments on a draft of this report. In its response, the Forest Service did not specifically comment on our recommendations. However, the response acknowledged that some of the internal control weaknesses identified in our report existed both prior to and during our review. The response further outlined actions taken or planned since June 2001 to strengthen the overall management of the purchase card program, actions the Forest Service described as having been taken notwithstanding our report. We acknowledged many of these actual and planned actions in our report and believe that these actions, if fully implemented, will help to address some of the vulnerabilities that the IG and we identified. However, as shown in table 2 on page 30 of our report, many weaknesses will still remain that continue to expose the Forest Service to improper, wasteful, and fraudulent purchase card activity. Our 15 recommendations address the remaining weaknesses identified in the table and elsewhere in our report.
Specific actions taken, as outlined in the Forest Service's response, included, among other things, requiring definitive levels of auditing of purchase card transactions, performing data mining queries of transaction data to identify potentially questionable purchases, and conducting training for regional and local agency program coordinators. In its response, the Forest Service also stated that in fiscal year 2003 USDA issued an Internal Control Blueprint to decrease risks and improve internal controls over the purchase card program. In response, the Forest Service developed a Plan for Improving Internal Controls (Plan) that included improvements such as significantly decreasing the use of convenience checks beginning in fiscal year 2003, with the goal of totally eliminating them in the future; reducing the number of cardholders by 10 percent; developing additional data mining queries, including PCMS alerts and statistical sampling; ensuring that the ratio of LAPCs to cardholders is appropriate; and requiring supervisors to review cardholder purchases, including backup documentation. If fully institutionalized and enforced, the actions included in the Forest Service's Plan, along with those actions previously taken, will go a long way toward identifying improper purchases. However, it will be important that these actions be carried out in a systematic manner. Further, even if these actions are implemented systematically, they still fall short of mitigating certain internal control weaknesses that are addressed by the 15 recommendations in our report. Specifically, the Forest Service letter outlined actions to strengthen monitoring, such as the monthly, quarterly, and annual transaction reviews by LAPCs and COCOs, data mining queries developed and furnished to coordinators, and reviews by regional offices of audits performed by local offices. While these revised policies will provide much-needed oversight at a macro level, these actions do not specifically address our recommendations regarding controls over the cancellation of lost or stolen cards, disputed transactions, training, and the maintenance of documentation in a central location. The Forest Service letter stated that its Plan requires supervisors to review cardholder purchases, including backup documentation. Upon review of the Forest Service's Plan, we noted that, contrary to the statement in the response letter, it does not require supervisors to review backup documentation. The Plan states only that the Forest Service will communicate, by July 15, 2003, the requirement for cardholders' supervisors to review transactions quarterly in accordance with DR 5013-6, which also does not require that supervisors review backup documentation. We confirmed our understanding of this in discussions with USDA OPPM officials. We continue to believe that, given the lack of segregation of duties, the decentralization of the organization, and the ratio of cardholders to LAPCs, limited post-reviews are not sufficient to detect or prevent inappropriate transactions. As recommended in our report, we believe that the Forest Service should establish policies and procedures requiring a front-line review by supervisors to validate all of their subordinates' purchase card transactions, including review of original supporting documentation, to confirm that the transactions are appropriate, for official purposes, and reconciled in a timely manner.
The Forest Service response also did not discuss any actions taken to reduce purchases that are of excessive cost or for questionable government need. In our report, we recommended that the Forest Service purchase certain assets centrally, develop purchasing guidelines, and require that cardholders document that items meet the needs of the government. We believe that our recommendations, if implemented, will assist in reducing waste in the purchase card program. In the area of property accountability, the Forest Service responded that unwarranted cardholders are no longer permitted to acquire accountable property with purchase cards. Further, the letter stated that the Forest Service plans to issue guidance requiring that all property be labeled as Forest Service property and prohibiting regions from individually determining what property is considered sensitive. However, these new policies will not require the tracking of items costing under $5,000, such as PDAs, cameras, all-terrain vehicles, and snowmobiles, that we consider to be at high risk for theft or misuse. USDA has determined that the $5,000 accountability threshold is the level of acceptable risk for tracking property in the property system. USDA has further determined that items such as PDAs and digital cameras rapidly lose their value and usefulness and that, therefore, the cost of tracking and maintaining property records for these types of items exceeds their value. We disagree with this position. None of the documentation we reviewed or individuals we spoke with indicated that the uses for which these items were purchased will change dramatically or cease altogether in the near term; thus, these items will continue to be useful for some time to come. We are not suggesting that items costing less than $5,000 be capitalized for financial reporting purposes; however, we continue to believe that the Forest Service should track these items to help ensure accountability over them and to mitigate the risk of misappropriation. The Forest Service response also characterized the $2.7 million of alleged improper, wasteful, and questionable purchases that we identified as relatively small compared with the $320 million in purchases during fiscal year 2001. While we acknowledge this in the report, we also note that these improper transactions demonstrate vulnerabilities from weak controls that could be exploited to a greater extent. Further, in performing our review, we identified approximately 68,000 transactions that appeared to be at a higher risk of being improper or wasteful. However, we selected only 5,000 of these transactions for detailed review; therefore, the actual amount of improper purchases at the Forest Service is likely higher than what we identified. The Forest Service response further stated that it appears that "GAO's goal is a risk free micro-purchase program that would include approval and/or review of each and every micro-purchase transaction." While no purchase card program can be risk free, the goal of our recommendations is to reduce the level of risk in the Forest Service program to an acceptable level. Currently, we believe that the risk of waste, fraud, and abuse in the program is unacceptably high.
A micro-purchase program should and can be designed with certain basic internal controls that need not be costly or onerous to implement. Such controls help ensure that improper transactions are detected or prevented in the normal course of business and, therefore, that taxpayer funds are effectively used toward the achievement of agency goals and objectives. The Forest Service’s written comments and our evaluation of certain of those comments not addressed above are presented in appendix I. As arranged with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days from its date. Then we will send copies of this report to the Ranking Minority Member of the Senate Committee on Finance, congressional committees with jurisdiction over the Forest Service and its activities, the Secretary of Agriculture, the Chief of the Forest Service, and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-8341 or [email protected] or Alana Stanfield, Assistant Director, at (202) 512-3197 or [email protected]. Major contributors to this report are acknowledged in appendix II. The following are GAO’s comments on the Forest Service’s letter dated July 7, 2003. 1. We received summary documentation for the number of cardholders from USDA’s Office of Procurement and Property Management and the Forest Service that supported approximately 14,000 and 11,000 cardholders, respectively. Since USDA’s Office of Procurement and Property Management is responsible for the oversight of the purchase card program for all of USDA’s agencies, we used the number of cardholders that it provided for fiscal year 2001 in our report. 2. Discussed in the “Agency Comments and Our Evaluation” section of the report. 3. Of the 29 split purchases identified in the draft report provided to the Forest Service for comment, 4 were made by cardholders who were also warranted employees (employees who can enter into, administer, or terminate contracts to the extent of the authority delegated to them). The contracting authority limit for a warranted Forest Service employee is separate and distinct from the single transaction limit for purchase card transactions. The Forest Service response stated that USDA regulations allowed a single purchase limit of $2 million or the cardholder’s warrant level. According to USDA purchase card regulations, warranted cardholders may conduct transactions up to the lesser of their purchase card single transaction limit or warrant authority. For all 4 purchases mentioned above, the total invoice amounts exceeded the single transaction limits of the cardholders. Therefore, the cardholders violated USDA regulations by splitting the invoice amount into separate purchase card transactions to circumvent their single transaction limits. Further, the Forest Service requires that cardholders submit requisition forms for all purchases exceeding $2,500 to ensure that they are properly reviewed and approved. However, requisition forms were not submitted by the cardholders for these 4 purchases, in violation of these policies and procedures. In addition, none of the 29 split purchases identified in our report reflected transactions with GSA Advantage.
4. As stated in our report, the Forest Service was unable to provide us with documentation to support the appropriateness of $39,240 of the $43,625 in transactions that appeared to have occurred after the cardholders left the Forest Service. The Forest Service confirmed the remaining $4,385 as having been charged after the cardholder left the agency. 5. As part of our review, we tested compliance with existing Forest Service policies and procedures that were meant to prevent or detect improper payments, including the policy that cardholders are prohibited from writing checks to themselves. We identified 23 transactions that were in clear violation of this policy, indicating that this control was not functioning effectively. Although the purchases related to these particular transactions were not determined to be improper, this control weakness leaves the Forest Service vulnerable to improper purchases. The Forest Service’s internal control plan supports eliminating the use of convenience checks for non-emergency purchases, as well as other measures that should reduce the risk of improper use of convenience checks. However, as these steps have not yet been fully implemented, we are unable to assess their effectiveness. 6. The original requests for supporting documentation were made between June 20 and July 26, 2002. We asked the Forest Service to provide documentation on the last request by August 16, 2002. Subsequently, we extended the deadline until November 30, 2002, more than four months after our last request for information. In a status meeting held on December 4, 2002, we informed OPPM and Forest Service officials that we had not received any supporting documentation for 327 transactions included in our requests and that these transactions would be categorized as questionable transactions in our report. We explained that continuing to accept this documentation would require us to significantly delay issuance of our report because of the time required to adequately review and assess any new documentation. OPPM and Forest Service officials both concurred with our position. Subsequently, the Forest Service offered to provide us with supporting documentation for 200 of the 327 transactions, and we declined per the agreement reached during the December meeting. In addition to those named above, the following individuals made important contributions to this report: William Brown, Sharon Byrd, Cary Chappell, Lisa Crye, Francis Dymond, Jeffrey Jacobson, Jason Strange, and Ed Tanaka.
Since 1999, GAO has designated Forest Service's financial management as a high-risk area because of internal control and accounting weaknesses that have been identified by the Inspector General and GAO. Given these known risks and the hundreds of millions of dollars in credit card purchases made by the agency each year, GAO was asked to review the Forest Service's fiscal year 2001 purchase card transactions to determine whether (1) existing internal controls were designed to provide reasonable assurance that improper purchases would be prevented or detected, (2) purchases were made in accordance with established policies and procedures, and (3) purchases were made for a reasonable cost and reflected a legitimate government need. Internal control weaknesses in the Forest Service's purchase card program leave the agency vulnerable to, and in some cases resulted in, improper, wasteful, and questionable purchases. These weaknesses included inadequate segregation of duties over purchases, inadequate supervisory review and approval of purchases, insufficient monitoring activities, and weak control over property used in Forest Service activities. For example, GAO found instances where items highly susceptible to theft, such as all-terrain vehicles, digital cameras, and snowmobiles, were purchased and retained by cardholders, but no records of the items were created in Forest Service systems. These weaknesses likely contributed to the approximately $2.7 million in improper, wasteful, and questionable purchases identified in GAO's review. GAO identified purchases totaling over $1.6 million that were improper because they violated law, regulation, or agency policy. These included purchases that had been split into two or more segments to avoid the cardholder's single purchase limit, purchases that had been paid for twice, purchases that exceeded single transaction limits, purchases for which required approvals were not obtained, purchases of unauthorized items, transactions on accounts of former employees, and instances where cardholders wrote convenience checks to themselves. GAO also found purchases totaling $212,104 that it considered wasteful because they were excessive in cost relative to available alternatives or were for a questionable government need. Further, GAO found purchases totaling $869,825 that it considered questionable because the Forest Service either could not provide supporting documentation for them, or the supporting documentation was incomplete or incorrect, and GAO was therefore unable to determine whether the purchases were proper.
Prior to passage of the CFO Act in 1990, the seemingly never-ending disclosures of fraud, waste, abuse, and mismanagement in federal programs painted a picture of a government unable to manage its programs, protect its assets, or provide taxpayers with the effective and economical services they expect. As the Comptroller General pointed out around the time the CFO Act was passed, the problems that existed were not limited to a few agencies or a few programs; rather, all of the major agencies had serious problems. For years, GAO and the Inspectors General had reported a litany of problems that resulted in wasteful spending, poor management, and losses totaling billions of dollars. In some cases, the government’s ability to carry out crucial programs had been severely hampered. The financial management environment was in such disarray that not only were audited annual financial statements not required, most in the federal financial management community did not even see the value of annual financial reporting. The need for federal financial statements was an area of major disagreement. Budgeting was what financial management was all about, and accountability for how the money was being spent was not the priority. Federal managers were more concerned with the budget and did not focus on improved financial management. Concerned about the ever-growing problems in federal financial management and the need for a more integrated, comprehensive, and systematic approach to reform, in 1985 the Comptroller General issued a two-volume report entitled Managing the Cost of Government: Building an Effective Financial Management Structure. This report laid out the nature and scope of the problem and provided the conceptual framework for the reforms needed to improve federal financial management and thereby manage the cost of government. The problems identified included the poor quality of financial management information and antiquated financial management systems; the recommended reforms included strengthened accounting, auditing, and financial reporting and the systematic measurement of performance. In addition, GAO and the Office of Management and Budget (OMB) conducted separate studies of “high risk” programs that demonstrated breakdowns in internal control affecting hundreds of billions of dollars. Since January 1990, GAO has designated certain federal programs and operations as high risk because of their greater vulnerabilities to fraud, waste, abuse, and mismanagement. The increased awareness of the nature and extent of high-risk programs further bolstered the need for broad-based congressional action. While the federal government had not made concerted progress in reforming financial management, state and local governments had moved beyond the federal government in this area because of key factors, including federal legislation such as the Single Audit Act of 1984. The Single Audit Act of 1984 established financial audit requirements for state and local governments that receive federal financial assistance in excess of certain dollar thresholds in any fiscal year. The Single Audit Act also included a requirement for independent auditors to review whether state and local governments have adequate internal control over federal funds. The need to comply with this law prompted state and local governments to focus on improving financial management and accountability.
In the 1980s, many states began producing financial reports based on generally accepted accounting principles to provide a more complete picture of their financial situation. Consequently, in 1990, many in the financial management community believed that state and local governments were ahead of the federal government. To address the underlying problems that plagued federal financial management, in March 1986 Senator William Roth introduced S. 2230, the Federal Management Reorganization and Cost Control Act of 1986. Over the ensuing 4½ years, the concepts in S. 2230 were refined and debated. What resulted was the CFO Act of 1990. The CFO Act, with bipartisan and bicameral support, had as its principal sponsors Senators John Glenn and William Roth and Representatives John Conyers and Frank Horton. Signed into law by President George H.W. Bush on November 15, 1990, the CFO Act was the most comprehensive and far-reaching financial management improvement legislation since the Budget and Accounting Procedures Act of 1950. The CFO Act established a leadership structure, provided for long-range planning, required audited financial statements, and strengthened accountability reporting. The CFO Act was the beginning of a series of management reform legislation to improve the general and financial management of the federal government, and it laid the foundation for other key legislative reforms that followed a common thread of increased accountability and better management practices. The first legislation that followed the CFO Act was the Government Performance and Results Act of 1993 (GPRA), which requires agencies to develop strategic plans, set performance goals, and report annually on actual performance compared to goals. GPRA was followed by the Government Management Reform Act of 1994 (GMRA), which made permanent the pilot program in the CFO Act for annual audited agency-level financial statements, expanded this requirement to all CFO Act agencies, and established a requirement for the preparation and audit of governmentwide consolidated financial statements. The Federal Financial Management Improvement Act of 1996 (FFMIA) built on the foundation laid by the CFO Act by reflecting the need for CFO Act agencies to have systems that can generate reliable, useful, and timely information with which to make fully informed decisions and to ensure accountability on an ongoing basis. The Clinger-Cohen Act of 1996 (also known as the Information Technology Management Reform Act of 1996) set forth a variety of initiatives to support better decision making for capital investments in information technology, which has led to the development of the Federal Enterprise Architecture and better-informed capital investment and control processes within agencies and across government. The Accountability of Tax Dollars Act of 2002 (ATDA) required most executive agencies that are not otherwise required, or exempted by OMB, to prepare annual audited financial statements and to submit such statements to Congress and the Director of OMB. Lastly, the Department of Homeland Security Financial Accountability Act of 2004 added the Department of Homeland Security (DHS) to the list of CFO Act agencies. As shown in figure 1, these reforms, if successfully implemented, provide a solid basis for improving accountability of government programs and operations as well as routinely producing valuable cost and operating performance information.
The enactment of the CFO Act represented a broad-based recognition that federal financial management was in great need of fundamental reform. Key elements of the CFO Act require centralized financial management leadership, both governmentwide and at the agency level, and, as expanded by GMRA, agency-level and governmentwide annual audited financial statements. To facilitate stewardship and accountability at executive branch agencies, the CFO Act designated CFOs with broad responsibility for modernizing financial management systems, financial reporting, asset management, and strengthened internal control practices. The systematic measurement of performance, the development of cost information, and the integration of financial management systems are some of the financial management practices called for by the CFO Act that, if properly implemented, will significantly improve financial management throughout the federal government. Furthermore, the Act statutorily designates the 24 executive departments and agencies covered. These 24 departments and agencies represented 95 percent of net outlays in fiscal year 2004. Strong centralized leadership is essential to solving the government’s long-standing financial management problems. The CFO Act provided for such leadership by giving OMB broad new authority and responsibility for directing federal financial management, modernizing the government’s financial management systems, and strengthening financial reporting and internal control. The CFO Act also created a new position in OMB, the Deputy Director for Management, who serves as the government’s chief official responsible for financial management. In addition, the CFO Act established a new Office of Federal Financial Management (OFFM) in OMB to carry out the governmentwide financial management initiatives and responsibilities. To head this office, the CFO Act established the position of Controller, an individual who must possess demonstrated ability and practical experience in accounting, financial management, and financial systems. This individual has responsibility for handling the day-to-day operations of OFFM to ensure that financial operations are being properly carried out governmentwide. Executive-level leadership is a critical success factor for building a foundation of control and accountability that supports external reporting and performance management, which is needed to achieve the goals of the CFO Act. For this reason, an agency CFO must be a key figure in an agency’s top management team. The CFO Act stipulates that the CFO is either a presidential appointee or is appointed by the agency head and is assisted by a Deputy Chief Financial Officer. Both the CFO and the Deputy CFO generally must possess demonstrated ability in accounting, budget execution, financial and management analysis, and systems development, as well as practical experience in financial management practices in large governmental or business entities. Among the CFO’s responsibilities are developing and maintaining integrated accounting and financial management systems; directing, managing, and providing policy guidance and oversight of all agency financial management personnel, activities, and operations; and overseeing the recruitment, selection, and training of personnel to carry out agency financial management functions.
In addition, each CFO for the 24 agencies serves on the Chief Financial Officers Council, which regularly meets to advise and coordinate the activities of its members’ agencies on such matters as consolidation and modernization of financial systems. The CFO Act created the council and specified that it be chaired by OMB’s Deputy Director for Management; other members include OMB’s Controller and the Department of the Treasury’s Fiscal Assistant Secretary. The CFO Act, as expanded by GMRA, required that annual financial statements be prepared and audited for each CFO Act agency, covering all accounts and associated activities of each office, bureau, and activity of the agency. The CFO Act also requires that the financial statements prepared pursuant to the act be audited in accordance with applicable generally accepted government auditing standards. These audits are the responsibility of the Inspectors General but may be conducted by the Comptroller General, at his discretion, in lieu of the Inspectors General. Inspectors General may contract with independent public accountants to conduct financial statement audits. The federal government has made substantial progress in financial management in the 15 years since the enactment of the CFO Act. If I were to summarize in just a few words the environment in 2005 as compared to 1990, financial management has gone from the backroom to the boardroom. Achieving Cultural Change—Perhaps most importantly, we have seen true cultural change in how financial management is viewed; this has been accomplished through a great deal of hard work by OMB and the agencies and continued strong support and oversight by Congress. As I previously discussed, federal financial management had suffered from decades of neglect and an organizational culture that, for the most part, had not fully recognized the value of good financial management as a means of ensuring accountability and sound management. Although views about how an organization can change its culture vary considerably, the organizations we and others have studied identified leadership as the most important factor in successfully making cultural changes. Top management must be totally committed, in both words and deeds, to changing the culture and the fundamental way that business is conducted. At the top level, federal financial management reform has gained momentum through the committed support of top federal leaders. For example, improved financial performance is one of the six governmentwide initiatives in the President’s Management Agenda (PMA). Under this initiative, agency CFOs share responsibility—both individually and through the efforts of the CFO Council—for improving the financial performance of the government. To achieve the goals of the financial performance initiative, agencies must now have more timely and reliable financial information, improve the integrity of their financial activities, and have sound and dependable financial systems. In conjunction with the other governmentwide program initiatives of the PMA, the federal government is improving its financial reporting practices and overall accountability.
Establishing a Governmentwide Leadership Structure—As established by the CFO Act, OFFM, the OMB organization with governmentwide responsibility for federal financial management, has undertaken a number of initiatives aimed at improving financial management capabilities, ranging from requiring the use of commercial off-the-shelf financial systems to promoting cost accounting. In addition to assessing agency financial performance for the PMA, OFFM has issued financial management guidance to agencies. Some of OFFM’s initiatives are in collaboration with the CFO Council and are broad-based attempts to reform financial management operations across the federal government. While reforming federal financial management is an undertaking of tremendous complexity, it presents great opportunities for improvements in financial management and related business operations. In its efforts to continue financial management improvement, OFFM has recently collaborated with the CFO Council on initiatives in the following areas: internal control, full implementation of FFMIA, asset management, improper payments, and control over federal charge cards. Selecting Qualified CFOs—The CFO Act established CFOs at the major departments and agencies and established minimum qualifications for CFOs. Measured in terms of coming to the job with a proven track record in financial management, the background of individuals selected for these positions has improved tremendously over the past 15 years. For example, the CFO at the Department of Labor has held a range of CFO and CFO-related positions in the private sector and government over a 30-year career that included serving as Treasury’s CFO in a previous administration. Testifying with me today, Dr. Linda M. Combs, the Controller at OMB, brings impeccable credentials and extensive experience to the federal government’s financial management leadership and policy-setting organization and exemplifies today’s federal CFO. Improving Financial Management Systems and Operations—Since 1990, progress has been made toward improving financial management systems in the federal government. Improved agency financial management systems and operations are essential to support management decision making and results-oriented activities, as addressed by the CFO Act. At a minimum, federal managers must have financial information that is reliable, useful, and timely to support this effort. Federal financial management systems requirements have been developed for the core financial system, the managerial cost system, and 12 other administrative and programmatic systems (such as grants, property, revenue, travel, and loans) that are part of an overall financial management system. Beginning in 1999, OMB required agencies to purchase commercial off-the-shelf (COTS) software that had been tested and certified by the Joint Financial Management Improvement Program (JFMIP) Program Management Office against the systems requirements just mentioned, as well as the standard general ledger issued by the Department of the Treasury. With these requirements, the federal government has better defined the functionality needed in its financial management systems, which has helped the vendor community understand federal agencies’ needs. Concurrently, there has been an evolving realization that agencies need to change their business processes to adapt to the practices embedded in commercially available software rather than modifying the software to accommodate their existing practices.
Looking at financial management systems from another perspective, the federal government has acted on opportunities to consolidate operations. For example, a number of agencies perform accounting or business operations on behalf of others; consequently, the number of agencies processing payroll has been dramatically reduced from 22 to 4. According to OMB, through these initiatives, millions of dollars will be saved through shared resources and processes and by modernizing on a cross-agency, governmentwide basis. Further, OMB has established agency task forces focused on developing Centers of Excellence to (1) reduce the number of systems that each individual agency must support, (2) promote standardization, and (3) reduce the duplication of efforts. If we were to compare the state of financial management systems today to where agencies were 15 years ago, the evolution has been dramatic. On the other hand, systems are at the top of our list in terms of remaining challenges for the future. As I will discuss later, agencies continue to struggle with developing and implementing integrated systems that achieve expected functionality within cost and timeliness goals. Preparing Auditable Financial Statements—Most CFO Act agencies have obtained clean or unqualified audit opinions on their financial statements. Unqualified audit opinions on CFO Act agencies’ financial statements grew from 6 in fiscal year 1996 to 18 in fiscal year 2005. Improvements in timeliness have been even more dramatic this year. Agencies were able to issue their audited financial statements within the accelerated reporting time frame—all of the 24 CFO Act agencies issued their audited financial statements by the November 15, 2005, deadline set by OMB, just 45 days after the close of the fiscal year. The CFO Act calls for agency financial statements to be issued no later than March 31, which is 6 months after the fiscal year end, and in the earlier years some agencies were unable to meet that time frame. OMB has incrementally accelerated the financial statement issuance date to improve the timeliness of the information provided by the financial statements. Just a few years ago, most considered this accelerated time frame unachievable. While the increase in unqualified and timely opinions is noteworthy, we are concerned about the number of CFO Act agencies that have had to restate certain of their financial statements to correct errors. I will discuss these issues in more detail later in this statement. Preparing Performance and Accountability Reports—Another clear indication of progress to date is the preparation of annual Performance and Accountability Reports (PAR) by CFO Act agencies. The PARs provide financial and performance information that enables the President, the Congress, and the public to assess the performance of an agency relative to its mission and to demonstrate accountability. These reports summarize program, management, and financial performance data, combining the Annual Performance Reports required by GPRA with annual financial statements and other reports, such as agencies’ assurances on internal control, accountability reports by agency heads, and Inspectors General’s assessments of agencies’ most serious management and performance challenges. These reports serve as the federal government’s report to the American public and provide an accounting for the return on the taxpayers’ investment.
This information is also provided to decisionmakers who are interested in CFO Act agencies’ performance, such as OMB and the Congress. Furthermore, the Association of Government Accountants recognizes federal agencies for their high-quality performance and accountability reports through its annual awarding of the Certificate of Excellence in Accountability Reporting (CEAR). In the most recent evaluation of 21 agencies’ performance and accountability reports, 10 agencies were recognized for achieving excellence in their reports. Of particular note, the Social Security Administration and the Department of Labor have received the CEAR award for the past 7 and 5 years, respectively. As part of our effort to be a model agency, GAO has received the CEAR award since it first applied in 2001, and for the 19th consecutive year, independent auditors gave our financial statements an unqualified opinion with no material weaknesses and no major compliance problems. Strengthening Internal Control—Accountability is part of an organizational culture that goes well beyond receiving an unqualified audit opinion; the underlying premise is that agencies must become more results-oriented and focus on internal control. In December 2004, OMB revised its Circular No. A-123, Management’s Responsibility for Internal Control, to provide guidance to federal managers on improving the accountability and effectiveness of federal programs and operations by establishing, assessing, correcting, and reporting on management controls. Requiring federal managers, at the executive level, to focus on internal control demonstrates a renewed emphasis on identifying and addressing internal control weaknesses. Based on our 2005 assessment of high-risk programs, three programs previously designated as high risk, largely because of financial management weaknesses, were removed from the list. The Department of Education’s Student Financial Aid Programs, the Federal Aviation Administration’s Financial Management, and the Department of Agriculture’s Forest Service Financial Management all sustained improvements in financial management, corrected internal control weaknesses, and thus warranted removal. Further, as I testified before this subcommittee earlier this year, thousands of internal control problems were identified and fixed over the past two decades, especially at the lower levels, where internal control assessments were performed and managers could take focused actions to fix relatively simple problems. But, again, as I will discuss later, this type of work is far from complete. Developing New Accounting Standards—Another definitive example of progress made to date, as well as a critical component of improved financial performance, is the establishment of the Federal Accounting Standards Advisory Board (FASAB). In conjunction with the passage of the CFO Act, the OMB Director, the Secretary of the Treasury, and the Comptroller General established FASAB to develop accounting standards and principles for the newly required financial statements of federal agencies and other federal entities. FASAB is a 10-member advisory board composed of 4 knowledgeable individuals from government and 6 nonfederal members selected from the general financial community, the accounting and auditing community, and academia; it promulgates proposed accounting standards designed to meet the needs of federal agencies and other users of federal financial information.
The mission of FASAB is to develop accounting standards after considering the financial and budgetary information needs of congressional oversight groups, executive agencies, and other users. These accounting and reporting standards are essential for public accountability and for the efficient and effective functioning of our democratic system of government. To date, FASAB has issued 30 statements of federal financial accounting standards (SFFAS) and 4 statements of federal financial accounting concepts (SFFAC). The concepts and standards are the basis for OMB’s guidance to agencies on the form and content of their financial statements and for the government’s consolidated financial statements. The standards developed by FASAB have been recognized by the American Institute of Certified Public Accountants as generally accepted accounting standards for federal entities. While there has been marked progress in financial management, as I have just highlighted, a number of challenges still remain. The principal challenges remaining are (1) modernizing financial management systems, (2) improving financial reporting, (3) building a financial management workforce for the future, (4) addressing long-standing internal control weaknesses, and (5) ensuring the continuity of financial management reform. Fully meeting these challenges will enable the federal government to provide the world-class financial management anticipated by the CFO Act. While there continues to be much focus on the agency and governmentwide audit opinions, getting a clean audit opinion, though important in itself, is not the end goal. The end goal is the establishment of a fully functioning CFO operation that includes (1) modern financial management systems that provide reliable, timely, and useful information to support day-to-day decision making and oversight and the systematic measurement of performance; (2) a cadre of highly qualified CFOs and supporting staff; and (3) sound internal controls that safeguard assets and ensure proper accountability. First and foremost, agencies must take full advantage of modern technology and develop financial management systems that are integrated with the range of other business systems. The federal landscape is littered with far too many unsuccessful financial management system implementation efforts. Most notable has been the Department of Defense (DOD), where billions of dollars have been invested in financial management systems with little return on the investment. DOD has historically been unable to develop and implement business systems on time, within budget, and with the promised capability. For example, we recently reported that the Department of the Navy spent approximately $1 billion on four largely failed pilot Enterprise Resource Planning (ERP) system efforts, without marked improvement in its day-to-day operations. The Navy now has under way a new ERP project, which early Navy estimates indicate will cost another $800 million. While the new project, as currently envisioned, has the potential to address some of the Navy’s financial management weaknesses, it will not provide an all-inclusive, end-to-end corporate solution for the Navy. Further, there are still significant challenges and risks ahead as the project moves forward, such as developing and implementing 44 system interfaces with other Navy and DOD systems and converting data from legacy systems into the ERP system.
The results of the fiscal year 2005 assessments performed by agency inspectors general or their contract auditors under FFMIA show that these problems continue to affect financial management systems at most of the 24 CFO Act agencies. While the problems are much more severe at some agencies than at others, their nature and severity indicate that, overall, managers at most CFO Act agencies do not yet receive the complete range of information needed for accountability, performance management and reporting, and decision making. As we testified in September 2005, managerial cost accounting essentially entails answering very simple questions, such as how much it costs to do whatever is being measured, thus allowing assessments of whether those costs seem reasonable. In other cases, it could involve establishing a baseline for comparison with what it costs others to do similar work or achieve similar performance. To date, accumulating and analyzing the relevant financial and nonfinancial data to determine the cost of achieving performance goals, delivering programs, and carrying out discrete activities has proven difficult. Among the barriers standing in the way of this enhanced data are nonintegrated financial systems; lack of accurate and timely recording of data; inadequate reconciliation procedures; noncompliance with accounting standards, including the cost management standard; and failure to adhere to the U.S. Government Standard General Ledger (SGL). What is most important is that the problem has been recognized. Across government, agencies have efforts under way to implement new financial management systems or to upgrade existing systems. Agencies expect that the new systems will provide reliable, useful, and timely data to support day-to-day managerial decision making and assist taxpayer and congressional oversight. Whether in government or the private sector, implementing and upgrading information systems is a difficult job and brings a degree of new risk. Organizations that follow and effectively implement accepted best practices in systems development and implementation (commonly referred to as disciplined processes) can manage and reduce these risks to acceptable levels; organizations that do not typically suffer the consequences. For example, our work at DOD and the National Aeronautics and Space Administration (NASA) this past year has shown that these agencies, which have experienced significant problems in the past in implementing new financial management systems, were not following the disciplined processes necessary for efficient and effective development and implementation of such systems. As I mentioned earlier, NASA is on its third attempt to implement a new financial management system. The first two attempts, which cost $180 million, failed, and the current system initiative, which is expected to cost close to $1 billion, has experienced problems. As we pointed out in recent testimony before this subcommittee, many of the problems NASA has been experiencing with its financial management system stemmed from inadequate implementation of disciplined processes. As the federal government moves forward with ambitious financial management system modernization efforts to identify opportunities to eliminate redundant systems and enhance information reliability and availability, adherence to disciplined processes is a crucial element in reducing risks to acceptable levels.
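As a rough illustration of the managerial cost accounting idea described above, answering “how much does it cost?” reduces to accumulating the full cost of an activity, dividing by its output, and comparing the result against a baseline. The sketch below shows that arithmetic; the activities, costs, and baseline figures are invented for illustration and do not come from any agency’s cost system.

```python
# Simplified illustration of managerial cost accounting: compute the
# unit cost of an activity and compare it to a baseline. All figures
# below are hypothetical.
activities = {
    # activity: (total cost accumulated from the cost system, units of output)
    "claims processed": (1_200_000.00, 48_000),
    "permits issued":   (350_000.00, 2_500),
}
baselines = {"claims processed": 22.00, "permits issued": 150.00}  # cost per unit

for name, (total_cost, units) in activities.items():
    unit_cost = total_cost / units
    delta = unit_cost - baselines[name]
    print(f"{name}: ${unit_cost:,.2f} per unit "
          f"({'+' if delta >= 0 else '-'}${abs(delta):,.2f} vs. baseline)")
```

The hard part, as the barriers listed above suggest, is not the arithmetic but producing the reliable, reconciled cost and output data that feed it.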
In the area of financial reporting, we see two challenges: (1) the elimination of restatements and (2) greater transparency in financial reporting. Many CFO Act agencies have obtained clean or unqualified audit opinions on their financial statements, but the underlying agency financial systems and controls still have some serious problems. This manifested itself last year, when a number of CFO Act agencies had to restate their financial statements. As we have previously testified, at least 11 of the 23 CFO Act agencies restated their fiscal year 2003 financial statements, and 5 CFO Act agencies restated their fiscal year 2002 financial statements. The restatements to CFO Act agencies’ fiscal year 2003 financial statements ranged from correcting 2 line items on an agency’s balance sheet to correcting numerous line items on several of another agency’s financial statements. The amounts of the agencies’ restatements ranged from several million dollars to over $91 billion. Nine of those 11 agencies received unqualified opinions on their financial statements originally issued in fiscal year 2003. Seven of the 9 auditors issued unqualified opinions on the restated financial statements, which in substance replace the auditors’ opinions on their respective agencies’ original fiscal year 2003 financial statements. For 2 of these 9 agencies, the auditors not only withdrew their unqualified opinions on the fiscal year 2003 financial statements but also issued other than unqualified opinions on their respective agencies’ restated fiscal year 2003 financial statements because they could not determine whether there were any additional misstatements and the effect that these could have on the restated fiscal year 2003 financial statements. We have reported on some of these agency restatements and have work ongoing at a number of other agencies to more fully understand the issues surrounding these restatements. We have not yet had a chance to look in any depth at restatements for fiscal year 2005 because fiscal year 2005 financial statements, which would identify any such restatements, were just issued by the deadline of November 15. We did note, though, that there were a number of restatements. The second challenge is that current financial reporting does not clearly and transparently show the wide range of responsibilities, programs, and activities that may either obligate the federal government to future spending or create an expectation for such spending. The current financial reporting model provides information on financial position and changes in such position during the year, as well as budget results. However, more important than current and short-term deficits, we face large and growing structural deficits in the future, due primarily to known demographic trends, rising health care costs, and relatively low federal revenues as a percentage of the economy. Our nation’s current fiscal path is unsustainable, and failure to highlight, analyze, and appropriately respond to the resulting long-term consequences could have significant adverse consequences for our future economy and standard of living. While the Statement of Social Insurance will provide long-term information for those specific programs, in our view, more comprehensive reporting is necessary to fully and fairly reflect the nation’s longer-term fiscal challenges. Consequently, a top priority should be communicating important information to users about the long-term financial condition of the U.S. government and annual changes therein.
Furthermore, FASAB recognized that tax expenditures, which can be large in relation to spending programs that are measured under federal accounting standards, may not be fully considered in entity reporting. Reporting information on tax expenditures would ensure greater transparency of, and accountability for, such expenditures. Changing the way business is done in a large, diverse, and complex organization like the federal government is not an easy undertaking. According to a survey of federal CFOs, federal finance organizations of the future will have fewer people, with a greater percentage of analysts as opposed to accounting technicians. Today, however, most functions within federal finance organizations are focused primarily on (1) establishing and administering financial management policy; (2) tracking, monitoring, and reconciling account balances; and (3) ensuring compliance with laws and regulations. While the CFOs surveyed recognize the need for change, many questions remain unanswered regarding how best to facilitate such changes. When it comes to world-class financial management, our study of nine leading private and public sector financial organizations found that leading financial organizations often had the same or similar core functions (budgeting, treasury management, general accounting, and payroll) as the federal government. However, the way these functions were put into operation varied depending on individual entity needs. Leading organizations reduced the resources required to perform routine financial management activities by (1) consolidating activities at a shared service center and (2) eliminating or streamlining duplicative or inefficient processes. Their goal was not only to reduce the cost of finance but also to organize finance to add value by reallocating finance resources to more productive and results-oriented activities, such as measuring financial performance, developing managerial cost information, and integrating financial systems. The federal financial workforce that supports the business needs of today is not well positioned to support the needs of tomorrow. A JFMIP study indicated that a significant majority of the federal financial management workforce performs transaction support functions of a clerical and technical nature. These skills do not support the vision of tomorrow’s business, which will depend on an analytic financial management workforce providing decision support. The financial management workforce plays a critical role in government because the scale and complexity of federal activities requiring financial management and control are monumental. Building a world-class financial workforce will require a workforce transformation strategy devised in partnership between CFOs and agency human resource leaders, now established in law as Chief Human Capital Officers, working with OMB and the Office of Personnel Management. Agency financial management leadership must identify current and future required competencies and compare them to an inventory of the skills, knowledge, and abilities of current employees. Then, they must strategically manage to fill gaps and minimize overages through informed hiring, development, and separation strategies.
This is similar to the approach we identified when we designated strategic human capital management as a high-risk area in 2001, recognizing that agencies, working with Congress and OPM, must assess future workforce needs and determine strategies to meet those needs, especially in light of long-term fiscal challenges. Achieving the financial management vision of the future will be directly affected by the workforce that supports it. Earlier, I noted that while important progress in strengthening internal control has been made, the federal government faces numerous internal control problems, some of which are long-standing and well-documented at the agency level and governmentwide. As we have reported for a number of years in our audit reports on the U.S. government’s consolidated financial statements, the federal government continues to have material weaknesses and reportable conditions in internal control related to property, plant, and equipment; inventories and real property; liabilities and commitments and contingencies; cost of government operations; and disbursement activities, to mention just a few of the problem areas. As an example, consider DOD, which has many known material internal control weaknesses. Of the 25 areas on GAO’s high-risk list, 14 relate wholly or partially to DOD, particularly its financial management problems. Overhauling DOD’s financial management controls and operations represents a challenge that goes far beyond financial accounting to the very fiber of DOD’s range of business operations, management information systems, and culture. Although the Secretary of Defense and several key agency officials have shown commitment to transformation, as evidenced by key initiatives such as the Business Management Modernization Program and the Financial Improvement and Audit Readiness Plan, little tangible evidence of significant, broad-based, and sustainable improvements has been seen in DOD’s business operations to date. For DOD to successfully transform its business operations, it will need a comprehensive and integrated business transformation plan; people with the skills, responsibility, and authority to implement the plan; an effective process and related tools, such as a business enterprise architecture; and results-oriented performance measures that link institutional, unit, and individual personnel goals and expectations to promote accountability for results. As I testified before you in February 2005, we support OMB’s efforts to revitalize internal control assessments and reporting through the December 2004 revisions to Circular No. A-123. These revisions recognize that effective internal control is critical to improving federal agencies’ effectiveness and accountability and to achieving the goals established by Congress. They also considered the internal control standards issued by the Comptroller General, which provide an overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. Effective internal control, as envisioned in the newly revised Circular No. A-123, inherently includes a successful strategy for addressing improper payments. Our prior work has demonstrated that attacking improper payment problems requires a strategy appropriate to the organization involved and its particular risks.
We have found that entities using successful strategies to address their improper payment problems shared a common focus of improving the internal control system—the first line of defense in safeguarding assets and preventing and detecting errors and fraud. Moreover, as we testified before this subcommittee in July of this year, even though progress has been made, certain agencies have not yet performed risk assessments of all their programs and/or estimated improper payments for their respective programs. In my February 2005 testimony, I pointed out six issues critical to effectively implementing the changes to Circular No. A-123, specifically the need for:
1. development of supplemental guidance and implementation tools to help ensure that agency efforts are properly focused and meaningful;
2. vigilance over the broader range of controls covering program objectives;
3. strong support from managers throughout the agency, at all levels;
4. risk-based assessments and an appropriate balance between the costs and benefits of controls;
5. management testing of controls in operation to assess whether they are designed adequately and operating effectively, and to assist in formulating corrective actions; and
6. management accountability for control breakdowns.
Since that time, in July 2005, the CFO Council, in collaboration with the President’s Council on Integrity and Efficiency (PCIE) and OMB, issued an implementation guide to assist departments and agencies in addressing the Circular No. A-123 requirements related to internal control over financial reporting. As I mentioned earlier, this is a positive first step toward helping the federal government clearly articulate its objectives and criteria for measuring whether the objectives of Circular No. A-123 have been successfully achieved. Equally important will be the rigor with which these criteria are applied. The federal government has always faced the challenge of sustaining the momentum of transformation because of the limited tenure of key administration officials. The current administration’s PMA has served as a driver for governmentwide financial management improvements. It has been clear from the outset that the current administration is serious about improved financial management. We have been fortunate that, since the passage of the CFO Act, all three administrations have been supportive of financial management reform initiatives. And, as I discussed earlier, we have seen a positive cultural shift in the way the federal government conducts business. Given the long-term nature of the comprehensive changes needed and the challenges still remaining to fully realize the goals of the CFO Act, it is unlikely they will all be met before the end of the current administration’s term. Therefore, sustaining a commitment to transformation in future administrations will be critical to ensuring that key management reforms such as the CFO Act are fully attained. In closing, over the past 15 years, we have seen continuous movement toward the ultimate goals of accountability laid out in the CFO Act. I applaud the CFO and audit communities for the tremendous progress that has been made. While early on some were skeptical, the CFO Act has dramatically changed how financial management is carried out and the value placed on good financial management across government.
Sound decisions on the current results and future direction of vital federal programs and policies, while never easy, are made less difficult when decision makers have timely, reliable, and useful financial and performance information. Across government, financial management improvement initiatives are under way that, if effectively implemented, have the potential to greatly improve the quality of financial management information. Proper accounting and financial reporting practices are even more essential at the federal level than they were 15 years ago, given the difficult spending challenges and the long-term fiscal condition of the federal government. Further, I want to reiterate the value of sustained congressional interest in these issues, as demonstrated by this subcommittee’s leadership. It will be key that, going forward, the appropriations, budget, authorizing, and oversight committees hold agency top leadership accountable for resolving remaining problems and that they support improvement efforts that address the challenges for the future that I highlighted today. The federal government has made tremendous progress in the past 15 years, and sustained congressional attention has been, and will continue to be, a critical factor in ensuring achievement of the CFO Act’s goals and objectives. Mr. Chairman, this completes my prepared statement. I want to thank you for the opportunity to participate in this hearing and for the strong support of this Subcommittee in addressing the need for financial management reform and accountability. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For information about this statement, please contact Jeffrey C. Steinhoff at (202) 512-2600 or McCoy Williams, Director, Financial Management and Assurance, at (202) 512-6906 or [email protected]. Individuals who made key contributions to this testimony include Felicia Brooks, Kay Daly, and Chanetta Reed. Numerous other individuals made contributions to the GAO reports cited in this testimony.
In 1990, the Chief Financial Officers (CFO) Act, heralded as the most comprehensive financial management reform legislation in 40 years, was enacted. The Act's goal is to improve management through reliable, useful, and timely financial and performance information for day-to-day decision making and accountability. This testimony outlines the legislative history of the CFO Act and its key elements, progress to date in implementing the Act, and the challenges for the future. Prior to passage of the CFO Act, the seemingly never-ending disclosures of fraud, waste, abuse, and mismanagement in federal programs painted a picture of a government unable to manage its programs, protect its assets, or provide taxpayers with the effective and economical services they expect. The enactment of the CFO Act represented a broad-based recognition that federal financial management was in great need of fundamental reform. The Act mandated a financial management leadership structure; required the preparation and audit of annual financial statements; called for modernized financial management systems and strengthened internal control; and required the systematic measurement of performance, the development of cost information, and the integration of program, budget, and financial systems. In the 15 years since the enactment of the CFO Act, the federal government has made substantial progress in strengthening financial management. The past 3 administrations have all made financial management reform a priority. Improved financial management has been one of the cornerstones of the President's Management Agenda from the outset of the current administration. There has been a clear cultural change in how financial management is viewed and carried out in the agencies and a recognition of the value of and need for good financial management throughout government, which was not the case in 1990. There are now qualified CFOs across government who bring to the job proven track records in financial management. Financial management systems and internal control have been strengthened. Generally accepted government accounting standards have been developed. For fiscal year 2005, 18 of 24 CFO Act agencies received clean audit opinions on their financial statements, up from just 6 in fiscal year 1996. This year's audited financial statements were issued just 1-1/2 months after the close of the fiscal year, as opposed to the 5 months allowed as the deadline in the Act. Agencies are also now preparing performance and accountability reports that tie together financial and performance information. Though not yet auditable, primarily because of problems at the Department of Defense, comprehensive annual consolidated financial statements are being issued in 2-1/2 months, as opposed to the 6-month time frame allowed in the Act. While there has been marked progress in the past 15 years and the CFO Act has proven itself as the foundation for financial accountability, GAO has identified five principal challenges to fully realizing the world-class financial management anticipated by the CFO Act.
The need to: (1) modernize and integrate financial management systems to provide a complete range of financial and cost information needed for accountability, performance reporting, and decisionmaking, with special emphasis on the Department of Defense, which has deeply rooted systems problems, (2) build a more analytic financial management workforce to support program managers and decisionmakers, (3) solve long-standing internal control weaknesses, (4) enhance financial reporting to provide a complete picture of the federal government's overall performance, financial condition, and future fiscal outlook, and (5) ensure that financial management reform is sustained given the leadership changes that occur at the end of any administration and the long-term nature of many of the ongoing reform initiatives. The continuing strong support of the Congress has been a catalyst to the important progress that has already taken place and will be essential going forward.
Since 2007, a series of overlapping food, fuel, and financial crises have negatively affected the world economy, prompting the IFIs to respond. (See fig. 1.) Between 2007 and 2009, a financial crisis in the U.S. subprime mortgage market gradually extended to other developed countries and became a global financial crisis, which in turn generated a global economic crisis impacting developed and developing countries—including LICs—with varying degrees of intensity. Prior to, and partly in parallel with, the financial crisis, global food and fuel prices increased sharply between mid-2007 and mid-2008. In some cases this exacerbated the impact of the financial crisis, and some governments banned food exports. Although international food and fuel prices dropped precipitously as the global financial crisis unfolded in late 2008, they resurged to record highs in 2011, igniting concerns about a repeat of the 2008 food crisis and its negative impact on poor people in developing countries, particularly LICs. IFIs and global leaders have launched a coordinated international response to the crises. In March 2009, the World Bank announced a comprehensive crises response framework that could channel additional donor contributions to poor countries to bolster ongoing World Bank activities. To help ensure that IFIs would have sufficient resources to respond to the crises, in April 2009 the Group of Twenty (G-20) world leaders committed to measures designed to increase IFI resources available to LICs, including through voluntary bilateral contributions to the World Bank's crises response framework. The G-20 also endorsed the IMF's intention to increase financing for LICs, including from resources derived from proceeds of IMF gold sales. LICs, the majority of which are in sub-Saharan Africa, had a combined population of about 810 million people at the outset of the crises in 2007. Most LICs depend to some extent upon imports of food and fuel, and about half are classified as fragile states challenged by weak capacity, poor governance, political instability, ongoing violence, or the legacy of past conflict, as shown in fig. 2. According to IFIs, these factors can render countries vulnerable to crises driven by fluctuations in international food and fuel prices. In food-importing countries, imports of basic foodstuffs outweighed exports over the past 3 years, according to the UN Food and Agriculture Organization. We define net energy consumers as those countries for which net energy production was less than consumption in 2009. The World Bank uses the term "fragile" to refer to countries with particularly weak policies and institutions, as well as those with the presence of a UN or regional peace-keeping or peace-building mission during the past 3 years. The World Bank and IMF provide financial and technical assistance to member countries. Two World Bank institutions—IDA and IFC—assist LICs. IDA, the World Bank's primary financier of LICs, provides no-interest loans and grants to eligible countries that have limited or no access to international credit markets. IDA funds long-term programs in agriculture, infrastructure, and social services such as health and education, and provides technical assistance for programs in economic and institutional development to strengthen country policies and institutional capacity.
Commitments to these programs are disbursed at different rates depending on a number of factors, including recipient country capacity and whether the project is an investment lending project or a development policy lending project. The rate of disbursements is important because committed funds cannot be used by recipient countries until the funds are disbursed. IFC provides investments and advisory services to build the private sector in developing countries, including LICs. The IMF provides economic surveillance, lending, and technical assistance to its member countries. IMF surveillance involves the monitoring of economic and financial developments and the provision of policy advice. The primary purpose of IMF lending is to assist countries facing balance-of-payments difficulties, and IMF loans to LICs are intended to help foster economic growth and reduce poverty. IMF lending is conditional upon borrowing countries' implementation of policies. To help countries manage their economies, the IMF provides guidance and training on how to strengthen institutions and design appropriate macroeconomic, financial, and structural policies. Since 1996 the World Bank and IMF have participated in bilateral and multilateral efforts to relieve the debt burdens of poor countries to help them achieve long-term economic growth and debt sustainability, meaning they can make their future debt payments on time without rescheduling. To assess how a country's current level of debt and prospective new borrowing affect its ability to service its debt in the future, the World Bank and IMF jointly conduct a debt sustainability analysis (DSA). DSAs include an analysis of a country's projected debt burden over the next 20 years and its vulnerability to shocks. In 2009, we reported that the World Bank and IMF had improved their DSAs, including by considering the strength of a country's policies and institutions, and that the DSAs identified numerous ambitious actions countries should take in order to avoid future unsustainable debt levels. The food, fuel, and financial crises negatively impacted LIC economies, but the slowdown in growth was less than that experienced by advanced economies. Our analysis shows that during the crises period of 2007 through 2009, key economic indicators slowed or declined for 38 LICs compared to the pre-crises period of 2004 through 2006. While the average annual growth rate in real gross domestic product (GDP), or national income, for the 38 LICs remained positive during the crises period, it declined by an average of about 1 percentage point, dropping from an average of 7.1 percent during the pre-crises period to an average of 6.2 percent during the crises period. The largest decline occurred between 2007 and 2009, when real GDP growth fell nearly 2 percentage points, from 7.1 percent to 5.3 percent. (See fig. 3.) Nine countries experienced an actual decline in their real GDP in 2008 or 2009: Cambodia, Chad, Eritrea, Guinea, Madagascar, Mauritania, Niger, Solomon Islands, and Zimbabwe. The slowdown in the growth rate of real GDP in LICs was milder than the downturn in real GDP growth experienced by advanced economies during the crises, whose growth declined from 2.7 percent in 2007 to -3.4 percent in 2009. According to the IMF, the LICs' period of growth prior to the crises provided a cushion, helping many countries weather the food and fuel price increases between 2007 and 2008 and the global financial crisis. However, IFIs have reported that lower growth rates caused by the crises could lead to increases in poverty in LICs.
Moreover, the slowdown in real GDP growth occurred while LICs' inflation was rising. The average annual inflation rate for 38 LICs increased from 8.6 percent during the pre-crises period to an average of 11.6 percent during the crises, peaking at nearly 14 percent in 2008. (See fig. 4.) Higher food and fuel prices contributed to rising inflation. World food prices were stable from 2001 to the beginning of 2007 and then climbed steeply during 2007 and 2008. After a brief downturn in the latter part of 2008, world food prices began rising again, resulting in a net increase in prices of about 74 percent between January 2007 and May 2011. (See fig. 5.) Similarly, world crude oil prices rose sharply during 2007 and 2008 and, after receding through early 2009, rose again through May 2011, resulting in a net price increase of over 99 percent compared to January 2007. (See fig. 6.) While the human and social development impacts of higher prices vary by country, the resurgence in prices has triggered renewed concern. Our previous work shows that many LICs were experiencing protracted food emergencies and had severe and widespread malnourishment even prior to the onset of the crises. In April 2011, the IFIs warned that the resurgence of higher food prices was increasing the cost of food imports in LICs, aggravating existing balance-of-payments problems and putting pressure on government budgets. Moreover, in July 2011, the UN World Food Program declared a food crisis in eastern Africa due mainly to low domestic harvests resulting from consecutive droughts. The average current account (trade) deficit-to-GDP ratio for the 38 LICs increased from 3.6 percent in 2007 to 5.4 percent in 2008, then decreased to 4.2 percent in 2009, as shown in figure 7. Twenty-eight countries experienced a widening of their current account deficit-to-GDP ratio during the crises period compared to the pre-crises period. Moreover, the fiscal deficit-to-GDP ratio increased from an average of 2.6 percent in 2007 to 3.7 percent in 2009 for 37 LICs, attributable more to rising expenditures than to declining revenues. The fiscal deficit-to-GDP ratio for the LICs averaged 1.8 percent between 2004 and 2006 and nearly doubled to 3.2 percent between 2007 and 2009. The IMF reported that most LICs adopted a countercyclical fiscal response, such as preserving or expanding spending to support the economy and protect the poor. The growth rate of real primary expenditures accelerated, leading to a widening of the fiscal deficit-to-GDP ratio. The IMF also reported that LICs could increase spending, in part, because they had established sufficiently strong fiscal positions before the crises began. Net foreign direct investment inflows for 39 LICs declined by 17 percent during 2009, ending a generally steady increase since 2000, as shown in fig. 8. Twenty-three of the 39 countries, or nearly 60 percent, had lower net foreign direct investment inflows in 2009 compared to 2007. In response to the crises, we found that IDA met its goal to increase the amount of financial assistance and partly met its goal to increase the speed of disbursements to LICs, but the impact of IDA's actions on LIC government spending has been difficult to establish. The IFC increased assistance to LICs, but its response was limited by capacity constraints and is difficult to measure. The IMF significantly boosted financial assistance to LICs, but its contribution to LIC government spending increases during the crises has been difficult to establish.
IDA met its goal to increase the amount of financial assistance and partly met its goal to increase the speed of disbursements to LICs. In addition, IDA reported that its crises response initiatives supported LIC government spending, but we found that the impact has been difficult to establish. The World Bank responded to crises in LICs through regular IDA lending and by establishing initiatives. The World Bank committed a total of $18.1 billion in IDA funds to LICs during the crises response period between 2008 and 2010 through both regular IDA lending and crisis response initiatives. This represented an increase in new commitments of approximately $5 billion, or 39 percent, as compared to commitments made between 2005 and 2007. These resources were part of a fixed 3-year allocation, replenished in 2008 prior to the onset of the global economic crisis. This allocation represented an increase of 12.8 percent as compared to the 2005-2008 IDA allocation and was intended in part to help countries achieve the Millennium Development Goals (MDGs). To respond to the crises within the context of this fixed set of resources, IDA could choose to shift its priorities toward crisis response, accelerate disbursements of existing IDA funds, or provide additional funds from donors or internal resources. To complement regular IDA lending during the crises, the World Bank established a crisis response framework comprising five initiatives that committed $12.2 billion in financial assistance to LICs between 2008 and 2010. These commitments included $10.8 billion in existing IDA funds and $1.4 billion in new financial assistance. The five initiatives are:
• the Global Food Crisis Response Program (food program), established in May 2008 to help countries reduce the impact of high food prices on the poor by providing rapid financial assistance, policy advice, and social protection services such as food stamps and school feeding programs for the most vulnerable;
• the IDA Fast Track Facility, established in December 2008 to help countries offset the impacts of the financial crisis on governments' budget expenditures, including social and infrastructure programs;
• the Rapid Social Response Program (social protection program), established in April 2009 to help countries mitigate the impacts of crises by promoting social protection programs through rapid financing of immediate interventions in safety nets and other areas, and by improving capacity needed to establish and implement effective safety net systems;
• the Infrastructure Recovery and Assets Platform (infrastructure program), established in March 2009 to help countries mitigate the impacts of the crises by supporting critical infrastructure investments and new project development and implementation; and
• the Pilot Crisis Response Window, established in November 2009 to help reduce the need for countries to make tradeoffs between financing crises response efforts or long-term development programs by providing new financing that was additional to countries' existing IDA funds.
In response to the crises, IDA committed $1.4 billion in new financial assistance to 36 LICs between 2008 and 2010. Four of the five initiatives—the food, social protection, and infrastructure programs and the Pilot Crisis Response Window—aimed to increase the amount of financial assistance available to LICs using additional donor contributions and internal World Bank resources.
We found that IDA committed $1.1 billion in new financial assistance to LICs through the Pilot Crisis Response Window and $288 million in new grant assistance to LICs through the food and social protection programs. The World Bank reported that governments' requests for grants under the social protection program, to establish and enhance safety net systems, significantly exceeded the availability of new resources. By April 2011, LICs had submitted 133 project proposals totaling $161 million against available funding of about $58.5 million. These projects were to establish or enhance social protection activities and safety net systems benefiting the poorest, as well as to improve the data and institutional capacity necessary for effective implementation. The World Bank further reported that it has established a permanent Crisis Response Window, effective July 2011, that could be used to continue to fund activities supporting both crises response and preparedness in IDA-eligible countries, including LICs. Finally, donors did not provide additional funding to the infrastructure initiative, which was originally designed to provide up to $3 billion to help offset the impact of soaring energy prices. Four of the five IDA initiatives—the Fast Track Facility, food program, social protection program, and Pilot Crisis Response Window—were designed to increase the disbursement speed of commitments made from existing IDA funds. While disbursement rates are a useful metric for capturing the World Bank's response to the immediate needs of recipient countries through the crises response initiatives, we recognize that there are other, less quantifiable considerations for assessing the impacts and effectiveness of development assistance. These initiatives were also designed to increase the speed of project preparation and processing, which occurs prior to project approval. According to the World Bank, preparation time was reduced during the crisis for both investment lending and development policy lending programs. To determine whether the disbursement speed of commitments made through the crises response initiatives had increased, we compared the first year disbursement rates for each initiative to the first year disbursement rate of projects approved from 2008 through 2010 that did not fall under any initiative. The World Bank reports disbursement rates using a different methodology; we did not use the World Bank's standard methodology because our analysis sought to isolate those activities which were explicitly undertaken in response to the crises. These initiatives committed approximately $3.9 billion to 32 LICs from existing IDA funds. Three of the four initiatives increased the speed of disbursements. The infrastructure program, which committed $6.9 billion in existing IDA funds in addition to the $3.9 billion, did not have a goal to increase the speed of disbursements. More specifically:
• The first year disbursement rate was 69.1 percent for the Fast Track Facility, compared to a first year disbursement rate of 33.5 percent for projects that were not funded through an initiative.
• The first year disbursement rate was 64.5 percent for the food program, compared to a first year disbursement rate of 33.5 percent for projects that were not funded through an initiative. However, we found that almost half of the commitments made through existing IDA funds, about $405 million, went to three projects in Ethiopia and Bangladesh.
When these projects are excluded from the analysis, the disbursement rate for this initiative declines to 39 percent.
• The first year disbursement rate for the social protection program was 34.1 percent, slightly higher than the first year disbursement rate of 33.5 percent for projects that were not funded through an initiative. However, the social protection program did not increase the speed of disbursements when compared to social protection projects approved during the pre-crises period, which had a first year disbursement rate of 47.8 percent. According to the World Bank, increasing the speed of disbursements for social protection programs in LICs has been challenging due to a lack of existing social protection programs and recipient countries' limited capacity to effectively implement them. To address this challenge and facilitate crises preparedness in LICs, the World Bank intends to continue to finance the development of social protection programs. In early 2011, donors emphasized the importance of a continued focus on capacity building and improved data collection in LICs to help overcome these constraints.
• The Pilot Crisis Response Window did not increase the disbursement speed of commitments, with a first year disbursement rate of 27.5 percent, compared to a first year disbursement rate of 33.5 percent for projects that were not funded through an initiative (see fig. 9).
In 2009, the Development Committee, an advisory group to the World Bank and IMF, also urged the acceleration of the delivery of financial assistance to recipient countries. However, we found that the World Bank did not accelerate disbursements for either investment lending projects or development policy lending projects for the group of LICs as a whole, even though a majority of countries received disbursements faster during the crises response period. Specifically, the average first year disbursement rate to LICs was 16.8 percent for all investment lending projects approved from 2008 through 2010, as compared to an average first year disbursement rate of 17.3 percent for all investment lending projects approved from 2005 through 2007. Similarly, the average first year disbursement rate to LICs was 90.7 percent for all development policy lending projects approved from 2008 through 2010, as compared to an average first year disbursement rate of 96.9 percent for all development policy lending projects approved from 2005 through 2007. Overall, the average first year disbursement rate to LICs was 31.1 percent for all projects approved between 2008 and 2010, as compared to an average first year disbursement rate of 39.3 percent for all projects approved between 2005 and 2007, a difference of about 8 percentage points. According to the U.S. Treasury, this decline in part reflects the World Bank's need to ensure that recipient country capacity and governance controls were sufficiently robust to absorb the additional resources provided during the crisis period. However, at the individual country level, total commitments in 22 of 36 LICs, including commitments made to both investment lending projects and development policy lending projects, were disbursed faster during the crises period than during the pre-crises period. Disbursement rates, which vary over time, depend on a number of factors, including recipient country capacity, need, and governance, and the type of lending.
For example, commitments to Burundi, a fragile state with limited capacity, increased by 114 percent while disbursements increased by 46 percent, which results in a lower disbursement rate during the crises response period: because disbursements grew more slowly than commitments, the ratio of disbursements to commitments fell to roughly two-thirds (1.46/2.14) of its pre-crises level. (See fig. 10.) LIC governments reported mixed experiences relating to the timeliness of the World Bank's response to crises. Some governments said they received financial support very rapidly, while others noted that World Bank support had been sluggish. For a more detailed analysis of World Bank commitments and disbursement rates to individual countries, see appendix III. The World Bank reported that speed was facilitated by the Bank's new rapid response policy and increased use of development policy lending where circumstances permitted. In addition, World Bank and U.S. Treasury officials reported that the restructuring of existing lending portfolios facilitated an expeditious response in some countries. Two initiatives were designed to support domestic spending in recipient countries during the crises, in areas including social services, education, and infrastructure. In 2010, the World Bank reported that both initiatives—the Fast Track Facility and the infrastructure program—met this goal. For example, the World Bank reported that the Fast Track Facility operation in the Democratic Republic of Congo prevented the government from having to cut essential social spending or resort to inflationary spending. Similarly, the World Bank reported that the infrastructure program supported domestic spending and helped to mitigate the direct impacts of the crisis. However, as we previously reported, IFIs do not independently track developing countries' poverty-reducing expenditures and instead rely upon developing countries' governments to provide such data, even though the accuracy of these data and country capacity to provide this information is questionable. Additionally, for the infrastructure program, the World Bank developed a rapid diagnostic tool to identify at-risk countries and provide a detailed assessment of crises impacts and associated country infrastructure spending needs, but conducted the diagnostic in only one LIC, Bangladesh. Therefore, we found that the degree to which World Bank actions impacted government spending has been difficult to establish. The IFC responded to the food and fuel crises through lending in the agriculture and energy sectors and responded to the financial crisis through existing and new initiatives and by enhancing coordination with donors, but its response was limited by capacity constraints. Between 2008 and 2010, IFC increased its new lending commitments in LICs while new IFC commitments overall declined and foreign direct investment in LICs declined by 17 percent between 2008 and 2009. As a result, IFC's investments in LICs during the crises increased as a percentage of net foreign direct investment. Annual IFC commitments in LICs in fiscal years 2009 and 2010 exceeded $900 million each year, while commitments in fiscal year 2008 were about $460 million. IFC also committed a total of $1.1 billion to LICs through two crisis response initiatives, the pre-existing Global Trade Finance Program and the new Global Trade Liquidity Program, which supported $3.4 billion in trade through credit guarantees and risk sharing. According to officials, IFC also developed new approaches for coordinating with other multilateral institutions in LICs at the regional level.
The Joint Action Plan for Africa, established in 2009, for example, developed a method for IFC to collaborate more closely with other lenders in support of development activities in Africa. Similarly, officials said that agreements with donors, made to enhance the response to the financial crisis, will allow IFC to quickly coordinate with donors in response to a future crisis. According to IFC officials, IFC's response was limited by internal and external constraints. Internally, under its Articles of Agreement, IFC must undertake its financing on terms and conditions which it considers appropriate, taking into account, among other things, the terms and conditions normally obtained by private investors for similar financing. In addition, IFC has relatively limited resources as compared to other IFIs. Officials told us that because of these constraints, much of its crisis response relied on donor governments to provide additional funds. In some cases, this dependence negatively affected the speed of IFC's response because IFC could not respond until donor governments fulfilled their commitments. IFC also faced external capacity constraints in recipient countries. For example, officials explained that in Ethiopia, foreign investors, including IFC, are often subject to additional scrutiny by the government, which has limited IFC's ability to do business there. Overall, IFC officials said that the actions they took to respond to the crises sent a positive signal to the market, but officials noted that this is difficult to measure and did not provide quantitative evidence of this effect. During the crises response period between 2008 and 2010, the IMF response included committing approximately $4.9 billion in new lending to 28 LICs, temporarily lowering interest rates on its loans, and doubling the limits on how much individual countries could borrow. In addition, the IMF provided $250 billion to support all of its members, including LICs, which collectively received the equivalent of $5.8 billion. Governments could use these funds to boost international reserves, cushion against shocks, or meet balance-of-payments needs. Moreover, the IMF changed its lending instruments to address crisis impacts, aiming to make them more flexible and tailored to specific country needs. For example, according to the IMF, the newly created Rapid Credit Facility provides low-access, rapid, and below-market-rate financial assistance to LICs facing an urgent balance-of-payments need, without requiring program-based conditions. According to IMF officials, these efforts were supplemented by technical support and surveillance activities, which also played a role in assisting LICs through the crises. In addition, the IMF reported that its policy advice and programs in LICs were supportive of a countercyclical policy response and higher government spending during the crisis. For example, according to the IMF, government spending increased in almost 90 percent of LICs with IMF programs in 2009. We found that IMF loans to LICs increased more than sixfold, from approximately $748 million between 2005 and 2007 to about $4.9 billion between 2008 and 2010. In the three countries we reviewed—Burundi, Ethiopia, and Tanzania—the IMF reported that country-specific program goals were achieved. However, circumstances may change quickly, and in one case, inflation resurged soon after the program ended.
While conclusions from this sample are not generalizable to all LICs, these examples illustrate how IMF-supported programs operated in these three countries.
• In July 2008, Burundi started a 3-year $76 million arrangement with goals to support poverty reduction and macroeconomic stability. Approximately 3 years later, a June 2011 IMF review stated that performance under the program had been broadly satisfactory, despite the impact of the food and fuel shocks. At the same time, however, the IMF also lowered Burundi's 2011 economic growth projection to 4.2 percent, due in part to the expectation that higher food and fuel prices will continue.
• Ethiopia requested a $240 million arrangement in August 2009 to help steer the economy through the global financial crisis, with the goals of reducing inflation and building international reserves. In October 2010, an IMF review concluded that the program was on track and that government policies to reduce inflation and increase reserves had been successfully implemented. According to IMF officials, the program ended in November 2010, and inflation rose to about 30 percent in May 2011, mainly because the government did not implement agreed-to reforms.
• Tanzania began a $328 million 12-month arrangement in May 2009 with the goal of mitigating the adverse impact of the global financial crisis and addressing a projected deterioration in balance of payments stemming from a decline in exports and foreign direct investment. The IMF's subsequent review determined that country program goals were met, and the program was concluded on schedule a year after inception.
The IMF also noted that LICs supported by an IMF program increased government spending during the crises, including on health and education services, more than countries that were not supported by an IMF program, and implied that this was attributable to the IMF programs "as Fund financing reduced liquidity constraints and helped catalyze donors' support." However, we found that this causal link has been difficult to establish because the comparison groups differ in important ways. In order to conclude that differences in government spending are driven by IMF programs, the groups of countries being compared need to be as similar as possible. Our analysis of the data underlying the IMF's assertion found that non-program LICs consistently differed from program LICs across certain measures of institutional quality and macroeconomic policy. Furthermore, the finding that program LICs increased spending more than non-program LICs is highly sensitive to the inclusion of a few countries in the group of non-program LICs, which either did not need a program or could not obtain one. Non-program LICs had lower scores on a variety of measures of institutional quality, such as political stability, government effectiveness, and rule of law. Countries with institutional weaknesses in these categories may overlap with the fragile states shown in fig. 2. In addition, non-program LICs had higher inflation rates and larger budget deficits prior to the crises than LICs with IMF-supported programs, which may indicate that non-program countries had less capacity to use fiscal and monetary policy to respond to the crises. We conducted several analyses to determine whether the IMF's results were sensitive to small changes in the underlying sample of countries, designed to make the set of non-program LICs more like LICs with IMF programs.
In one analysis, we omitted the countries with the two lowest scores on political stability in 2009 (Sudan and Yemen) from the set of non-program LICs, thereby making the sample more similar to program LICs. Based on the new sample, program LICs no longer increased spending more than non-program LICs. In another analysis, we omitted the countries with the two largest budget deficits prior to the crisis (Eritrea and Guyana) from the set of non-program LICs and similarly found that program LICs no longer increased spending more than non-program LICs. Importantly, it does not necessarily follow from these sensitivity analyses that IMF-supported programs were ineffective at increasing spending. A reasonable estimate of what might have happened in the absence of an IMF-supported program is necessary to assess the impact of programs on spending; the IMF analysis implicitly assumes this counterfactual is the experience of non-program LICs. A more rigorous analytical approach, one that systematically accounts for differences between program and non-program countries, would be needed to credibly conclude whether or not IMF-supported programs led to greater public spending during the crises. The IMF and World Bank prepare annual debt distress ratings, which assess countries' ability to repay their debt. The IMF and World Bank did not lower any LICs' debt distress rating as a result of the food, fuel, and financial crises. However, we found that some of the underlying macroeconomic projections might prove too optimistic based on current risks to the global economic recovery and rising commodity prices, as well as on our review of the DSAs for three countries. If these projections ultimately prove too optimistic and countries' ability to repay their debt declines significantly, some multilateral institutions could subsequently choose to provide more grants than loans to help lower the risk of debt problems reemerging. Our review of these DSAs is nongeneralizable and meant to be illustrative, not representative. The debt distress rating is the IMF and World Bank's assessment of the risk that a country will not be able to repay its future debt. In assessing risk and determining a sustainable debt level, the DSA considers the strength of the country's policies and institutions based on the World Bank's Country Policy and Institutional Assessment (CPIA) index. The index classifies LICs as weak, medium, or strong performers, with debt burden thresholds associated with each performance category, as shown in figure 11. For example, Ethiopia is in the "medium" performer category, which means that its performance will be judged against the debt burden threshold indicators for that category. The threshold indicator for the present value debt-to-export ratio is 150 percent. Exports are an important source of funding for repaying debt. The IMF and World Bank have determined that debt-to-export levels in excess of 150 percent put LICs' ability to repay debt at risk. According to Ethiopia's 2010 DSA, Ethiopia's debt-to-export ratio is projected to reach a high of 133 percent in 2011 and then decline. Burundi, which is classified as a "weak" performer, faces a lower, more constraining threshold of 100 percent. According to its 2010 DSA, Burundi's debt-to-export ratio exceeded this limit throughout the projection period by a wide margin.
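The threshold comparison just described is mechanical enough to sketch in code. The following minimal Python sketch checks a country's projected debt-to-export ratios against the threshold for its performance category. The weak (100 percent) and medium (150 percent) thresholds come from the text; the "strong" value and all projection figures are illustrative assumptions, not figures taken from the DSAs.

```python
# Present value debt-to-export thresholds by CPIA performance category.
# The weak (100) and medium (150) values are cited in the text; the
# "strong" value is an assumption included only for completeness.
THRESHOLDS = {"weak": 100, "medium": 150, "strong": 200}

def threshold_breaches(category, projected_ratios):
    """Return the years in which a country's projected debt-to-export
    ratio exceeds the threshold for its performance category."""
    limit = THRESHOLDS[category]
    return [year for year, ratio in sorted(projected_ratios.items())
            if ratio > limit]

# Hypothetical projections loosely patterned on the ratios cited in the
# text: Ethiopia (medium) peaks at 133 percent in 2011 and then declines;
# Burundi (weak) stays at or above 200 percent.
ethiopia = {2011: 133, 2012: 128, 2013: 121}
burundi = {2011: 205, 2012: 202, 2013: 200}

print(threshold_breaches("medium", ethiopia))  # [] -- no breach of 150
print(threshold_breaches("weak", burundi))     # [2011, 2012, 2013]
```

The extent and duration of such breaches, together with alternative scenarios and stress tests, feed into the four-category risk rating discussed next.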
For example, from 2011 through 2013, that ratio was projected to be at or above 200 percent. The IMF and World Bank use the extent and duration of the threshold breaches to determine the country's debt distress risk rating, as discussed below. The assessment of the country's risk of debt distress—meaning the country cannot service its debt without resorting to exceptional finance (such as debt relief) or a major correction in balancing its income and expenditures—depends on how the country's debt indicators compare with these debt burden threshold indicators under the DSA's "baseline" scenario, as well as under alternative scenarios and stress tests. The baseline is the main macroeconomic scenario, which describes the evolution of the debt based on realistic assumptions and projections of key macroeconomic variables such as GDP, inflation, exports, imports, and government revenues. Countries are classified into four categories—low risk, moderate risk, high risk, or in debt distress—according to their likelihood of debt distress, based on the extent and duration of breaches in their threshold indicators. (See fig. 12.) Debt burden thresholds are not rigid ceilings, and, according to the IMF and World Bank, the debt distress rating seeks to strike a balance between a mechanistic use of the categories and a judgmental approach. Countries classified as "in debt distress" or at "high risk of debt distress" receive 100 percent grant financing from IDA, while countries at moderate risk receive 50 percent grants and 50 percent concessional loans, and countries at low risk continue to receive 100 percent concessional loan financing. As shown in figure 12, 13 of the LICs are "in" or at "high" risk of debt distress and 24 are either at "moderate" or "low" risk. The DSA uses a 3-year moving average CPIA score in determining a country's policy performance in order to reduce variations in the risk of debt distress rating stemming from small annual fluctuations in the CPIA that do not represent a material change in countries' capacity to service their debt. If, following the release of the new annual CPIA score, the updated 3-year moving average breaches the applicable CPIA boundary, the country's performance category changes only if the size of the breach exceeds 0.05; if the breach is 0.05 or less, the category changes only if the breach is sustained for 2 consecutive years. Although the crises adversely impacted LICs' economies, the IMF and World Bank did not change any country's debt distress rating as a result of the crises, indicating that they did not expect the crises to adversely impact LIC economies enough to significantly impair their ability to repay debt. The IMF forecasted a rebound in LIC growth in line with the forecast of a quick recovery for the global economy. The global recovery, which the IMF subsequently reported is subject to risks, is expected to boost demand for LIC exports and improve access to foreign capital, both of which are expected to facilitate private sector growth. According to the IMF, the LICs' period of growth prior to the crises provided a cushion, helping countries weather the food and fuel price increases between 2007 and 2008 and the global financial crisis. As a result, LICs were able to implement countercyclical policies, such as preserving or expanding spending to support the economy and protect the poor, and expanding public investment.
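The CPIA smoothing rule described above also lends itself to a short illustration. The Python sketch below applies the rule to a hypothetical country whose 3-year average CPIA edges just past a category boundary; the boundary value of 3.25 and all scores are illustrative assumptions rather than figures from the index.

```python
def moving_average_cpia(annual_scores):
    """3-year moving average of the most recent annual CPIA scores."""
    return sum(annual_scores[-3:]) / 3

def category_changes(avg, boundary, breached_last_year):
    """Apply the reclassification rule described in the text: a breach of
    the boundary by more than 0.05 changes the performance category
    immediately; a breach of 0.05 or less changes it only if sustained
    for 2 consecutive years."""
    breach = avg - boundary        # positive if the average crosses upward
    if breach <= 0:
        return False               # no breach, no change
    if breach > 0.05:
        return True                # large breach: reclassify now
    return breached_last_year      # small breach: reclassify only if repeated

# Hypothetical example with an assumed boundary of 3.25 between categories.
scores = [3.22, 3.26, 3.30]
avg = moving_average_cpia(scores)                             # 3.26, breach of 0.01
print(category_changes(avg, 3.25, breached_last_year=False))  # False: wait a year
print(category_changes(avg, 3.25, breached_last_year=True))   # True: sustained
```

The same logic would apply symmetrically when the average falls below a boundary; the sketch handles only the upward case for brevity.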
For reasons other than the crises, the IMF and World Bank changed 10 LICs' debt distress ratings from 2007 through 2010. (See fig. 13.) Nine ratings improved for the following reasons:
• Six countries received full and irrevocable debt relief from government and multilateral creditors.
• Two countries, Chad and Niger, had higher projected GDP growth stemming from growth in mineral sectors. Chad achieved higher GDP growth due to oil sector growth. Niger is implementing large uranium and oil projects, which are expected to boost exports and government revenues significantly.
• Ethiopia's rating changed due to the inclusion of workers' remittances as an important source of debt service financing and the resilience of the Ethiopian economy to the global economic crisis.
Only one rating worsened. Burkina Faso's debt distress rating changed from moderate in 2007 to high in 2008 because the country was reclassified from a strong to a medium performer and therefore exceeded the new, lower debt burden indicators. The implication of a change in a country's debt distress rating is that a country whose rating improves will generally receive a larger proportion of concessional loans, and a country whose rating worsens will receive a greater proportion of grants, as shown in figure 13. According to the IMF and World Bank, the DSA's quality depends to a large extent on the realism of the projections under the baseline scenario. As explained in their policy paper, realistic means that the scenario takes account of a country's growth potential as well as its capacity constraints, including the risk that governments do not implement desired policy reforms. Further, historical averages for the key macroeconomic variables for the past 10 years may provide some guidance about the extent of realism in the baseline scenario projections. The IMF has indicated that explicit justification is required if sustainable debt ratios are driven by DSA assumptions that deviate sharply from historical norms. We assessed the realism of the 2010 DSA projections for three countries—Burundi, Ethiopia, and Tanzania—and found that the projections were potentially too optimistic. While conclusions from this sample are not generalizable to all LICs, these examples are illustrative of how DSAs are conducted. For the sample we reviewed, we based our conclusion on our analysis of the divergence between the DSA projections and their historical values, as well as on the reasonableness of the DSAs' underlying assumptions that countries (1) would realize growth-enhancing investments; (2) would implement agreed-to reforms, such as tax reforms that would boost government revenues; and (3) would not be subject to adverse country-specific factors, such as recurring droughts, floods, and political instability. For the three countries we reviewed, we found that the macroeconomic projections did not adequately consider the countries' vulnerabilities, such as failure to implement reforms, inability to make planned investments, or recurrence of adverse weather or political instability. The 2010 DSA for Burundi projects that the real GDP growth rate will increase from its 10-year historical average of 2.7 percent to an average of 4.7 percent over the medium term.
This projected strong GDP growth depends on several factors, including an increase in anticipated export earnings from privatization of the coffee sector, which accounts for about two-thirds of total exports, and integration into the East African Community, which could give Burundi access to a broad market of about 120 million people and attract more investment. However, Burundi might not meet the GDP projection if it does not realize the higher export earnings from reform of the coffee sector. Privatization of the coffee sector is occurring more slowly than expected, with only 13 of 117 coffee processing facilities sold because there were few interested buyers. While the government plans to accelerate the sale of the remaining facilities beginning in 2011, it is not clear whether investors will buy them. In June 2011, the IMF lowered Burundi's 2011 growth projection from 4.5 percent to 4.2 percent, noting that higher food and fuel prices were likely to continue to increase throughout the year. The IMF reported that risks to Burundi's macroeconomic outlook are significant and include higher food and fuel prices and a worsening of the political, social, and security situation, which would endanger donor support and could further worsen debt indicators. Nonetheless, the 2010 DSA assumes Burundi's security and political situation will continue to improve. Moreover, Burundi's 2010 DSA projected a large decrease in the fiscal deficit-to-GDP ratio, from a 5 percent average during 2007 through 2009 to 1.3 percent in 2015, based on a widening of the tax base as a result of continued tax reforms as well as reductions in spending. The IMF said that mobilizing domestic revenue is critical for Burundi's fiscal sustainability. However, the fiscal deficit projections assume that Burundi will control government wages and reduce defense and security spending. In 2010, the IMF and World Bank changed Ethiopia's risk of debt distress from "moderate" to "low" based on the inclusion of workers' remittances, which means that Ethiopia now receives 100 percent concessional loan financing instead of 50 percent grants and 50 percent concessional loans. In addition, Ethiopia's 2010 DSA projected that Ethiopia would achieve strong export growth and implement key reforms. Ethiopia's 2010 DSA projected exports as a percent of GDP to rise to 19.1 percent by 2015, compared to the 10-year average of 13.5 percent, and to further increase to a 31 percent average during 2016 through 2030. However, in April 2011, IMF staff expressed concern that Ethiopia's failure to implement monetary reforms, including removing the government-imposed bank credit ceilings, as well as highly negative real interest rates, were hindering the commercial banks' financing role, which is fundamental to higher growth. In May 2011, IMF staff lowered estimates for Ethiopia's real GDP growth rate from the 7.7 percent forecast in Ethiopia's 2011 DSA to 6 percent for 2011 through 2012 due to high inflation, restrictions on private bank lending, and a more difficult business environment. The projected large increase in growth in the 2010 DSA depended on anticipated growth in service exports resulting from an expected increase in electricity exports based on current energy investments, greater investment in the national airline, and continued good harvests supporting agriculture. The DSA notes Ethiopia's debt profile is very sensitive to export growth assumptions.
The inclusion of workers' remittances as a source of debt repayment was a main reason for the improvement in Ethiopia's debt distress rating in 2010. While the DSA projects workers' remittances to remain large and stable at 8.5 percent of GDP, it did not provide historical data or additional information upon which to base this conclusion. Moreover, IMF staff reported that Ethiopia's risk of external shocks, such as droughts and high international commodity prices, is high. Ethiopia depends on rain-fed agriculture, which accounts for nearly half of GDP and 85 percent of employment. For the last 30 years, Ethiopia has been hit by droughts every 5 to 7 years, as well as frequent increases in international prices. However, staff told us that country-specific factors, such as weather-related shocks, were not specifically incorporated in the baseline scenario as such. Tanzania's 2010 DSA projects higher export and GDP growth. The 2010 DSA projects a 6.5 percent real GDP growth rate in 2011, rising to 7.5 percent in 2015. The growth in real GDP is based on expected returns from the increase in infrastructure investment, including a rise in agricultural productivity and improved food distribution through investment in rural roads and markets, which is to be financed by additional domestic and external borrowing on less concessional terms. Also, Tanzania's DSA projects an increase in the export-to-GDP ratio from 24.1 percent in the medium term to 28.5 percent in the long term, based on the country's potential to substantially increase commodity and manufacturing exports. Following discussions with Tanzanian government officials, in Tanzania's 2011 DSA the IMF projected a slowdown in real GDP growth for fiscal year 2011/12 (July through June), from the earlier projected rate of 7.1 percent to 6.6 percent. This revision was based on adverse weather, rising fuel prices, and lagging investment. The poor rainfall disrupted electricity generation, and lagging investment coupled with higher demand for electricity led to power rationing, which adversely impacted growth. The rising cost of fuel increased the replacement cost of power generation. In addition, the ongoing drought could adversely affect the 2011 food harvest. To the extent that DSA projections prove too optimistic, debt problems may reemerge. This could become evident in the projections in future DSAs, and, if the deviations from the prior projections are significant enough, the country's debt distress rating could change, meaning the country could receive more grants than loans. IMF and World Bank staff advise that the quality of the DSA hinges critically on the projections and assumptions underlying the baseline scenario, since alternative assumptions can lead to substantially different debt dynamics. The causes of optimism in the DSAs could be at the global macroeconomic level as well as at the country level. According to the IMF, there are increased risks to the global economic recovery, including slower growth and extreme volatility in commodity prices. At the country level, our assessment of three countries' DSAs illustrates how projections and assumptions can change over a relatively short period of time, potentially affecting a country's risk of debt distress. For example, Burundi's present value of debt-to-exports ratio already exceeds the country-specific threshold by a wide margin throughout the projection period.
Burundi faces challenges in generating higher government revenue through tax reform and increasing export earnings due to slower than anticipated coffee sector reforms. Lower fiscal revenues or declining GDP growth would lead to a considerable deterioration of its debt ratios, according to its 2010 DSA. This could lead to Burundi being classified as "in debt distress." However, since Burundi already receives only grants from creditors, options to assist the country financially might be limited. Similarly, if Ethiopia does not achieve the export growth projections in its DSAs, its debt ratios and performance ratings could worsen and, if the deterioration is significant enough, could lead to a worsening of its debt distress rating. If this occurs, the terms of Ethiopia's financing from certain lenders could change. Ethiopia now receives its financial assistance entirely as concessional loans, but a change in risk rating could lead to financing with a larger grant component. Regarding Tanzania, IMF staff reported in 2011 that revenue collection had fallen short of ambitious targets and that the rapidly increasing fiscal deficit was being financed by increasingly expensive resources due to a shift from mostly grants to loans. Though Tanzania's current risk of debt distress is low, maintaining current spending policies could widen the fiscal deficit, leading to rising debt servicing costs, with an adverse impact on the debt indicators. In May 2011, the IMF reported that Tanzania faces formidable challenges given widespread poverty, high population growth, and tremendous dependence on foreign aid, and that the near-term economic outlook is subject to considerable uncertainty with a rising risk of donor aid shortfalls and higher international fuel prices. Moreover, the effectiveness of the debt distress rating and associated financing requirements in keeping countries' debt at sustainable levels depends on their broader use by borrowers and creditors. Some other multilateral institutions—including the African Development Bank, Asian Development Bank, Inter-American Development Bank, and International Fund for Agricultural Development—and developed countries use the debt distress rating system to make decisions about their terms of financing. If projections ultimately prove too optimistic and countries' ability to repay their debt declines significantly, some multilateral institutions could subsequently choose to provide more grants than loans to help lower the risk of debt problems reemerging. The food, fuel, and financial crises resulted in slower economic growth, higher deficits, and increased inflation for LICs. While the overall impact of the crises on LICs may have been milder than in the advanced economies, the high rate of poverty in these countries increases their overall vulnerability. For example, our previous work shows that many LICs were experiencing protracted food emergencies and had severe and widespread malnourishment even prior to the onset of the crises. The IFIs responded to the crises by increasing the amount of resources made available to the LICs. The IMF increased lending to LICs more than sixfold, to almost $5 billion. The World Bank committed $18.1 billion through regular lending and five new crises response initiatives that committed $12.2 billion in financial assistance to LICs, including $1.4 billion in new funding. The World Bank provided funding as a mix of loans and grants, depending on the performance and debt vulnerability of each country.
The World Bank was able to meet its goal of increasing the speed of disbursement for several initiatives, but the overall picture is mixed, especially when compared to the pre-crises period. Furthermore, in the case of both institutions, the impact of these new resources on LIC government spending during the crises has been difficult to establish. According to the World Bank and IMF, the crises did not significantly impair the countries' ability to repay their future debt, because they expected the world economy to reestablish its pre-crises growth levels and the LICs to implement the reforms necessary to achieve projected future growth levels. However, the increased risk to the global recovery and the extreme volatility of commodity prices may undermine the realization of these expectations. To date, the mix of loans and grants provided by the multilateral development banks has been largely unaffected by the crises. We found that the projections for our case study countries may prove too optimistic, which could contribute to debt problems reemerging, as the amount of loans countries receive could be greater than what would be considered sustainable. However, given that the IFIs update the DSAs on a regular basis, any excessive optimism should become evident over time, and the World Bank and other lenders could then increase the amount of grants they provide, which would help mitigate potential debt problems. The U.S. Treasury, World Bank, and IMF provided written comments on a draft of this report, which are reprinted in appendixes IV, V, and VI, respectively. The U.S. Treasury commented that the IFIs appropriately responded to the crisis and effectively managed the trade-offs associated with quickly disbursing funds in an environment of limited capacity to absorb aid, and that the United States strongly advocated for increased IFI engagement in LICs during the crises. The Treasury letter also stated that we provided a good overview of how the IMF responded forcefully to the crisis. In our discussion of the impact of IMF programs on government spending, Treasury suggested we should have examined the impact of IMF programs on social spending. However, we would emphasize that accounting for the differences between program and non-program countries is critical to estimating the impact of IMF programs on spending during the crisis, which the IMF did not do in its 2010 report. The U.S. Treasury also noted that speed of disbursements is just one measure of effective crisis response and that it is important to consider trade-offs between speed of disbursements and the need to ensure adequate governance structures and fiduciary controls are in place. We included this information in our report. In addition, the Treasury stated that by working closely with the World Bank and other multilateral development banks (MDBs) to put in place the right fiduciary arrangements and strengthen country capacity to absorb and manage MDB assistance, it can improve the quality of the World Bank's and other MDBs' interventions and improve the monitoring and reporting of development results. The World Bank stated that it welcomes and agrees with our overall conclusion that IFIs, including IDA and IFC, met many goals in response to the crises in LICs. The World Bank also stated that it achieved a significant increase in both its commitments and its disbursements to LICs.
We acknowledge that the World Bank responded to crises in LICs by increasing its commitments and disbursements through regular IDA lending and by establishing initiatives. Our analysis focused on the initiatives because these were specifically designed to respond to the crises. Our calculations for the overall commitments and disbursements, as well as the disbursement rates, differed from the World Bank's because our methodology sought to isolate those activities which were explicitly undertaken in response to the crises. The World Bank said that IDA accelerated assistance delivery without compromising attention to governance and aid effectiveness. We acknowledge that disbursement rates, which vary over time, depend on a number of factors, including recipient country capacity, need, and governance, and the type of lending. The World Bank said there is growing evidence that IDA-supported public spending for essential services increased. As we previously reported, IFIs do not independently track developing countries' poverty-reducing expenditures and instead rely upon developing countries' governments to provide such data, even though the accuracy of these data and country capacity to provide this information is questionable. Finally, regarding debt sustainability, the World Bank noted that our analysis is based on a sample of just three countries and thus cannot assess the realism of the 2010 DSA projections. We based our conclusions on our assessment of the realism of the 2010 DSA projections for three countries, as well as on the current risks to the global economic recovery reported by the IMF in August 2011. The IMF indicated broad agreement with the findings of our report, including the overview of the impact of the crisis on LICs. While the IMF suggested that our assessment is narrow, we paid sufficient attention to a range of response efforts, mentioning the IMF's call for countercyclical policy responses, improved macroeconomic conditions in LICs, the doubling of access levels, and modifications to lending instruments. The IMF acknowledged that comparing program with non-program countries does not prove a causal link from program engagement to higher spending and noted a recent related study. We include a reference to the study, entitled "What Happens to Social Spending in IMF-Supported Programs," but also note that it does not necessarily reflect the results during the crises response period. The IMF also stated that the growth assumptions underlying LIC DSAs have been borne out so far and that DSAs have built-in methods for addressing risks. We described the IMF's use of alternative scenarios and stress tests to arrive at a country's debt distress rating. However, we noted that these tests are very general and do not adequately reflect country-specific risks, including political instability, adverse weather, global economic crises, and failure to implement reforms or make planned investments. Our analysis of the three countries' DSAs is intended to be illustrative and not generalizable. Our conclusion, that projections which might be too optimistic could be mitigated by future DSAs and additional grants, is not dependent on these three countries. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to Members of Congress, the U.S. Treasury, the IMF, and the World Bank.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

Our objectives were to examine (1) the economic impact of the crises on low-income countries (LIC), (2) international financial institutions' (IFI) responses and reported results, and (3) IFIs' assessment of the impact of the crises on LICs' ability to repay their debt.

Economic Impact of Crises

To examine the impact of the crises on LICs' economic performance, we collected and analyzed key macroeconomic data from 1990 through 2010 for 38 of the 40 LICs for which data were available, except where noted. We analyzed variables including real gross domestic product (GDP), current account, fiscal deficit, government expenditures and revenue, the consumer price index measure of inflation, and foreign direct investment. We obtained these data series from the widely used IMF and World Bank databases—World Economic Outlook, International Financial Statistics, and World Development Indicators. We computed LICs' average economic performance with respect to each of the key economic variables, including current account and fiscal deficits and inflation, using a real GDP weighted average based on purchasing power parity (PPP) GDP, so that the resulting weighted average reflects each country's size in terms of its share in the total GDP of the entire group of LICs. We used the PPP GDP weights to construct weighted averages for the other variables, including fiscal and current account deficits, government revenue and expenditures, and the consumer price index measure of inflation.

We analyzed the LIC group's macroeconomic performance over the 2007 through 2009 crises period and compared this to the group's performance during the pre-crisis period from 2004 through 2006 to determine whether economic performance improved or deteriorated. We also disaggregated the results to determine which countries experienced improvements and deteriorations in each of the key macroeconomic variables. We corroborated our results with data from the economic and financial forecasting firm IHS Global Insight. We also examined and assessed the DSAs' incorporation of World Economic Outlook assumptions concerning the global pace of recovery, including those of the country's key trading partners. We also compiled information on international food and oil price data from the UN Food and Agriculture Organization's Food Price Index and the U.S. Department of Energy. Additionally, we reviewed IMF country reports and World Bank Country Assistance Strategies, which also contain limited historical information on the key macroeconomic variables. Some IMF data are based on developing country government data, and statistical capacity varies greatly across countries; we discussed these limitations with IMF officials. We determined that the data were sufficiently reliable for summarizing countries' past macroeconomic performance.

To examine IFIs' responses to the crises and reported results, we analyzed documents and data from the World Bank and the IMF.
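To illustrate the PPP-GDP weighting approach described above, the following is a minimal sketch in Python. The country names, figures, and column names are hypothetical placeholders, not the underlying GAO data.

```python
# Minimal sketch of a PPP-GDP-weighted group average, as described above.
# Country names and figures are hypothetical placeholders.
import pandas as pd

data = pd.DataFrame({
    "country": ["A", "B", "C"],
    "ppp_gdp": [50.0, 30.0, 20.0],   # PPP GDP in billions of dollars
    "inflation": [6.2, 9.1, 4.5],    # consumer price inflation, percent
})

# Each country's weight is its share of the group's total PPP GDP.
data["weight"] = data["ppp_gdp"] / data["ppp_gdp"].sum()

# The weighted average reflects each country's economic size in the group.
weighted_inflation = (data["weight"] * data["inflation"]).sum()
print(f"PPP-weighted average inflation: {weighted_inflation:.2f} percent")
```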
For the World Bank, these documents include proposals and framework documents for the crisis response initiatives; Country Assistance Strategies and Interim Strategy Notes; Project Information Documents and Implementation Completion Reports; and Independent Evaluation Group reports and approach papers. For the IMF, we reviewed IMF country reports; countries' letters of intent; research papers; and Independent Evaluation Office reports. We also reviewed joint World Bank-IMF publications. We interviewed officials from these institutions, as well as from the Department of State and the U.S. Agency for International Development.

To evaluate whether the World Bank's crisis response was consistent with the institution's stated goal of increasing the speed of disbursements, we analyzed World Bank data on financial commitments and disbursements made to LICs between 2005 and 2010. To do this, we determined the first year disbursement rate for all projects approved during the crises response period (2008 through 2010); all projects approved during the pre-crises period (2005 through 2007); and all projects approved during the crises response period under each initiative, as well as those approved during the crises response period outside of any initiative. We did not use the World Bank's standard disbursement rate methodology because our analysis sought to isolate those activities that were explicitly undertaken in response to the crises. For our analysis, this included only projects approved after the World Bank first stated its intention to respond to any of the three crises. This occurred in early 2008, with the establishment of the Global Food Crisis Response Program. We also identified the total number of projects and the amount of funding committed in association with any of the World Bank's crisis response initiatives.

To measure the speed of disbursements, we first calculated the total disbursements for each project that took place during the first four quarters following project approval. We then determined the average disbursement rate by using a weighted average, which is computed as the ratio between the sum of first year disbursements and the sum of the commitments. One type of project, "additional financing" projects, tracks disbursements under the "parent" project, though commitment amounts are recorded under the "additional financing" project. In our dataset, there were 160 of these projects, out of a total of 622. To ensure that disbursements from "additional financing" projects were captured in our analysis, we developed a methodology that calculated the remaining commitment balance of the "parent" project as of the quarter in which the additional financing project was approved. We then added that balance to the new commitment from the additional financing project to form the denominator of the disbursement ratio. We then calculated the first year disbursement rate by determining the first four quarters of disbursements under the "parent" project following the approval date of the "additional financing" project. We used disbursement data through June 2011, the latest available, to ensure that as many projects as possible (97 percent of all projects) had four quarters of disbursements. Eighteen projects approved in the fourth quarter of 2010 had only three quarters of disbursement data. We used disbursement data through June 2008 for the pre-crisis period to make our analysis comparable. We then compared the various disbursement rates to one another to reach our conclusions.
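The following is a minimal sketch of the first-year disbursement-rate calculation described above, including the handling of "additional financing" projects. The project records and field names are hypothetical, not World Bank data.

```python
# Minimal sketch of the weighted first-year disbursement rate described
# above. Project records and field names are hypothetical.
import pandas as pd

projects = pd.DataFrame({
    "project_id": ["P1", "P2", "AF1"],
    "parent_id": [None, None, "P1"],         # AF1 is additional financing on P1
    "commitment": [100.0, 50.0, 40.0],       # commitments, millions of dollars
    "first_year_disb": [30.0, 20.0, 25.0],   # disbursements in first 4 quarters
    "parent_balance": [0.0, 0.0, 60.0],      # parent's remaining commitment
                                             # balance at AF approval (AF only)
})

# For additional-financing projects, the denominator is the new commitment
# plus the parent project's remaining commitment balance.
projects["denominator"] = projects["commitment"] + projects["parent_balance"]

# Weighted average: sum of first-year disbursements over sum of commitments.
rate = projects["first_year_disb"].sum() / projects["denominator"].sum()
print(f"First-year disbursement rate: {rate:.1%}")
```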
We assessed the reliability of the data we used in our analysis by comparing the consistency of the data among the various sources and discussing the data with World Bank and IMF officials. For the data used to determine World Bank disbursement rates, we interviewed World Bank officials to understand their database and correct errors in the data. We determined that the data used in our analysis were sufficiently reliable for our purposes.

To determine U.S. dollar values associated with the IMF's response to the crises, we used information on IMF program and funding levels. To calculate the portion of the $250 billion in support that the LICs received, we totaled the amount each country received in Special Drawing Rights, the IMF's unit of account, and multiplied that total by the August 28, 2009, conversion rate to arrive at a U.S. dollar value. To calculate that IMF loans to LICs increased more than sixfold, from approximately $748 million between 2005 and 2007 to about $4.9 billion between 2008 and 2010, we used data from the "IMF Lending Arrangements" online tool. For each year, we totaled new lending to LICs in Special Drawing Rights, then converted that total to dollars using the year-end exchange rate.

We also assessed the sensitivity of the results of an IMF analysis of government spending in LICs in 2009 using data from the IMF and the Worldwide Governance Indicators. We compared program LICs and non-program LICs using measures of institutional quality from the Worldwide Governance Indicators and indicators of macroeconomic policy from the IMF's World Economic Outlook database. We conducted sensitivity analyses by omitting countries with the lowest scores on certain measures of institutional quality or the least favorable pre-crisis macroeconomic policies from the sample of non-program LICs. We assessed the reliability of data used in our sensitivity analyses and found them to be sufficiently reliable for summarizing and ranking countries' institutional quality and macroeconomic policy.

To examine the extent to which IFIs' assessments of LICs' ability to repay their debt were affected by the crises, we reviewed the changes in each of the 40 LICs' Country Policy and Institutional Assessments (CPIA) and debt distress ratings over the period 2007 through 2010 to determine if the crises led to a change in a country's rating. We used CPIA data from the World Bank's online database and the debt distress ratings from each LIC's debt sustainability analyses (DSA) over the period. To illustrate how the DSAs are conducted and how macroeconomic projections affect the reliability of the debt distress rating, we focused on three case study countries: Burundi, Ethiopia, and Tanzania. We selected these countries based on criteria that included the number of IFI projects, the amount of IFI financial support, and country conditions. For example, between fiscal years 2008 and 2010, Ethiopia and Tanzania were in the top three recipients of IDA assistance by dollar value, receiving approximately $4.8 billion, while Burundi is a post-conflict fragile state. For each of our three case-study countries, we analyzed the country's 2010 World Bank-IMF DSAs, as well as the associated IMF program reviews and Article IV consultations, and World Bank Country Assistance Strategies. We also reviewed prior and subsequent DSAs to make comparisons and check for data consistency. In addition, we met with the IMF staff responsible for preparing the DSAs for each of the case-study countries.
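A minimal sketch of the SDR-to-dollar conversion described earlier in this appendix follows. The country allocations and the conversion rate shown are hypothetical placeholders, not the actual rate or IMF figures.

```python
# Illustrative sketch of converting Special Drawing Rights to dollars,
# as described above. The allocations and rate are hypothetical.
SDR_TO_USD = 1.57  # assumed conversion rate for a given date (placeholder)

allocations_sdr = {  # hypothetical country allocations, SDR millions
    "Country A": 120.0,
    "Country B": 85.5,
    "Country C": 42.3,
}

total_sdr = sum(allocations_sdr.values())
total_usd = total_sdr * SDR_TO_USD
print(f"Total support: SDR {total_sdr:.1f} million = ${total_usd:.1f} million")
```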
We based our assessment of the DSAs' ability to accurately reflect a country's debt vulnerabilities on our analysis of the DSAs' macroeconomic projections and underlying assumptions, which form the basis of the country's risk of debt distress. These include the DSA's projections and assumptions regarding key macroeconomic variables, such as real GDP and export growth, and the divergence of the growth rates of these variables from their historical values; assumptions regarding the LIC's implementation of reforms and productivity-increasing investment; and assumptions concerning the country's vulnerability to external shocks, including adverse weather, political or regional instability, and rising food and fuel prices.

To determine economic performance over the study period for our three case study countries, and to calculate historical growth rates for key macroeconomic variables, we relied primarily on IMF and World Bank databases—World Economic Outlook, International Financial Statistics, World Development Indicators, and Balance of Payments Statistics. We also used data from IMF Article IV consultations and country program reviews. We compared GAO-calculated 10-year historical averages for the most important macroeconomic variables with the DSA's historical averages for these variables, which form the basis for the DSA's projections and debt ratios. We evaluated the IFIs' determination of a country's risk of debt distress partly on the extent to which the projected values of key macroeconomic variables diverge from their historical values and, where they diverge, whether the DSA provides reasonable justification for the divergence. In making this assessment, we also considered additional information available in the Article IV consultations, World Bank Country Assistance Strategies, and Global Insight Country Intelligence Reports. We also based our assessment of the IFIs' determination of the country's risk of debt distress on the DSA's consideration of country-specific factors, such as the country's susceptibility to weather-related shocks; political instability; and implementation of institutional reforms that would enhance the country's growth prospects, particularly in the economic and debt management areas. We discussed our approach and preliminary findings with officials from the IMF. We assessed the reliability of data used in our country analysis based on the consistency of data across various sources and determined them to be sufficiently reliable to make nongeneralizable assessments of the DSAs for the three case study countries.

We conducted this performance audit from September 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
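The historical-versus-projected comparison described above can be sketched as follows; the growth series and the DSA projection shown are hypothetical, not figures for any of the three case study countries.

```python
# Minimal sketch of comparing a 10-year historical average with a DSA
# projection, as described above. All values are hypothetical.
import statistics

historical_gdp_growth = [4.1, 3.8, 5.0, 2.9, 6.2,
                         4.4, 3.5, 5.1, 4.8, 3.9]  # past 10 years, percent
dsa_projected_growth = 7.5  # DSA's projected average growth, percent

hist_avg = statistics.mean(historical_gdp_growth)
divergence = dsa_projected_growth - hist_avg
print(f"10-year historical average: {hist_avg:.1f} percent")
print(f"Projection exceeds history by {divergence:.1f} percentage points")
# A large positive divergence without clear justification (for example,
# planned reforms or investment) suggests the projection may be optimistic.
```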
[Country profile tables: for each LIC, the appendix reports population (nearest million; 2009), average annual Official Development Assistance (ODA, dollars in millions), GNI per capita ($USD; 2008 or 2009), and the shares of assistance received from multilateral and bilateral sources.]

Note: Average Annual Official Development Assistance (ODA), for which the main objective is economic development and welfare, was calculated using data from 2008 and 2009 and does not include funding for 2010. ODA excludes certain items such as military aid and antiterrorism activities.

Between 2008 and 2010, the World Bank committed $12.2 billion in financial assistance to 38 LICs through five crisis response initiatives, including $10.8 billion from existing International Development Association (IDA) funds and $1.4 billion from new financial assistance. Figure 14 shows World Bank commitments to 38 LICs between 2008 and 2010.
Within the four crisis response initiatives that sought to increase the speed of disbursements of commitments from existing IDA funds, first year disbursement rates varied by country, as shown in figures 15 through 18.

The following are GAO's comments on the U.S. Treasury letter:

1. We emphasized that accounting for the differences between program and non-program countries is critical to estimating the impact of IMF programs on spending during the crisis, which the IMF did not do in its 2010 report.

2. We acknowledged that the speed of disbursements is one measure, among others, of effective crisis response. We acknowledged the Treasury's statement that the need to ensure that recipient country capacity and governance controls were sufficiently robust to absorb the additional resources provided during the crisis period played a role in the speed of disbursements.

The following are GAO's comments on the World Bank letter:

1. We included information about commitments and disbursements in our report, although our figures differ from the Bank's because we used different methodologies. Our methodology isolated those activities that were explicitly undertaken in response to the crises.

2. We acknowledged that the World Bank responded to crises in LICs through regular IDA lending and by establishing initiatives. Our analysis focused on the initiatives because these were specifically designed to respond to the crises.

3. Our analysis sought to isolate those activities that were explicitly undertaken in response to the crises. We did not assess the World Bank's standard approach to calculating disbursements. To measure the speed of disbursements, we first calculated the total disbursements for each project that took place during the first four quarters, including the quarter of project approval. We then determined the average disbursement rates for different groups of projects by using a weighted average, which is computed as the ratio between the sum of first year disbursements and the sum of the commitments for all projects that belonged to a group.

4. We acknowledged that disbursement rates, which vary over time, depend on a number of factors, including recipient country capacity, need, and governance, and the type of lending.

5. As we previously reported, IFIs do not independently track developing countries' poverty-reducing expenditures and instead rely upon developing countries' governments to provide such data, even though the accuracy of these data and country capacity to provide this information is questionable. We focused on the Fast Track Facility and the infrastructure program because these two initiatives were explicitly designed to support domestic spending in recipient countries during the crises.

6. We based our conclusion that the underlying macroeconomic projections might prove too optimistic on the realism of the 2010 DSA projections for three countries—Burundi, Ethiopia, and Tanzania—as well as on the current risks to the global economic recovery and rising commodity prices, reported by the IMF in August 2011. Our analysis of the three countries' DSAs is intended to be illustrative and not generalizable. Our conclusion, that projections which might be too optimistic could be mitigated by future DSAs and additional grants, is not dependent on these three countries.

The following are GAO's comments on the IMF letter:

1. We reported on a wide range of IMF responses, including the IMF's call for countercyclical policy responses, improved macroeconomic conditions in LICs, doubling of access levels, and modifications to lending instruments.
2. We believe that comparisons of program and non-program country performance can be misleading without appropriate context or analysis. The IMF acknowledged that comparing program with non-program countries does not prove a causal link from program engagement to higher spending and noted a recent related study. The study found that IMF-supported programs were associated with increased spending on education and health, as a percentage of GDP or of total spending, in LICs, based on data from 1985 through 2009. We include a reference to the study, entitled "What Happens to Social Spending in IMF-Supported Programs"—which was released after our audit work had concluded—and we described its conclusions and relevance in a footnote. In particular, the study's results represent the average effect of an IMF-supported program over the time period and therefore do not necessarily reflect the results during the crises response period.

3. We discussed the DSAs' assumption of a "quick recovery" in relation to both LICs and the global economy. The IMF forecasted LICs' recovery in line with the forecast of a quick recovery for the global economy, which was expected to boost demand for LICs' exports. In August 2011, the IMF reported renewed risks to the global recovery, which means that projections for future export growth could be too optimistic. However, in commenting on this report, the IMF noted that the "quick recovery" assumption had been borne out so far. Our report described the IMF's use of alternative scenarios and stress tests to arrive at a country's debt distress rating. However, we noted that these tests are very general and do not adequately reflect country-specific risks, including political instability, adverse weather, global economic crises, and failure to implement reforms or make planned investment.

Cheryl Goodman, Assistant Director; Marc Castellano; RG Steinman; Michael Hoffman; Elizabeth Kowalewski; and Arian Terrill made key contributions to this report. The team benefited from the expert advice and assistance of Adam Vogt, Etana Finkler, Bruce Kutnick, Fang He, Shirley Min, Leah DeWolf, Kathryn Buldoc, Martin de Alteriis, Tom McCool, Heneng Yu, Mary Moutsos, Mae Liles, and Holly Dye.
The 40 poorest countries in the world, known as low-income countries (LICs), have been negatively affected by successive food, fuel, and financial crises since 2007. In response, international financial institutions (IFI), including the World Bank and International Monetary Fund (IMF), have taken actions to increase financial assistance for affected countries. Between 2008 and 2010, Congress appropriated $3.3 billion to the World Bank's International Development Association, which funds development programs in LICs. Congress also authorized the U.S. representative at the IMF to vote to approve the sale of some of the IMF's gold to increase lending to LICs. LICs' ability to repay debt remains important as financing levels rise and decisions are made about the mix of loans and grants they receive.

GAO was asked to examine (1) the economic impact of the crises on LICs, (2) IFIs' responses and reported results, and (3) IFIs' assessment of the impact of the crises on LICs' ability to repay their debt. GAO analyzed documents and information from the World Bank and the IMF, including data on macroeconomic indicators, financial commitments, and debt analyses. GAO interviewed staff from the World Bank, IMF, and U.S. Treasury. GAO selected three African countries for more thorough analysis, a sample that is meant to be illustrative, not representative.

In LICs, the recent food, fuel, and financial crises resulted in slower economic growth, higher deficits, and higher inflation, but the macroeconomic impacts were less severe than those experienced by the advanced economies. The crises also slowed foreign direct investment in LICs, which had been growing steadily since 2000. During the crises period, LICs' average economic growth slowed from 7.1 percent in 2007 to 5.3 percent in 2009. IFIs have reported that lower growth rates caused by the crises could lead to increases in poverty in LICs, and GAO's previous work shows that many LICs were experiencing protracted food emergencies and had severe and widespread malnourishment even prior to the onset of the crises. During the crises, food and fuel prices rose significantly, then declined, and have risen again in 2011 to levels experienced during the crises.

In response to the crises, IFIs increased funding and disbursed some funds more quickly to LICs, but the impact of these actions on LIC government spending has been difficult to establish. Between 2008 and 2010, the World Bank committed $18.1 billion through regular lending and five crisis response initiatives, an increase of 39 percent from the pre-crises period. Total first year disbursements also increased by 12.7 percent. Three of the four initiatives designed to increase the speed of disbursements met their goal. However, the proportion of committed funds that were disbursed in the first year following project approval declined, as compared to the pre-crises period. Disbursement rates depend on several factors, including recipient country capacity, need, and governance, and the type of lending. The World Bank's International Finance Corporation responded to the crises through investments, trade initiatives, and enhanced coordination with donors, but its response was limited by the availability of resources and recipient countries' limited ability to implement programs quickly. The IMF boosted lending to LICs more than sixfold, to $4.9 billion, which governments could use to bolster their reserves or make international payments.
While most LIC governments' spending increased during the crises, GAO found that the impact of World Bank and IMF actions on spending has been difficult to establish. According to IFIs' analysis, the crises did not significantly impair LICs' ability to repay their future debt and thus did not necessitate an increase in their access to grants, which do not have to be repaid, relative to loans. The reliability of this analysis depends on the realism of IFIs' projections, which include quick economic recovery, implementation of policy reforms, and low inflation. According to IFIs' projections, the ability of six LICs to repay their debt improved during the crises, and thus they received more loans instead of grants. However, the IMF subsequently reported renewed risks to the global economic recovery, meaning that projections for future export growth, government revenue, and inflation might be too optimistic.

This report contains no recommendations. The World Bank, IMF, and U.S. Treasury generally agreed with our findings but identified areas where greater context could be provided.
Three primary types of pipelines form a 2.2 million-mile network across the nation. Natural gas transmission pipelines transport natural gas over long distances from sources to communities. Natural gas distribution pipelines continue to transport natural gas from transmission lines to consumers. Hazardous liquid pipelines transport crude oil to refineries and refined oil products, such as gasoline, to product terminals.

The Office of Pipeline Safety (OPS), within DOT's Research and Special Programs Administration (RSPA), is responsible for enhancing the safety of and reducing the potential environmental impacts of transporting natural gas and hazardous liquids through pipelines. The agency primarily carries out this responsibility through regulation, oversight, enforcement, and R&D. OPS sets and enforces regulations that pipeline operators must follow in designing, constructing, maintaining, and operating pipelines. State agencies responsible for overseeing pipeline safety help OPS to enforce its regulations. In December 2000, OPS began implementing a new risk-based regulatory approach, called "integrity management." Under this approach, operators are required, in addition to meeting minimum safety standards, to better protect pipeline segments where a leak or rupture could have significant consequences, such as near highly populated areas, by conducting new tests of these segments, completing repairs according to specified schedules, and developing comprehensive plans for addressing the range of risks facing these segments. The agency's R&D program is aimed at advancing the most promising technologies for ensuring the safe operation of pipelines. For example, current R&D projects seek to develop new and improved techniques for assessing the condition of pipelines and detecting anomalies—such as leaks, corrosion, and damage from excavators—that can lead to pipeline accidents. From 1998 through 2002, a total of 1,770 pipeline accidents occurred, resulting in 100 fatalities and $621 million in property damage.

OPS's R&D program has undergone major changes in the last several years. In particular, the agency has developed a new agenda for its R&D program, using the input of key experts and stakeholders, and has received significant increases in funding for this program. Until 2001, most of the research funded by OPS was aimed at helping the agency perform its regulatory function or was in response to an accident investigation or congressional direction. In November 2001, the agency held an R&D planning workshop to gain the perspectives of a variety of experts and stakeholders on areas of R&D that have the most potential for enhancing pipeline safety. Attendees included representatives of federal and state agencies, research organizations, industry groups, pipeline companies, and technical organizations that set industry safety standards. OPS used the R&D priorities identified in this workshop to develop a new agenda for its R&D program, focusing on three main areas: (1) developing new technologies for preventing damage and detecting leaks, (2) improving technologies for operating, controlling, and monitoring the condition of pipelines, and (3) improving pipeline materials. From March through December 2002, the agency issued announcements requesting project proposals in these areas, asking that prospective funding recipients provide at least 50 percent of the proposed project's cost. As of May 2003, it had funded 10 R&D proposals it received in response to these announcements.
In addition, after its November 2001 R&D workshop, OPS established a Web site on its R&D program in order to improve communications with experts, stakeholders, and the public about its R&D agenda and activities.

OPS's budget for its R&D program has risen more than sevenfold since fiscal year 1998, with the most significant increases occurring since fiscal year 2001. Figure 1 shows the agency's budgeted amounts for R&D from fiscal years 1998 through 2003. OPS's budget for R&D rose steadily from fiscal year 1998 to fiscal year 2001, from $1.3 million to $2.8 million. In fiscal year 2002, the agency received $4.8 million for its R&D program, which was $2 million more than RSPA had requested for the program. Agency officials attribute this funding increase to increased concerns for pipeline safety within Congress following the tragic pipeline accidents in Bellingham, Washington (1999), and Carlsbad, New Mexico (2000), which together caused 15 fatalities. For fiscal year 2003, RSPA requested and received about $4 million in additional funding for the program, for a total of $8.7 million. OPS officials told us that this requested increase was a response to heightened congressional interest in achieving technological solutions to pipeline safety, as evidenced by legislative proposals that called for increased attention to this area. RSPA is proposing funding for OPS's R&D program of $9.2 million in fiscal year 2004, an increase of about $0.5 million above the fiscal year 2003 amount. OPS officials explained that they intend to use most of this increase for a study, required by the Pipeline Safety Improvement Act of 2002, to assess the performance of controllers who monitor pipeline operations. Overall, agency officials also attribute recent increases in funding for OPS's pipeline safety R&D program to a recognition of the challenges posed by the agency's new integrity management regulatory approach and the criticality of the nation's pipeline infrastructure, in the aftermath of the terrorist attacks of September 11, 2001.

OPS's pipeline safety R&D program is continuing to evolve in response to new directives in the Pipeline Safety Improvement Act of 2002 for the planning and reporting of federal pipeline R&D efforts. The act, which became law in December 2002, assigned the Secretary of Transportation responsibility for developing a 5-year plan for pipeline R&D and transmitting the plan to Congress by December 2003, in coordination with DOE and the National Institute of Standards and Technology. (OPS officials told us that the Secretary has delegated this responsibility to OPS.) DOE operates an R&D program that is focused on developing future technologies to improve the integrity, reliability, and security of the natural gas infrastructure, including pipelines and storage facilities. In comparison with OPS's R&D program, which focuses on the development of quick-to-market technologies that could become available in the short term (1-3 years) or midterm (3-5 years), DOE's program focuses on technologies that could become available in the midterm (3-5 years) or longer term (5-8 years). The National Institute of Standards and Technology does not operate an R&D program focused on pipelines, but, reflecting its expertise in materials research, the act assigns it a key role in planning future pipeline R&D. The Department of the Interior's Minerals Management Service (MMS), although not assigned an R&D planning role in the act, funds pipeline R&D, including research on offshore pipeline safety.
Consequently, OPS plans to include that agency in efforts to develop a 5-year plan for pipeline R&D. The act requires the heads of DOT, DOE, and the National Institute of Standards and Technology to jointly report annually to Congress, beginning in December 2003, on the status and results of implementation of the plan.

Since fiscal year 2001, OPS has allocated its rising R&D funding to three main areas of pipeline safety R&D that were identified at its 2001 workshop: (1) developing new technologies for preventing damage to pipelines and detecting leaks, (2) improving technologies for operating, controlling, and monitoring the condition of pipelines, and (3) improving the performance of pipeline materials. The agency has also allocated some R&D funding to a fourth area, efforts to improve the agency's mapping and information systems. On the basis of our work, we believe that the agency's R&D funding is generally aligned with its mission and pipeline safety goals. The agency has obtained the views of external experts and stakeholders in determining what types of R&D are aligned with its mission of ensuring the safe, reliable, and environmentally sound operation of the nation's pipeline transportation system. The agency has also recently improved coordination with other federal agencies that fund pipeline R&D in order to avoid overlap between their R&D programs. Both of these practices have been recommended by leading organizations that conduct scientific and engineering research. OPS has also linked its R&D efforts with its performance goals of reducing the impacts of pipeline incidents, including fatalities and injuries, and reducing spills of hazardous material. In its plans, the agency has described how new and improved technologies resulting from its R&D funding can help achieve these performance goals. Finally, a number of key experts and stakeholders told us that, in their view, the agency has chosen appropriate R&D areas to fund.

OPS allocates its R&D budget to three major areas involving the research and development of pipeline safety technologies, as well as to a fourth area—efforts to improve the agency's pipeline mapping and information systems—that does not involve such research and development. Figure 2 shows how the agency plans to distribute its fiscal year 2003 R&D budget of $8.7 million among these areas. OPS plans to spend the largest share of its R&D budget, 46 percent, or $4.0 million, on the area of Damage Prevention and Leak Detection, which includes the development of new technologies to prevent damage to pipelines, detect pipeline defects, and quickly and accurately locate and control pipeline leaks. Damage to pipelines from "third parties," such as companies performing excavation work, is the leading cause of pipeline failures and can lead to property damage and injuries or fatalities. OPS plans to allocate 21 percent of its R&D budget, $1.9 million, to the area of Enhanced Operations, Controls, and Monitoring, which includes improvements in technologies for operating, controlling, and monitoring the integrity of pipelines to help identify and prioritize pipeline safety problems and solutions. The agency intends to spend a slightly smaller amount, 19 percent of its R&D budget, or $1.7 million, on the area of Improved Materials Performance, which includes improvements in pipeline materials in order to extend the integrity and lifetime of installed pipelines and their various components.
Finally, the agency plans to allocate the smallest portion of its R&D budget, 14 percent, or $1.2 million, to the area of Mapping and Information Systems, which includes efforts to improve the collection, integration, and analysis of data on the location and safety performance of pipelines. These efforts make pipeline mapping information available to federal, state, and local officials and support the pipeline inspection activities of OPS and its state partners.

Since fiscal year 2001, OPS's allocation of funding to each of the three main areas of pipeline safety R&D—Damage Prevention and Leak Detection; Enhanced Operations, Controls, and Monitoring; and Improved Materials Performance—has risen significantly, while its allocation to Mapping and Information Systems efforts has remained level. The tripling of the agency's R&D budget—from $2.8 million in fiscal year 2001 to $8.7 million in fiscal year 2003—has enabled it to increase funding for these three R&D areas. Specifically, OPS has increased funding for R&D efforts in Damage Prevention and Leak Detection from $1.3 million in fiscal year 2001 to $4.0 million in fiscal year 2003, an increase of over 200 percent. The agency has increased funding for Enhanced Operations, Controls, and Monitoring from $309,000 in fiscal year 2001 to $1.9 million in fiscal year 2003, an increase of more than 500 percent. OPS started funding Improved Materials Performance research in fiscal year 2002, increasing funding in this area to a level of $1.7 million in fiscal year 2003. (A worked arithmetic check of these shares and increases appears after this section.)

Agency officials explained to us that they allocated funding to these three R&D areas in fiscal years 2002 and 2003 based on the results of their 2001 R&D planning workshop. For example, they added Improved Materials Performance to their R&D agenda because it was identified as a priority area at the workshop. They have also considered other factors in deciding how to allocate funding. For example, the agency significantly increased funding for R&D in the areas of Damage Prevention and Leak Detection and Enhanced Operations, Controls, and Monitoring because of a great need for improved performance in these areas. OPS officials explained that, because the agency's new risk-based regulatory approach requires pipeline operators to assess and mitigate risks to pipeline segments where a leak or rupture could have significant consequences, these operators need better tools and methods for monitoring pipelines and making necessary repairs. They also noted that OPS's R&D results assist in the creation of industry standards on the appropriate use of new technologies. In addition, officials explained that they decided to allocate a significant portion of their R&D budget to the area of Improved Materials Performance because, on the basis of current information on the development of pipeline technologies, they believed that advances in this area held much promise for improving pipeline safety. Finally, OPS has allocated about $1.2 million per year to the Mapping and Information Systems area since fiscal year 2001 in order to maintain efforts to improve these systems. (See fig. 3.)
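The following minimal Python sketch reproduces the allocation shares and percentage increases cited above. Because the dollar amounts in the text are rounded, the computed shares differ slightly from the reported percentages.

```python
# Worked check of the fiscal year 2003 allocation shares and growth figures
# cited above (amounts in millions of dollars, rounded as in the text).
budget_fy2003 = 8.7
areas = {
    "Damage Prevention and Leak Detection": 4.0,
    "Enhanced Operations, Controls, and Monitoring": 1.9,
    "Improved Materials Performance": 1.7,
    "Mapping and Information Systems": 1.2,
}
for name, amount in areas.items():
    # Rounding of the dollar figures makes these computed shares approximate.
    print(f"{name}: {amount / budget_fy2003:.0%} of budget")

# Percentage increases from fiscal year 2001 funding levels.
print(f"Damage Prevention increase: {(4.0 - 1.3) / 1.3:.0%}")          # over 200 percent
print(f"Operations/Monitoring increase: {(1.9 - 0.309) / 0.309:.0%}")  # more than 500 percent
```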
OPS has provided $3.0 million in funding to 10 projects related to Damage Prevention and Leak Detection since fiscal year 2001. Examples of funded projects include the following: OPS provided $0.6 million in funding to five projects focused on improving in-line inspection techniques, including "smart pigs" and other technologies, for detecting damage and defects in pipe walls. Such improved techniques can help to prevent pipeline leaks or ruptures by making possible the early detection and repair of damage and defects. In partnership with the U.S. Air Force, OPS provided $1.2 million in funding to a project focused on developing an approach for detecting pipeline leaks using an airborne laser system that measures levels of chemicals in the atmosphere just above the earth's surface.

OPS has provided $0.9 million in funding to six projects related to Enhanced Pipeline Operations, Controls, and Monitoring since fiscal year 2001. Most of this funding—$0.6 million—has been allocated to two projects to improve alternative inspection techniques, called direct assessment, for identifying internal and external corrosion and other defects in pipelines that cannot accommodate smart pigs. This is a significant issue for natural gas pipelines. One industry association estimates that only about 35 percent of the total natural gas pipeline mileage can accommodate smart pigs, which are typically used to assess the condition of liquid pipelines. OPS officials told us that they are planning to fund three additional R&D projects in this area in June 2003.

As of May 2003, OPS has provided $0.1 million in funding to one project in the area of Improved Materials Performance. This project seeks to develop a "smart" composite pipe that will allow for real-time monitoring of the condition of the pipe through a remote monitoring system. The agency requested proposals in this area in December 2002 and expects to start funding some of these proposals in the summer of 2003. Among the types of proposals that OPS has requested are proposals to develop materials that better withstand third-party damage and corrosion; higher grade/strength steels; and materials that facilitate the operation of pipelines at higher design pressures.

Finally, of the roughly $1.2 million that OPS has allocated each year since fiscal year 2001 to the Mapping and Information Systems area, it spent or plans to spend about $800,000 each year for efforts to improve the National Pipeline Mapping System, which depicts the location of pipelines in relation to areas that are populated or environmentally sensitive, and about $400,000 each year for efforts to integrate the information systems the agency uses in overseeing pipeline safety in cooperation with the states. The agency expects to continue funding this area at this level for the foreseeable future in order to improve and update these systems continually. OPS officials explained that these mapping and information systems assist OPS inspectors and state and local officials in their efforts to oversee pipelines and protect the community and environment from pipeline leaks or ruptures.

OPS's mission is to ensure the safe, reliable, and environmentally sound operation of the nation's pipeline transportation system. It has indicated in its budget and plans that its R&D program supports this broad mission as well as the following more specific performance goals: (1) to reduce deaths, injuries, property damage, and economic disruptions resulting from pipeline incidents and (2) to reduce the amount of oil and other hazardous liquids spilled from pipelines. The agency has described how new and improved technologies resulting from its R&D funding can help achieve these performance goals.
For example, the number of pipeline incidents and the amount of hazardous material spilled could be reduced through the use of improved technologies for detecting third-party damage, corrosion, and defects and the use of improved pipeline materials that can better withstand damage and corrosion.

The Committee on Science, Engineering, and Public Policy—a joint committee of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine—has recommended the use of expert review to determine whether a research program is focused on the subjects most relevant to an agency's mission. Under this form of review, experts in related fields as well as potential users of the research evaluate the relevance of research to an agency's mission and goals and its potential value to intended users. OPS has used expert review to help it develop a research agenda that is aligned with its mission and goals. At its November 2001 R&D planning workshop, it asked a variety of experts as well as potential users of research to identify the types of R&D that would be most likely to enhance pipeline safety. Participants included representatives from federal and state agencies with pipeline responsibilities, pipeline companies and their associations, research groups, and technical organizations that set industry safety standards for pipelines. The agency subsequently used the results of this workshop in developing its research agenda, guided by an R&D planning panel composed of key experts from such groups. OPS has also used peer review, a form of expert review, in deciding which R&D proposals to fund, a practice that is recommended by the Committee on Science, Engineering, and Public Policy. OPS's review panels have included representatives from other federal agencies that conduct pipeline R&D, industry associations, and associations of state agencies with pipeline safety responsibilities.

The Pipeline Safety Improvement Act, enacted in December 2002, requires that the Secretary of Transportation consult with a variety of groups in preparing a 5-year plan for pipeline safety R&D, which must be provided to Congress by December 2003. In response, OPS is continuing to involve various experts and stakeholders in its R&D planning. Agency officials have told us that, in preparation for developing this 5-year plan, they are in the process of obtaining updated external views in order to reassess research priorities. This has involved participating in the pipeline R&D planning efforts of industry associations and research organizations, discussing R&D priorities with state agency officials, and reconvening their R&D planning panel of outside experts. In developing the plan, agency officials also plan to consult with OPS's two technical advisory committees. Finally, OPS plans to hold another R&D workshop during the winter of 2003-04.

Coordination among federal agencies that conduct related research helps to avoid duplication and ensure that each agency performs research that is aligned with its particular mission and goals. The Committee on Science, Engineering, and Public Policy has recommended that agencies establish a formal process for coordinating similar fields of research, in order to improve collaboration, help keep important questions from being overlooked, and avoid duplication of effort. Since 2001, OPS has increased efforts to coordinate pipeline R&D with DOE and the Department of the Interior's MMS, both of which also conduct research related to pipelines.
This increased coordination has taken the form of mutual participation in panels that review R&D proposals and workshops to plan R&D activities. According to OPS officials, officials of these agencies have used these opportunities to communicate about their respective pipeline R&D efforts and avoid duplication. However, these agencies have not had a formal mechanism in place that defines each agency's responsibilities for pipeline R&D. The Pipeline Safety Improvement Act requires that the heads of DOT, DOE, and the National Institute of Standards and Technology develop a memorandum of understanding to formally coordinate pipeline R&D efforts. (Although the institute does not operate an R&D program focused on pipelines, the act assigned it a key role in pipeline R&D based on its expertise in materials research.) In response, OPS, DOE, and the institute have developed such a memorandum and are in the process of finalizing it. The Pipeline Safety Improvement Act also requires that DOT coordinate with DOE and the National Institute of Standards and Technology in developing a 5-year plan for pipeline R&D. In response, OPS is involving DOE and the institute, as well as MMS, in efforts to develop such a plan. These agencies are also considering holding joint workshops on pipeline R&D in the future. In addition, OPS and the National Institute of Standards and Technology have started to participate in each other's proposal review panels and are discussing entering into an agreement to have the institute conduct some research on pipeline materials.

We asked a number of key experts and stakeholders for their views on the extent to which OPS's R&D agenda is aligned with its mission and goals. These individuals included officials in DOE and MMS, representatives of four industry associations, a former head of a state agency that regulates gas pipelines, the heads of two leading pipeline R&D organizations, two of the foremost technical experts in pipeline safety, and an environmentalist active in pipeline safety. Six of these individuals have been or are members of OPS advisory committees or R&D planning or review panels. They generally told us that, in their view, the agency has chosen to fund appropriate areas.

The pipeline safety R&D priorities of the experts who completed our questionnaire are generally consistent with OPS's R&D priorities. Of the three main R&D areas that OPS is currently funding, Damage Prevention and Leak Detection received the most scores of high or very high funding priority; Enhanced Operations, Controls, and Monitoring received the second highest number of such scores; and Improved Materials Performance received the third highest number. This ranking corresponds to the relative levels of funding OPS has assigned to these areas, as described in the previous section. However, the experts' level of support for Improved Materials Performance was much lower than that for the other two main R&D areas that OPS is funding. OPS officials told us that they are currently updating their research agenda, using the input of experts and stakeholders, and that they will consider our questionnaire results in this process.

To obtain the views of experts on pipeline safety R&D priorities, we asked 55 experts to complete a questionnaire indicating the funding priority they would assign to various types of pipeline safety R&D, using categories identified as part of OPS's 2001 R&D planning workshop. Table 1 provides a description of the main categories of R&D we asked experts to prioritize.
The first three categories correspond to the main areas of R&D that OPS is currently funding. Although the fourth category—Arctic and Offshore Technologies—was identified as a main area of pipeline R&D at its workshop, OPS decided not to include it as a main area in its R&D agenda. Agency officials told us that they made this decision because R&D related to Arctic and Offshore Technologies was not considered to be a high priority by participants at its workshop and because MMS funds some R&D in this area and is the primary offshore regulator. We did not include Mapping and Information Systems—an area that OPS is currently funding from its R&D budget—as a category for the experts to rate because it was not identified as a main category of R&D at the 2001 workshop.

Figure 4 shows how the 49 experts who completed our questionnaire rated the four categories of pipeline safety R&D. We also asked experts to rate specific types of R&D within each category. (See app. I for how the experts rated specific types of R&D within these main categories and for information on the agency's funding of these specific types of R&D. See app. II for information on our methodology for selecting experts and obtaining their views.)

The experts who completed our questionnaire strongly supported the Damage Prevention and Leak Detection and Enhanced Operations, Controls, and Monitoring categories of R&D as important areas for OPS to fund. Ninety-two percent of the experts (45 of 49) indicated that the Damage Prevention and Leak Detection category should receive high or very high funding priority. Within this category, experts assigned the most scores of high or very high funding priority to the following types of R&D: improvements in the ability of in-line inspection tools, such as "smart pigs," to detect damage and defects (39 of 49), and the development of new technologies, such as the innovative application of ultrasonics, that can be used for inspecting pipelines (38 of 49). Several experts we interviewed highlighted the need to improve methods for detecting damage to pipelines, citing the fact that third-party damage is the leading cause of pipeline accidents. According to both liquid and gas pipeline associations, current inspection tools cannot reliably detect such damage to pipelines.

Eighty percent of the experts (39 of 49) indicated that the Enhanced Operations, Controls, and Monitoring category should receive high or very high funding priority. Within this category, the type of R&D that received the most scores of high or very high funding priority (37 of 49) was the improvement of alternative inspection techniques, called direct assessment, to identify corrosion and other defects in pipelines that cannot accommodate in-line inspection devices known as smart pigs. This is a significant issue for natural gas pipelines because the majority of these pipelines cannot currently accommodate smart pigs, which are typically used to assess the condition of liquid pipelines.

In contrast to the experts' views on the importance of these first two categories, less than one-third of the experts considered the remaining two categories of R&D, Improved Materials Performance and Arctic and Offshore Technologies, to be a high priority for OPS to fund. Thirty-one percent of the experts (15 of 49) assigned scores of high or very high funding priority to the Improved Materials Performance category, and 20 percent (10 of 49) assigned such scores to the Arctic and Offshore Technologies category.
However, within the category of Improved Materials Performance, about half (25 of 49) of the experts indicated that the type of R&D aimed at developing damage- and defect-resistant materials should receive high or very high funding priority. Such materials could be used in the replacement of existing pipe or in the installation of new pipe. One researcher we interviewed noted that such materials are particularly important for the gas pipeline industry, which is expanding its infrastructure in response to increased demands for natural gas. One industry association estimates that the natural gas industry will need to install about 49,500 miles of transmission pipeline from 2001 through 2015 to meet increased demand in the United States. Some differences exist in the views of experts from the following three subgroups: (1) federal and state government and public interest organizations, (2) pipeline industry and technical and consulting organizations, and (3) research organizations. As shown in table 2, experts from all three subgroups generally gave the category of Damage Prevention and Leak Detection the highest ranking, followed by the category of Enhanced Operations, Controls, and Monitoring. However, experts from research organizations considered the categories of Improved Materials Performance and Arctic and Offshore Technologies to be more important for OPS to fund than did experts from the other two subgroups. For example, 70 percent of experts from research organizations (7 of 10) rated Improved Materials Performance as a high or very high priority compared with 19 percent of experts from government and public interest organizations (3 of 16) and 22 percent of experts from pipeline industry and technical and consulting organizations (5 of 23). In addition, 60 percent of the researchers (6 of 10) rated Arctic and Offshore Technologies as a high or very high priority for OPS compared with 19 percent of experts from government and public interest organizations (3 of 16) and only 4 percent of experts from pipeline industry and technical and consulting organizations (1 of 23). An OPS official told us that he believed that researchers rated the Improved Materials Performance category more highly than other experts did because researchers have the best and most current information about the “state of the art” in technology development and are more aware of opportunities in this area. A leading expert from a pipeline research organization noted that the foundation of pipeline R&D has been the development of defect-resistant steels and that, as a consequence, researchers in this area are very interested in R&D that will lead to further improvements in the performance of pipeline materials. He also explained that researchers may have rated the Arctic and Offshore Technologies category more highly than did the other types of experts who completed our questionnaire because researchers may be more aware of the need for such R&D to support the construction of new pipelines in these areas in order to reach new energy supplies. Although OPS has received significant increases in funding for its R&D program in recent years, the agency has not developed a systematic process for evaluating the effectiveness of its R&D program. For example, the agency tracks and disseminates information on the progress of individual R&D projects but has not developed a process for assessing and reporting on the results of its R&D program as a whole.
Such a process is needed to demonstrate the program’s progress toward achieving its objectives, such as the development and use of new technologies that can improve pipeline safety. OPS has taken some preliminary steps toward developing an evaluation process for its R&D program and could benefit from adopting identified best practices for systematically evaluating the outcomes of federal R&D programs. Leading research organizations, the Office of Management and Budget, and GAO have identified a number of such practices, including setting clear R&D goals and measuring progress toward these goals, using expert review to evaluate the quality of research outcomes, and reporting periodically on evaluation results. The results of evaluations can be used to refocus the direction of R&D programs periodically, as necessary, to ensure that resources are most effectively utilized. Although OPS has funded R&D to develop pipeline safety technologies since the mid-1990s, the agency’s efforts to evaluate the outcomes of this R&D have been limited and have focused on individual projects. OPS’s R&D contracts define project goals and require research performers to meet specific milestones for the development of a technology. Contracts also require research performers to report quarterly and at the end of the project on results, including milestones achieved and patents applied for and received. OPS has made some efforts to disseminate the results to date of individual R&D projects. For example, it has started to put “success stories” on its Web site that describe achievements in ongoing projects, such as the development of product prototypes. These success stories help to communicate the results of individual projects to industry and other interested parties. At the program level, OPS has not yet established specific quantifiable goals for its R&D program or a method for measuring progress toward these goals. OPS has indicated, in various planning documents, that its R&D program will help achieve its performance goals of reducing the impacts of pipeline incidents, including fatalities and injuries, and reducing spills of hazardous material. However, agency officials have acknowledged that it is difficult to show the effect of the R&D program on these performance goals. A more immediate objective of the program, according to agency plans, is to promote the transfer of new and improved pipeline safety technologies to the market in the near term. In deciding which R&D proposals to fund, OPS gives preference to those that plan to bring a new product to market within 5 years. In addition, agency officials told us that OPS plans to promote the use of new technologies by providing information to potential users and its state partners about them and, when appropriate, by encouraging their use through regulation. Agency officials told us that the R&D program aims to have 80 percent of its projects result in products on the market within 5 years. Such an objective is specific and measurable, but OPS has not formally established it as a goal in any plan or developed a method for measuring progress toward achieving it. Furthermore, since the agency has not yet established specific goals or outcome measures for its R&D program, it does not have a process for documenting and reporting on the extent to which this program is achieving its goals. 
OPS officials explained that they have not yet developed a process for evaluating the outcomes of the agency’s R&D program because, prior to 2001, the program’s budget was relatively low and, since restructuring the program in 2001, they have focused program efforts on building a process for setting research priorities. However, officials do recognize the need for evaluating R&D outcomes and have taken some preliminary steps toward developing an evaluation process for their R&D program. OPS is considering some possible measures of the outcomes of its R&D program as a whole, such as the number of new patents resulting from R&D efforts. In addition, agency officials told us that, although tracking the transfer to the market of new pipeline safety technologies can be challenging, OPS intends to track the use of new technologies in the future through its process for inspecting operators’ “integrity management” programs. For example, OPS inspectors could document the use of new or improved technologies by companies to evaluate the condition of their pipelines. Agency officials noted that the agency will develop inspection protocols that require inspectors to collect data on the use of new technologies after their proposed integrity management rule for natural gas transmission pipelines is finalized. OPS is also considering the number of documented R&D “success stories”—summaries of the accomplishments of individual R&D projects—as a possible measure of program results. However, in previous reviews of R&D programs operated by other federal agencies, we have found that the success story approach is selective and does not adequately assess programwide performance. In early June 2003, OPS presented a potential set of performance measures for its R&D program to its R&D planning panel of outside experts in order to obtain their views on these measures. This panel includes representatives of DOE, MMS, the National Institute of Standards and Technology, pipeline industry associations, state agencies with pipeline responsibilities, and a key pipeline research organization. OPS intends to refine its set of measures based on comments received from this panel and to continue obtaining the views of this panel as it moves forward in developing an evaluation process for its R&D program. Finally, OPS officials also told us that the agency intends to obtain the views of experts on its R&D outcomes as well as on its future R&D priorities at its next R&D workshop, scheduled for the winter of 2003-04. However, OPS is in the beginning stages of planning this workshop and has not defined a process for using experts’ views to evaluate the outcomes of its R&D program. OPS officials told us that they are considering including information on the effectiveness of the agency’s R&D program in the annual reports to Congress on pipeline R&D that the agency is required to submit, starting in December 2003. The Pipeline Safety Improvement Act requires that DOT, DOE, and the National Institute of Standards and Technology jointly provide these annual reports to Congress, but does not fully specify what types of information should be included in these reports. Since OPS is in the beginning stages of developing an evaluation process for its R&D program, it could benefit from adopting best practices for systematically evaluating federal R&D programs. Leading organizations that conduct scientific and engineering research, the Office of Management and Budget, and GAO have identified a number of these best practices.
Although the uncertain nature of research outcomes over time can make it challenging to demonstrate the results of such R&D programs, these practices are designed to enable agencies to systematically assess and report on these results regularly in accordance with the Government Performance and Results Act of 1993. These assessments can be used to refocus the direction of R&D programs periodically, as necessary, to ensure that resources are most effectively utilized. Identified best practices are discussed in the following sections. We have previously reported that, to be effective, any R&D program must be directed toward a clear, measurable goal. Such goals help ensure a direct linkage between R&D program efforts and an agency’s overall performance goals and mission. Applied research programs, such as OPS’s R&D program, are directed toward achieving specific useful outcomes, such as the development of new technologies, which can help accomplish agency performance goals. The Committee on Science, Engineering, and Public Policy recommended in a 1999 report that agencies operating applied research programs measure progress toward practical outcomes and noted that such measurement can usually be performed annually using milestones. Similarly, in May 2002 the Office of Management and Budget established investment criteria for federal R&D programs that require these programs to clearly define goals and track progress toward these goals using appropriate outcome measures and interim milestones. Indicators that have been used to measure the outcomes of R&D include the achievement of specific targets for developing new or improved technologies and patent applications filed and granted. However, measuring research outcomes can be challenging. For example, outcomes may not occur for a number of years and may be difficult to track. In its 1999 report and again in 2001, the Committee on Science, Engineering, and Public Policy recommended the use of expert review, supplemented by quantitative methods, to evaluate research regularly. Expert review can be a useful addition to performance measures because of the value of the reviewers’ deep knowledge of the field. Such review can be performed on a somewhat longer term basis, rather than annually, and does not require that the final impact of the research be known. Peer review, a form of expert review, includes an independent assessment of the technical and scientific merit or quality of research by peers with essential subject matter expertise and perspective equal to that of the researchers. In 1999, we reported that some federal agencies, such as the Department of Agriculture, the National Institutes of Health (NIH), and DOE, use peer review to help them evaluate the performance of programs and determine whether to continue or renew research projects. The Committee on Science, Engineering, and Public Policy reported in 2001 on the use of expert review, including peer review, by NIH, DOE, the National Science Foundation, the Department of Defense, and the National Aeronautics and Space Administration to evaluate the quality of their research programs. These agencies used varying methods for carrying out this review, including convening panels of experts who use defined evaluation processes and obtaining the views of external advisory committees. 
The Committee on Science, Engineering, and Public Policy has also noted that expert evaluation of applied research programs requires the input of potential users of the results of the research, since the ultimate usability of these results is an important factor in determining the worth of the research. Similarly, key experts and stakeholders we interviewed noted that the degree to which new technologies are actually used would be a good indication of the effectiveness of OPS’s R&D program. One industry association representative we interviewed noted that a “constant theme” raised by pipeline companies is the need for R&D efforts to produce new technologies that they can actually use in operating their pipelines. Periodic reporting by applied research programs on results can help keep key stakeholders—including oversight organizations and potential users of new technologies—up to date on program accomplishments. According to the Committee on Science, Engineering, and Public Policy, applied research programs can usually report annually on progress in meeting milestones. In addition, a retrospective analysis over several years is needed to evaluate outcomes that take more than 1 year to emerge. The committee has also recommended that agencies demonstrate the value of their review processes by publicly describing them to oversight groups, the potential users of research results, and the general public. One expert we interviewed stressed the importance of periodic public reporting by OPS on research goals and outcomes and on the method for evaluating outcomes, in order to disseminate research results and build support for its R&D program. OPS has made significant progress in establishing a pipeline safety research agenda that is aligned with its mission and goals and that incorporates the views of experts and stakeholders. However, without a systematic process for evaluating the outcomes of its R&D program, the agency is not able to demonstrate that it is effectively using its increased resources for R&D to foster new and improved technologies that can enhance pipeline safety. Identified best practices for evaluating federal R&D programs—including setting clear quantifiable R&D goals and measuring progress toward these goals, using expert review to evaluate the quality of research outcomes, and reporting periodically on evaluation results—can guide OPS as it moves forward in developing an evaluation process for its program. By following such practices, the agency can help ensure that it develops a systematic evaluation process that will enable it to determine and demonstrate the results of its investment in pipeline safety R&D. OPS could use such an evaluation process to periodically refocus the direction of its program in order to make the most effective use of resources. Furthermore, although the Pipeline Safety Improvement Act’s requirement for annual reports on pipeline R&D, starting in December 2003, does not specify in detail what information should be included in these reports, this requirement provides an opportunity for the agency to keep Congress informed about the results of evaluations of its R&D program. In addition, such reporting, along with other communication methods already in use by the agency, can keep other interested parties—including the pipeline industry, state pipeline safety agencies, pipeline safety advocates, and researchers—up to date on the program’s progress in advancing the most promising pipeline safety technologies.
To improve OPS’s ability to demonstrate the effectiveness of its R&D program and make the most effective use of program funds, we recommend that the Secretary of Transportation direct OPS to (1) develop a systematic process for evaluating the outcomes of its R&D program that incorporates identified best practices and (2) include in the annual reports to Congress, which are required by the Pipeline Safety Improvement Act, information on the results of R&D evaluations. We provided DOT with a draft of this report for review and comment. DOT officials, including OPS’s Director of Program Development, provided oral comments on the draft on June 13, 2003. The officials generally agreed with the report’s findings and conclusions. They emphasized that they are starting to develop a framework for evaluating the effectiveness of their pipeline safety R&D program and that they intend to finalize this framework by December 2003 by documenting it in the 5-year plan and first annual report on pipeline R&D that DOT is required to submit to Congress, jointly with DOE and the National Institute of Standards and Technology. They also noted that they agree with and intend to implement our recommendations and provided some technical clarifications, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of Transportation, the Administrator of the Research and Special Programs Administration (RSPA), RSPA’s Associate Administrator for Pipeline Safety, the Director of the Office of Management and Budget, and appropriate congressional committees. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Individuals making key contributions to this report are listed in appendix III. We asked selected experts to review the following descriptions of specific types of pipeline safety research and development (R&D) and assign a funding priority to each, based on its importance in achieving the Office of Pipeline Safety’s (OPS) mission of ensuring the safe, reliable, and environmentally sound operation of the nation’s pipeline transportation system. Experts used the following scale: 1=little or no funding priority, 2=some funding priority, 3=moderate funding priority, 4=high funding priority, and 5=very high funding priority. Experts could also indicate that they did not know or had no basis to judge the funding priority for a particular type of R&D. The following table shows, for each type of R&D, the number of experts who assigned it a high or very high funding priority and OPS’s current and planned allocation of funding to it. A total of 49 experts completed our questionnaire. Allocated $592,500 to five projects in November 2002 for periods of 9 to 24 months. Allocated $500,000 to one project in November 2002 for a period of 24 months. Allocated $182,000 to one project in April 2001 for a period of 12 months. Requested proposals in March 2002 but did not fund any of those received. Requested additional proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in December 2002 and plans to make funding decisions in July 2003. Allocated $534,521 to two projects in July 2002 for periods of 23 to 24 months. Requested proposals in March 2002 but did not fund any of those received. Requested additional proposals in December 2002 and plans to make funding decisions in July 2003.
Requested proposals in March 2002 but did not fund any of those received. Requested additional proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in March 2002 but did not fund any of those received. Requested additional proposals in December 2002 and plans to make funding decisions in July 2003. Allocated $600,000 to one project in April 2001 for a period of 12 months. Allocated an additional $600,000 to this project in April 2002 for an additional 12 months. Plans to allocate an additional $600,000 to this project in May 2003 for an additional 12 months. Allocated $572,000 to two projects in January 2003 for periods of 12 to 26 months. Allocated $297,000 to one project in January 2003 for a period of 26 months. Allocated $275,000 to one project in January 2003 for a period of 12 months. Allocated $675,281 to four projects in May and July 2002 for periods of 12 to 24 months. Allocated $97,737 to three projects in May 2002 for a period of 12 months. Allocated $70,000 to an additional project in January 2003 for a period of 24 months. Requested additional proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in June 2002 but did not fund any of those received. Requested additional proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in December 2002 and plans to make funding decisions in July 2003. Allocated $98,680 to one project in November 2002 for a period of 6 months. Requested additional proposals in December 2002 and plans to make funding decisions in July 2003. Requested proposals in December 2002 and plans to make funding decisions in July 2003. Allocated $7,781 to one project in May 2002 for a period of 12 months. Allocated $50,000 to one project in May 2001 for a period of 12 months. Allocated $59,955 to one project in May 2002 for a period of 12 months. Requested proposals in December 2002 and plans to make funding decisions in July 2003. To perform our work, we reviewed Office of Pipeline Safety (OPS) documentation on its research and development (R&D) funding and analyzed this information to identify trends; reviewed pertinent legislation and agency documents pertaining to the R&D program; and interviewed OPS officials regarding their R&D funding, agenda-setting processes, and processes for evaluating the outcomes of their R&D program. We also interviewed key experts and stakeholders concerning OPS’s management of its R&D program, including the alignment of the agency’s research agenda with its mission and goals, and their views on R&D priorities and gaps. These individuals included officials of the Department of Energy (DOE) and the Department of the Interior’s Minerals Management Service (MMS) who are responsible for pipeline R&D; representatives of pipeline industry associations and leading pipeline research organizations; and several key experts in pipeline safety.
Also, we identified best practices for evaluating the outcomes of federal R&D through a review of relevant literature and compared the agency’s processes with these practices. To determine the views of experts on pipeline safety R&D priorities, we sought to identify experts considered to be very knowledgeable about the development of new pipeline safety technologies or pipeline safety issues. To identify appropriate experts, we obtained recommendations from key organizations on individuals to contact, contacted those individuals, and asked them to recommend additional experts. We identified initial individuals to contact through prior work on pipeline safety issues or through recommendations from OPS. These initial contacts included officials in DOE and MMS, representatives of four industry associations, a former head of a state agency that regulates gas pipelines, the heads of two leading pipeline R&D organizations, two technical experts in pipeline safety, and an environmentalist active in pipeline safety. Six of these individuals have been or are members of OPS advisory committees or R&D planning or review panels. We obtained recommendations from these individuals on experts who could provide us with views on pipeline safety R&D priorities. We based our final selection of experts on the criteria of knowledge, balance, and independence. We considered indications of their extent of knowledge of pipeline safety R&D, as evidenced by the number of times they had been recommended, their participation in OPS’s R&D planning and review activities, or other relevant factors. We included individuals from a variety of groups in order to achieve a balanced representation of experts, including some who are relatively independent of OPS and the pipeline industry. We included individuals from federal and state agencies, pipeline safety advocacy groups, industry associations, pipeline companies, technical and consulting organizations, and research organizations. We also provided our list of identified experts to the National Academy of Sciences and OPS for their review and comment. We contacted 55 individuals whom we had identified as appropriate experts for our review and asked them to complete a questionnaire indicating their views on pipeline safety R&D priorities. Forty-nine individuals responded, for an 89 percent response rate. Our results pertaining to experts’ views on R&D priorities represent the views of only the experts who responded to our questionnaire. In a number of cases, these individuals collaborated with others in their organizations in completing their questionnaires. Listed below are the organizational affiliations of experts who completed our questionnaire.
Government and Public Interest Organizations

Federal Agencies: Federal Energy Regulatory Commission; Minerals Management Service, Department of the Interior; National Institute of Standards and Technology, Department of Commerce; National Transportation Safety Board; Office of Fossil Energy, Department of Energy.

State Agencies and Associations: National Association of Pipeline Safety Representatives; National Association of Regulatory Utility Commissioners; New York State Department of Public Service; Railroad Commission of Texas; Virginia State Corporation Commission; Washington Utilities and Transportation Commission.

Pipeline Safety Advocacy Groups: Common Ground Alliance; Cook Inlet Keeper; Safe Bellingham.

Pipeline Industry and Technical/Consulting Organizations

Industry Associations: American Gas Association; American Petroleum Institute; Association of Oil Pipelines; Interstate Natural Gas Association of America.

Pipeline Companies: BP Pipelines, North America; ConocoPhillips; CMS Panhandle Companies; Duke Energy; El Paso Corporation; Enbridge Pipelines; Enron; Explorer Pipeline Company; ExxonMobil Pipeline Company; KeySpan Energy; Peoples Energy.

Technical/Consulting Organizations: Accufacts, Inc.; Batten and Associates, Inc.; Duckworth Pipeline Integrity Services, Inc.; HSB Solomon; Kiefner and Associates, Inc.; National Association of Corrosion Engineers.

Research Organizations

Advantica, Inc.; Battelle; CFER Technologies; Edison Welding Institute; Gas Technology Institute; Ohio State University, Fontana Corrosion Center; Pipeline Research Council International, Inc.; Southwest Research Institute; Texas A&M University, Department of Mechanical Engineering; University of Florida, Department of Chemical Engineering.

In the questionnaire, we asked respondents to review descriptions of various main categories of pipeline safety R&D as well as specific types of R&D within these main categories and indicate what funding priority they would assign to each. (See table 1 for descriptions of the main R&D categories. See app. I for descriptions of the types of R&D within these main categories.) We based the R&D categories and descriptions on materials prepared as part of an R&D planning workshop held by OPS in 2001, in which a variety of experts and stakeholders participated; on announcements the agency subsequently issued soliciting proposals for R&D in various areas; and on other OPS documents related to pipeline safety R&D. We compiled the scores obtained from the questionnaires to produce a ranking of R&D priorities representing the views of the experts who completed our survey. We also analyzed our results to determine whether any differences existed in the responses of experts from the three subgroups: government and public interest organizations, industry and technical and consulting organizations, and research organizations. In addition, we identified organizations that had bid on R&D funding from OPS in fiscal year 2002 and conducted a separate analysis of the responses of experts from these organizations to determine how they compared with those of other experts who completed our questionnaire. Seven of the experts who completed our questionnaire are from organizations that had bid on OPS R&D funding within this time frame. We conducted our work from January 2003 through June 2003 in accordance with generally accepted government auditing standards.
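The compilation of questionnaire results described above is a straightforward tally. As a minimal sketch only, using entirely hypothetical responses rather than our actual survey data, the following Python code illustrates how ratings on the 1-to-5 scale could be compiled into the statistic reported throughout this report: the share of experts assigning a rating of high (4) or very high (5) funding priority to each category. The subgroup labels, the abbreviated category names, and the high_priority_shares helper are illustrative assumptions, not part of our methodology.

```python
# Illustrative sketch only -- hypothetical data, not GAO's survey results.
# Ratings use the report's 1-5 scale; None marks a "don't know" response.
from collections import defaultdict

# Each response: (subgroup, {category: rating}).
responses = [
    ("government/public interest", {"Damage Prevention and Leak Detection": 5,
                                    "Improved Materials Performance": 3}),
    ("industry/technical-consulting", {"Damage Prevention and Leak Detection": 4,
                                       "Improved Materials Performance": None}),
    ("research", {"Damage Prevention and Leak Detection": 5,
                  "Improved Materials Performance": 4}),
]

def high_priority_shares(responses):
    """Return, per category, the share of usable ratings that were 4 or 5."""
    high = defaultdict(int)    # count of high/very-high ratings per category
    total = defaultdict(int)   # count of usable (non-"don't know") ratings
    for _, ratings in responses:
        for category, rating in ratings.items():
            if rating is None:          # skip "don't know/no basis to judge"
                continue
            total[category] += 1
            if rating >= 4:
                high[category] += 1
    return {c: high[c] / total[c] for c in total}

# Rank categories by the share of high or very high ratings, as in figure 4.
for category, share in sorted(high_priority_shares(responses).items(),
                              key=lambda kv: kv[1], reverse=True):
    print(f"{category}: {share:.0%} rated high or very high")
```

The subgroup comparisons shown in table 2 follow the same pattern: filter the responses on the subgroup tag (for example, keeping only those labeled "research") before computing the shares.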
In addition to those named above, Sharon Dyer, Etana Finkler, Judy Guilliams-Tapia, Brandon Haller, Bert Japikse, Nancy Kingsbury, Donna Leiss, Gary Stofko, Ron Stouffer, and Stacey Thompson made key contributions to this report.
From 1998 through 2002, a total of 1,770 pipeline accidents occurred, resulting in 100 fatalities and $621 million in property damage. The Office of Pipeline Safety (OPS) within the Department of Transportation operates a research and development (R&D) program aimed at advancing the most promising technologies for ensuring the safe operation of pipelines. In fiscal year 2003, OPS received $8.7 million for its R&D program, a sevenfold increase since fiscal year 1998. In response to a directive from the House Committee on Appropriations, GAO (1) assessed OPS's distribution of funding among various areas of R&D and the alignment of this funding with its mission and goals, (2) surveyed experts to obtain their views on R&D priorities, and (3) determined how OPS evaluates R&D outcomes. OPS distributes its R&D budget among four main areas. For example, in fiscal year 2003, the office plans to allocate its $8.7 million budget as follows: 46 percent ($4.0 million) to developing new technologies to prevent damage to pipelines and prevent leaks; 21 percent ($1.9 million) to improving technologies for operating, controlling, and monitoring the condition of pipelines; 19 percent ($1.7 million) to improved pipeline materials, such as materials that are resistant to damage and defects; and 14 percent ($1.2 million) to efforts to improve data on the location and safety performance of pipelines. On the basis of our work, we believe that OPS's R&D funding is generally aligned with its mission and pipeline safety goals. OPS has taken a number of steps to ensure this alignment. For example, it obtained the views of a variety of experts and stakeholders in deciding on its R&D priorities and has described in various plans how its R&D efforts can lead to new and improved technologies that can help achieve its safety performance goals, such as reducing the impacts of pipeline accidents. The pipeline safety R&D priorities of the experts we surveyed are generally consistent with OPS's R&D priorities. For example, most assigned a high priority to the two areas of R&D that receive the highest amount of funding from OPS. OPS's efforts to evaluate the outcomes of its R&D have been limited. The agency has taken some preliminary steps toward developing an evaluation process for its R&D program, such as identifying possible measures of program results. Leading research organizations, the Office of Management and Budget, and GAO have identified a number of best practices for systematically evaluating the outcomes of federal R&D programs, such as setting clear R&D goals, measuring progress toward goals, and reporting periodically on evaluation results. These best practices can help OPS to determine the effectiveness of its R&D program in achieving desired outcomes, such as the development and use of new and improved technologies that can enhance pipeline safety.
Overall, the Service’s financial condition has continued to decline. Large deficits continue as volumes and revenues decline; rates and debt are spiraling upward; capital needs are going unmet; and the Service’s liabilities exceed its assets. Despite multiple rate increases, the Service’s net income continually declined from fiscal year 1995 through fiscal year 2001 (see fig. 1). The rate increase implemented in 1995 averaged 10.2 percent and was the largest percentage rate increase during this period. Costs, in general, have been difficult to reduce in the short term since the Service has high fixed costs, such as 6 days per week delivery of mail to approximately 138 million addresses—a figure that grows by nearly 2 million annually—and maintenance of a national retail infrastructure of 38,000 post offices, branches, and stations. The Service is also nearing its $15 billion statutory debt limit. To conserve cash, it is cutting back on its capital outlays, which will hinder modernization of the Service’s infrastructure. In its budget prepared before September 11, the Service estimated that it would incur a $1.35 billion deficit in fiscal year 2002, and recently updated its deficit estimate to approximately $1.5 billion. The Service reported almost a $200 million deficit for the first 2 quarters of fiscal year 2002 combined. The Service recently estimated that it would lose an additional $400 million to $800 million in the third quarter. However, the Service has not updated its outlook for the fourth quarter, which has a budgeted $1.4 billion deficit. It should also be noted that mail volumes in the third and fourth quarters might fall below the budget targets if current trends persist. Further, productivity increases continue to be difficult to achieve and sustain. During the first 2 quarters of fiscal year 2002, productivity fell below budgeted targets. For example, in the first quarter, which was affected by the extraordinary events of last fall, productivity fell by 1.1 percent, compared with a budgeted 1 percent increase. In the second quarter, productivity rose by 1 percent but was budgeted to rise by 1.5 percent. Productivity targets for the remainder of this fiscal year are budgeted to decline. On the other hand, the fourth quarter deficit is estimated to be partly offset by about $1 billion in additional revenues from the rate increase scheduled to take effect June 30, 2002. Thus, the Service will have implemented multiple rate increases since January 2001 (see table 1). The scheduled increase averages 7.7 percent for all rates. In addition, the price of a First-Class stamp will increase by 3 cents—an 8.8 percent increase. Despite this rate increase, the Service is headed for its third consecutive annual deficit. The Service’s poor financial outlook for fiscal year 2002 was compounded by further declines in mail volume in the wake of incidents of terrorism, including anthrax in the mail, and the economic slowdown. Total mail volume declined 4.5 percent in the first 2 quarters of fiscal year 2002, compared with the first 2 quarters in the previous fiscal year, while total revenues declined 0.4 percent—a decline that was mitigated by rate increases implemented in January and July 2001 that averaged a cumulative 6.2 percent. Mail volumes declined in the first 2 quarters of fiscal year 2002 for First-Class Mail, Priority Mail, Standard Mail (primarily advertising), and Periodicals (see fig. 2), leading to little revenue growth or declining revenues in each of these categories.
Only time will tell how much of the recent volume decline is temporary and how long it will last. (See app. II for details on mail volumes and revenues in the first 2 quarters of fiscal year 2002.) On the positive side, the Service has budgeted and achieved significant cost cutting in fiscal year 2002. For the first 2 quarters of fiscal year 2002, the Service reported that its total costs were 2.7 percent below its budgeted estimates. The Service reported that it reduced budgeted costs by decreasing the number of career employees and by reducing work hours, including overtime. The Service had nearly 16,000 fewer career employees at the end of the second quarter, compared with the same period for fiscal year 2001, a decline of about 2 percent. Likewise, total work hours—including both career and noncareer employees—fell by nearly 40 million in the first 2 quarters, a decline of 5.1 percent. Service officials have said that these workhour savings were achieved in part because the Service had less mail to deliver than it did a year ago, and in part through efforts to improve the Service’s efficiency. For example, mail processing work hours fell by 8.1 percent in the first 2 quarters of fiscal year 2002, compared with the same period in fiscal year 2001, a gain aided by initiatives such as deployment of more efficient machines to sort flat-sized mail (e.g., large envelopes and periodicals). To make further progress in improving efficiency, the Service could explore issues related to having sufficient flexibility to redeploy staff as mail volumes fluctuate. In addition to financial difficulties, the Service has also experienced some slippages in service performance. Although the Service has maintained high service levels for delivery of overnight First-Class Mail, its on-time delivery scores for 2-day and 3-day First-Class Mail have generally declined since fiscal year 1999. For example, on-time delivery of 2-day First-Class Mail in the first quarter declined annually from a peak of 86 percent in fiscal year 1999 to 82 percent in fiscal year 2002. Likewise, on-time delivery of 3-day First-Class Mail in the first quarter declined annually from a peak of 87 percent in fiscal year 1999 to 72 percent in fiscal year 2002. Similar, but less pronounced, trends applied in the second quarter over this period. The most recent available data for the second quarter of fiscal year 2002 show on-time delivery of 82 percent for 2-day First-Class Mail and 74 percent for 3-day First-Class Mail. (See app. III for detailed service performance data.) Security restrictions imposed after September 11 barred First-Class Mail weighing over 16 ounces from transportation on commercial airlines, so the Service has increased its reliance on trucks. It is unclear whether these shifts in transportation modes have contributed to the continuing erosion of on-time delivery performance for other types of mail, such as Priority Mail. The Service’s capital investment program continues to be severely limited by the Service’s financial problems. The Service budgeted $2.2 billion for capital outlays in fiscal year 2002, down from $2.9 billion in fiscal year 2001, and $3.3 billion in fiscal year 2000 (see fig. 3). Budgeted capital cash outlays for fiscal year 2002 are at the lowest level since fiscal year 1995. The Service has continued its capital freeze for most facility investments to save cash and limit debt, resulting in a growing backlog in planned facilities.
Limitations on capital investment may have a number of detrimental effects such as deterioration of the Service’s existing physical infrastructure, deferred efficiency gains, and higher future capital costs. Looking forward, the gap between resources and capital investment needs would be exacerbated by the Service’s plans to continue automation efforts, deploy an “information platform” to provide better information on postal operations and the status of mail, and implement any modernization or restructuring of the Service’s infrastructure. Another concern is that the Service has continued to rely on debt to finance its capital program. This trend cannot continue once the Service reaches its $15 billion statutory debt limit. The Service’s debt is budgeted to increase by $1.6 billion and reach $12.9 billion by the end of fiscal year 2002, only $2.1 billion below the $15 billion statutory debt limit. Even the remaining $2.1 billion in borrowing authority may not be available for capital investment in future years, since fiscal prudence might suggest stabilizing debt below the statutory limit to maintain liquidity. Further, the Service has said that its goal is to reduce debt, which might preclude the use of additional debt to finance capital investment. The Service’s Transformation Plan stated that “Since cash flow from operations is linked to net incomes (or losses), stabilizing and reducing debt will require that the Postal Service recover its prior years losses and carefully plan its capital cash outlays so they do not exceed cash flow. As the past two fiscal years have demonstrated, the Postal Service cannot simultaneously generate net losses and reduce its borrowings.” Looking ahead, expenses related to enhancing mail safety and security are a key unknown cost factor. To date, the Service has relied primarily on congressional appropriations to finance capital investment in measures designed to improve mail safety and security. However, uncertainties remain regarding the technologies to be deployed, the associated capital costs, the subsequent impact on operating costs and postal operations, and the extent to which Congress will pay for these costs in the future. The price tag is likely to be substantial, with the Service requesting about $800 million in supplemental appropriations for fiscal year 2003 to improve mail safety and security. This request is in addition to the $675 million that was appropriated in fiscal years 2001 and 2002 for security purposes. Another uncertainty that may affect the Service’s capital program involves its request for nearly $1 billion in congressional appropriations for revenue foregone, which the Service has said could be used to finance some capital facility projects. Specifically, the Service has proposed accelerating payments for revenue foregone from $29 million annually through 2035 to a single lump-sum payment in fiscal year 2003—a change that would increase the net present value of appropriations received by the Service for this purpose. Congress did not act on a similar proposal last year and has not acted on the Service’s latest request. In the short term, the Service may have to rely primarily on cutting costs and raising rates to address its financial problems. The Service’s Transformation Plan identified numerous short-term steps the Service plans to take under its existing authority to cut costs and improve productivity.
Regarding rate increases, an above-inflation rate increase averaging 7.7 percent is scheduled to take effect June 30, 2002, including a 3-cent increase in the price of a First-Class stamp from 34 to 37 cents. However, raising rates may cause mail volumes to decrease and encourage mailers to shift more mail to electronic and other delivery alternatives. Although the Service plans to hold rates steady from June 2002 until calendar year 2004, pressures to increase rates will continue in the long term to cover rising expenses, such as wage increases and growing long-term obligations. The Service’s total liabilities on its balance sheet were $61 billion, which exceeded total assets by $2.3 billion at the end of fiscal year 2001. These liabilities include $32 billion for pensions, $6 billion for workers’ compensation benefits, and $11 billion for debt to the Treasury. In addition, the Service has other major obligations estimated at $49 billion for post-retirement health benefits. These liabilities and obligations amount to almost $100 billion and threaten the Service’s ability to continue to fulfill its mission by providing the current level of universal postal services at reasonable rates on a self-supporting basis. In the long term, the Service’s Transformation Plan recognizes that the Service’s basic business model is not sustainable and that much larger declines in mail volume may be in the offing if mailers increasingly shift to various electronic and other alternatives. Both the Service and we agree that some progress is possible within the current structure, but that a comprehensive postal transformation will be required to fully address the Service’s financial viability and the statutory framework under which the Service operates. In our view, modest tinkering with the existing system will be insufficient to produce a lasting comprehensive transformation that will enable the Service to fulfill its mission in the 21st century. The time has come for comprehensive and fundamental reform. As we have stated previously, this will likely require a special commission to address the most difficult and controversial issues (e.g., defining universal service and infrastructure rationalization). Given the Service’s deteriorating financial situation, progress on comprehensive transformation is urgently needed, and the Transformation Plan has made a valuable contribution by identifying numerous specific steps for making improvements within the current structure. The Service is to be commended for raising controversial issues in its Transformation Plan and taking positions on the changes that it believes are necessary. The Service’s Transformation Plan conveyed a needed sense of urgency when it stated that over the next 2 to 3 years, it is vital that significant progress be made toward defining the long-term structure and role of the Postal Service. To that end, the plan made a range of recommendations to deal with transformation issues through near-term regulatory and legislative reforms and long-term legislative solutions. For the near term, the plan recommended changes that would give the Service more flexibility in ratemaking, facility closings, purchasing, labor negotiations, and other employment areas.
For long-term change, the plan outlined three options and noted the Service’s preferred option—a “Commercial Government Enterprise.” Under this option, the Service would remain an independent establishment of the federal government but would be structured and operated in “a much more businesslike manner.” In addition, the plan contained useful discussions in detailed appendixes, such as how foreign postal administrations have dealt with similar postal reform issues. Although the Transformation Plan dealt with many difficult issues, it did not include an adequate discussion of specific plans or proposals related to some key transformation issues, including the following: the future nature of universal postal service, including its retail and delivery components, and the associated infrastructure; several key human capital issues such as postal pay comparability, performance incentives, labor-management relations, workforce realignment, and management bonus arrangements; various governance, accountability, and transparency issues; and a detailed action plan and recommendations on what mechanisms would be best suited for making progress on certain transformation issues beyond the Service’s direct control. Most importantly, the plan recognized the need for defining universal service but declined to propose a definition of future universal postal retail and delivery services for consideration. More clarity about the scope and quality of universal postal services is needed to facilitate consideration of a range of critical infrastructure and human capital issues. Further, although the Transformation Plan recommended that Congress give the Service much more flexibility, particularly in the ratemaking and new products areas, it is important that any additional flexibility be coupled with an appropriate level of transparency and accountability—issues that the Transformation Plan had less to say about. Because these issues are also critical to postal transformation, I will offer some brief observations about them in this testimony. Our recently issued report contains a more comprehensive discussion of these and other transformation issues. Vast changes in the communications and delivery sectors over the past 30 years—which are continuing at a rapid pace—as well as the Service’s growing financial difficulties, provide an impetus for reconsidering what universal postal services will be needed for the 21st century. Key issues include what postal services should be provided on a universal basis to meet customer needs, how these services should be provided, and how they should be financed—by ratepayers or taxpayers. Some related issues include what quality of universal postal service should be maintained—such as the frequency and speed of mail delivery and the accessibility and scope of retail postal services—and whether certain aspects of universal postal service should be allowed to vary in urban and rural areas. In this regard, it will be important to understand the current situation and opportunities for improvement. The Service is planning to conduct an assessment of its retail, mail processing, and transportation networks that is likely to provide useful information to Congress and stakeholders, including the public, on areas where service may be redundant, as well as areas where more or better service may be needed. Benefits from reassessing universal postal service might include maximizing the use of facilities and reducing costs while also improving service.
This could be accomplished by providing more points of service, improved hours of access, and greater customer convenience for some postal retail services while reducing their cost, compared with more traditional delivery of these services through “brick and mortar” facilities. For example, the Transformation Plan contained useful discussion about ways to enhance access and reduce the cost of some routine postal services, such as providing stamp sales at grocery stores and through ATMs, making vending machines for stamp purchases available 24 hours a day, and deploying self-service equipment that can be used to mail packages while reducing the anonymity of this mail. We recognize that universal postal service issues are highly sensitive, given the long-standing role that the Service plays in providing essential communications and delivery services to communities across the nation. To make progress in modernizing the infrastructure to support universal postal service—such as the national network of post offices that provide universal access to postal retail services—it will be important for the Service to engage in frank and open discussions with all stakeholders, including the Congress, on issues related to universal postal retail and delivery service. Rationalizing the Service’s infrastructure may entail closing or consolidating certain facilities where there is excess capacity while adding new facilities to address unmet needs, such as in growing areas. Given the difficulty of these issues, Congress could establish a mechanism similar to that used for closing military bases to make progress in this important area. Such a process has been used to overcome public concern about the economic effects of closures on communities and the perceived lack of impartiality of the decision-making process. Under this process, Congress could consider a proposed package of closures and consolidations with an up-or-down vote. Strategic human capital approaches must be at the center of efforts to transform the culture of federal entities, including the Postal Service. Like those of the rest of the federal government, the Service’s human capital challenges are long-standing and will not be quickly or easily addressed. To link human capital strategies to accomplishing organizational goals and objectives, we have developed a model of strategic human capital management. This model may be useful for the Service as it develops its strategic human capital planning, including a long-term workforce plan. Such strategies would address workforce realignment, aligning individual performance with organizational objectives, performance incentives, and pay comparability. Making changes to the Service’s human capital, or workforce, will involve dealing with legal requirements and practical constraints. For example, the Service is required by law to maintain employee compensation and benefits on a standard comparable to the compensation and benefits paid for comparable levels of work in the private sector. In addition, when contract disputes cannot be settled between postal labor and management, they must be settled by a third party through binding arbitration. Further, as a practical matter, labor unions and management within the Service have had long-standing adversarial relations.
As an example of these limitations, the Service and its major employee unions have often disagreed about how the pay comparability standard should be applied and presented voluminous and contradictory evidence when they have taken this matter to binding arbitration. In addition to compensation, labor-management differences have extended to performance management issues involving incentives and benefits as well as deployment and use of the workforce. Performance management systems can include pay systems and incentive programs that link employees’ performance to specific results and desired outcomes. In this regard, the Transformation Plan recognized the need for a performance-based culture, noted that continuing to improve efficiency and customer value is contingent on exceptional performance by the Service’s employees, and addressed plans for a new performance management system for managers. However, the plan did not discuss how performance-based compensation and incentive systems might cascade throughout the organization—an issue that Service managers and unions have repeatedly disagreed on in the past. For transformation to be successful, it is vital for the Service and its unions to share a common vision for the future and a shared responsibility for finding solutions to the Service’s financial and workforce problems. As I testified before this subcommittee in March, committed, sustained, and inspired leadership and persistent attention will be essential if lasting changes are to be made in the human capital area. In that vein, the postmaster general, postal officials, and leaders of postal labor unions and management associations demonstrated a positive and constructive approach by holding daily meetings last fall to deal with issues related to mail safety and security. The recent announcement of a tentative negotiated settlement of contract talks between the Service and the National Association of Letter Carriers was another positive example. These parties also recently agreed on steps to streamline grievance and arbitration procedures to limit the number of unresolved issues at the local level and reduce the time spent handling such disputes. These are positive steps that provide a foundation on which to build; however, much remains to be done. Congress has recently been focusing significant attention on corporate governance, transparency, and accountability issues in light of the collapse of Enron. Recent events have raised a range of questions regarding what can happen when one or more key players fail to adequately perform their responsibilities. I want to underscore that serving on a board of directors is an important and difficult responsibility that requires being knowledgeable about the industry and finances, asking the right questions, and doing the right thing to protect the public interest. This responsibility is especially challenging in directing the Service, which is facing increasing competition in a rapidly changing market environment. In addition, the board’s audit committee has an important role to play in ensuring fair presentation and appropriate accountability of management in connection with financial reporting, internal control, compliance, and related matters. We believe that a range of governance issues needs to be addressed as part of the Service’s transformation plan.
However, the Service’s transformation plan had little to say on these matters other than proposing that the Service be transformed into a Commercial Government Enterprise that would act much more like a business, and, as part of that proposal, its board of governors would be “refocused on fiduciary duties.” Under its current framework, the Service is intended to function in a businesslike manner, which raises the following questions related to its governance structure: What type of governing board would be most appropriate considering the Service’s size, importance, and challenges? How should board members, including the postmaster general and deputy postmaster general, be selected, paid, and held accountable? What should be the roles and functions of the governing board, and is its current part-time status appropriate? Is the present governance structure best suited to selecting well-qualified individuals to direct a $70 billion entity? Or should the framework follow recent changes in the private sector to (1) develop better-defined criteria for board membership and (2) recognize that various roles on the board may require certain specific backgrounds and skills? Transparency and accountability are fundamental principles for ensuring public confidence in the Service. As part of the proposed change to a more commercial enterprise, questions remain related to whether the Service should be held more directly accountable for its performance and, if so, to what extent, to whom, and with what mechanisms. Other questions include the following: What oversight is needed to protect the public interest, including the interest of customers with few or no alternatives to using the mail? How should the PRC and/or other pertinent authorities exercise oversight regarding pricing, competition, and antitrust issues, among other areas? What recourse should customers and competitors have to lodge complaints? What should be the role of Congress and other federal agencies in providing oversight and accountability? What information should the Service be required to provide Congress and the public on its performance, including areas such as financial performance, productivity, and mail delivery? Another issue we have noted, related to transparency and accountability, involves improvements needed in the Service’s financial reporting. The principles that apply to the Service’s financial information are the same as those we discussed in our recent testimony on financial reporting issues: financial statements, which are at the center of present-day business reporting, must be timely, relevant, and reliable to be useful for decision-making. We have recently reported that the Service’s financial outlook was repeatedly revised in fiscal year 2001 with little or no public explanation and that greater transparency is needed regarding the Service’s financial and operating results and projections. Accordingly, we have recommended that the Service improve the transparency of its financial information by providing monthly and quarterly financial reports in a user-friendly format on its Web site in a more timely manner. The Service has agreed with our recommendation to improve the transparency of its financial data and stated that it was providing financial reports on its Web site in a more timely and user-friendly manner. To date, the Service has begun to provide monthly financial reports on its Web site. It has also provided one quarterly financial report—for the third quarter of fiscal year 2001.
Currently, the Service has posted on its Web site the chief financial officer’s financial presentation for the second quarter of fiscal year 2002. This presentation has less information than the previous publicly available quarterly report—it does not include cash flow data, year-to-date analysis, or changes in outlook. In our opinion, this publicly available information has not been sufficiently detailed for stakeholders to understand the Service’s current and projected financial condition or how its financial outlook has changed. More timely, accessible, and reliable financial information is sorely needed. Stakeholders are looking for positive, constructive ways to work through difficult postal transformation issues, and the Service’s Transformation Plan was a good start. Many postal transformation issues are complex, and consensus is likely to be hard to achieve on key areas such as a new definition of universal postal service, the associated infrastructure, human capital, governance, accountability, and transparency issues, among others. Further, a successful transformation of the Service will require shared sacrifice. However, given the vital role of our postal system in communications and commerce, and the Service’s declining financial outlook, it is time for all stakeholders to roll up their sleeves and engage in postal transformation issues. In this regard, we note that the Service and mailers have already made progress, such as through the Mailing Industry Task Force, in identifying concrete ways to enhance efficiency and improve the value of the mail. We also applaud the initiative of Postmaster General John Potter and PRC Chairman George Omas in agreeing to convene a summit to discuss ways to improve the rate structure and the rate-setting process. The Service has a similar opportunity to build working partnerships with its major labor unions and management associations so that the parties can make progress on human capital issues. Another critical partnership involves the Congress and postal stakeholders in working through a range of important, complex, and controversial transformation issues. As we noted in our report, we believe that the Service’s worsening financial situation and outlook intensify the need for Congress to act on meaningful postal reform and transformation legislation. Accordingly, we stated in our recently issued report that Congress should consider and promptly act on incremental legislative change that could help the Service deal with its financial situation. We believe that comprehensive legislative change will be needed to address key unresolved transformation issues—some of which have not been fully addressed by proposed legislation or by the Service’s Transformation Plan. One option is to use the legislative process to enact postal reform legislation, and some major proposals have been made in this area. Another option could be to create an independent commission that would address key unresolved issues and develop a comprehensive proposal for Congress to consider. Meanwhile, the Service’s growing financial problems call for continuing close congressional oversight of its current financial condition and progress in implementing its Transformation Plan. In this regard, it will be important to have greater transparency of the Service’s financial information to minimize surprises and expectation gaps.
It will also be important to have greater clarity about the time frames and financial impact associated with the actions outlined in the Transformation Plan that the Service plans to take immediately. To assist the Congress in its oversight responsibilities, we are monitoring the Service’s financial condition and the implementation of its plan. Committed leadership and sustained attention in these areas will be important to achieve the results necessary for us to reassess our inclusion of the Postal Service’s transformation efforts and long-term outlook on our High-Risk List. Your strong support for the Service to develop a transformation plan has helped move the discussion forward, and this hearing is further highlighting the need for change. We look forward to working with the Congress in addressing this and other important government transformation issues. In many ways, the challenges facing the Service represent a microcosm of a range of challenges facing other federal agencies. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For further information regarding this testimony, please call Bernard L. Ungar, Director, Physical Infrastructure Issues, at (202) 512-8387 or contact him at [email protected]. Individuals making key contributions to this testimony included Teresa L. Anderson, Hazel J. Bailey, Tida E. Barakat, Gerald P. Barnes, Joshua M. Bartzen, Alan N. Belkin, William J. Doherty, Frederick T. Evans, Michael J. Fischetti, Kenneth E. John, Robert P. Lilly, and Jill P. Sayre.
Summary of Key Service Problems and Actions in the Service’s Transformation Plan
1. Net Income: The Service has difficulty generating positive net income, despite recent rate increases, and expects a large deficit in FY 2002.
1.1 Replace the break-even requirement with a provision for a reasonable rate of return.
1.2 Increase the amount of funds in rate cases for capital purposes for new facilities.
2. Cost-cutting/productivity: Costs are increasing faster than revenues and are hard to cut. The Service has difficulty making and sustaining productivity increases.
2.1 Deploy more automation (Postal Automated Redirection System; Automated Flat Sorting Machine (AFSM 100) and tray handling systems for the AFSM 100 and 1000; low-cost tray sorters; next-generation parcel sorting equipment—the parcel Singulate, Scan, Induction Unit and Automated Package Processing System).
2.2 Increase throughput and reduce the nonautomated letter mail stream through equipment modifications and customer incentives (FSM 1000 automated flats feeders and optical character readers; technology upgrades to improve address recognition and enhance feeder systems).
2.3 Develop more automation (e.g., Universal Tray System, automated delivery point sequencing equipment for flats, automation of processing of Business Reply Mail and Courtesy Reply Mail cards).
2.4 Move toward the long-term vision of one bundle of mixed letters and flats for each delivery point, called Delivery Point Packaging, after delivery point sequencing of flats mail is implemented.
2.5 Deploy the flats remote encoding system to consolidate flats keying systems and minimize idle time.
2.6 Reduce tray and piece handlings and improve the efficiency of postal operations by working with customers and the mailing industry to explore product redesign and worksharing options.
2.7 Improve delivery productivity through deployment/use of the Delivery Operations Information System to provide data to delivery supervisors; the Managed Service Points system to scan bar codes at the delivery unit and along the carrier’s route of travel; the Delivery Performance Achievement and Recognition System to benchmark, set goals, and give recognition for both city and rural delivery; initiatives to improve rural delivery; and tests of optimized city carrier routing and travel paths and of the Segway Human Transporter.
2.8 Modernize purchasing procedures by changing postal regulations.
2.9 Implement supply chain management. Consolidate purchases for better quality and lower costs (redesign the purchasing organization into interdisciplinary commodity teams; reduce low-dollar-value transactions; and forge stronger, more effective relations with key strategic suppliers).
2.10 Revise certain statutory requirements relating to the Service’s supply chain management, such as the Service Contract Act and the Davis-Bacon Act, to reduce costs and administrative requirements not applicable to commercial businesses.
2.11 Reduce injury compensation costs by expanding the Preferred Provider Organization program throughout the Service to reduce medical fees below what the Department of Labor’s Office of Workers’ Compensation Programs (OWCP) allows and to identify duplicate payments that get through OWCP’s system.
2.12 Reduce injury compensation costs by moving all Federal Employees’ Compensation Act (FECA) recipients to a FECA annuity at age 65. The FECA annuity would equate to the same costs as a normal retirement for all present and former Service employees over age 65 on OWCP compensation rolls.
2.13 Reduce injury compensation costs by encouraging OWCP to revise its regulations to allow direct contact with the treating physician by the employing agency (i.e., the Service).
2.14 Reduce injury compensation costs by developing joint strategies with OWCP, such as an accelerated private sector placement program that reduces the time for private sector outplacement of injured Service employees from up to 2 years to less than 1 year. Create new internal positions to accommodate injured workers.
2.15 Reduce and deter criminal misuse of workers’ compensation.
2.16 Address issues contributing to escalating FECA costs: compensation rates are too generous and should be set at a single rate (66 2/3 percent), and there is no waiting period before wage-loss compensation is paid.
2.17 Optimize the transportation and distribution networks: the Network Integration and Alignment initiative is designed to create a flexible logistics network that reduces Service and customer costs, increases overall operational effectiveness, and improves consistency of service. Streamline and simplify the distribution network. Consolidate sorting facilities, eliminate excess resources, and determine facility roles and functions. Deploy the Surface Air Management System and Surface Air Support System, develop transportation optimization planning and scheduling, develop a transportation contract support system, and increase utilization of mail transport equipment.
2.18 Increase retail/customer service productivity (operational standardization, continued automation of mail processing operations that occur in the back rooms of post offices), and implement facility design changes where feasible to enable 24-hour access to critical products and services.
2.19 Expand access to postal services by moving simple transactions out of post offices (communicate information on alternative services (i.e., advertising), provide an on-line postage label application so that packages can be dropped in a P.O. box or handed to/picked up by carriers).
2.20 Create new low-cost retail alternatives. Expand self-service alternatives (kiosk services, such as ATMs; new technology for basic stamp purchases and mailing services; automated postage printers; and automated postal centers enabling self-service purchase of stamps and mailing of packages).
2.21 Improve performance management around best practices, including standardizing mail processing. Consolidate mail-processing activities and centralize or relocate these activities. Also conduct other labor reviews and standardize mail-processing operations, including those for Priority Mail. Implement complement planning, tracking, and management processes.
2.22 Manage realty assets to maximize return by reducing costs related to properties for sale, short-term and development leasing, developmental added-value properties, and other programs.
2.23 Achieve savings in international air transportation through deregulation that conveys to the Service the authority to contract competitively in the open market.
3. Safety/security: Expenditures and funding to enhance mail safety and security are uncertain. Safety and security needs exacerbate the Service’s financial problems.
3.1 Implement a comprehensive plan for improving mail safety and security.
3.2 Enhance security across technology to avoid disruptions in critical operations and protect sensitive information from unauthorized disclosure or modification (education and training, certification process, contingency planning, intrusion protection, automated monitoring).
3.3 Ensure a safe, secure, and drug-free work environment (reduce and deter employee-on-employee assaults and credible threats, robberies, and illegal drugs).
3.4 Provide for the security of the mail and postal products, services, and assets (reduce and deter mail theft, related identity theft and takeovers, and criminal attacks on postal products, services, and assets).
3.5 Combat crimes using the postal system (e.g., mail fraud and prohibited, illegal, and dangerous mailings).
3.6 Ensure that the Service maintains its trusted brand and provides top-rate privacy protection (standardize privacy policies and procedures, streamline compliance procedures, work with internal and external groups to build privacy into data-oriented initiatives).
4. Cash flow: Cash-flow pressures continue because of cost/revenue trends.
4.1 Improve cash flow by generating net income, cutting costs, holding the next rate increase to a moderate level (not until 2004), and planning capital outlays so that they do not exceed cash flow.
5. Debt: The Service has no debt reduction plan as its debt nears the $15 billion limit.
5.1 Reduce debt and remain within the current statutory debt limits. This strategy will be modified as necessary to ensure that the Service preserves its ability to meet all of its cash obligations. To stabilize and reduce debt, the Service will need to recover prior years’ losses and plan capital cash outlays so that they do not exceed cash flow. Also, the Service cannot simultaneously generate net losses and reduce its borrowings.
5.2 Manage the Service’s mix of short- and long-term debt to lower interest expense over time.
6. Basic Business Model: Mail volumes have declined in major revenue-producing areas. The Service’s business model, which relies on rising mail volumes to cover rising costs, is not sustainable and needs a comprehensive transformation.
6.1 Transform the Service into a commercial government enterprise (the model recommended by the Service).
6.2 Enhance the value of the mail through technology (identify/track mail pieces through Confirm).
6.3 Improve the access, speed, and reliability of accountable mail services (Internet access to the delivery time and date of Certified Mail and Return Receipt; other product enhancements).
6.4 Design rates and mail preparation to match customer needs (simplify the rate structure and the preparation and acceptance requirements for moderate users of bulk mail and for mailing books and parcels).
6.5 Position mail as a key communications medium and as a customer relationship management tool (customize postal products to enable small- and medium-sized business customers to leverage mail for promotion).
6.6 Enhance package services (acceptance scanning of return parcels; new parcel categories—reduce the number of categories, rate structures, and confusing requirements) by providing merchandise return service, new mail categories, and an on-line postage label application.
6.7 Promote greater ease of use to improve customer satisfaction and sales. Transform the Domestic Mail Manual and make rules and regulations more market-responsive.
6.8 Develop a corporate-based pricing plan and a set of strategies to develop market-based pricing. Retain and increase international market share.
7. Rate Increases: Rates for certain categories of mail are rising faster than inflation, and more increases are possible.
7.1 No rate increases are planned until calendar year 2004.
7.2 The next rate increase is planned to be a moderate and negotiated settlement.
8. Infrastructure/capital investment: Changes to infrastructure are limited by legal requirements and practical constraints. Further, the capital program freeze is unsustainable.
8.1 Lift the self-imposed moratorium on post office closings and consolidations. (Follow-up responsibility for the actions in this area rests primarily with the Postal Service, with mailers, competitors, and the PRC also involved in some cases; milestones, where specified, range from ongoing efforts to implementation by summer 2004, with labels by the fall 2002 mailing season.)
8.2 Close unnecessary contract postal units.
8.3 Implement retail access strategies to ensure that customers retain adequate access to products and services.
8.4 Work with the PRC to streamline the post office closing process to minimize turnaround time.
8.5 Repeal the statutory administrative notice procedures mandated for closing post offices (see 39 U.S.C. 404(b)), or replace them with more flexible procedures.
8.6 Eliminate appropriations language that discourages post office closings and freezes service levels at the 1983 level.
8.7 Optimize the retail network (develop a network database to baseline the current retail network; accommodate retail growth demand via a logical system that matches the appropriate channel with demonstrated marketplace needs; replace redundant post offices, stations, and branches that do not provide appropriate value with alternative retail channels).
8.8 Upgrade and reengineer the computing infrastructure to support current and new business requirements, as well as to enable the Service to become more efficient and reduce operating costs. (Upgrade distributed, midrange, and mainframe computing infrastructure and implement technical and corporate shared services initiatives.)
8.9 Provide universal computing connectivity (consolidate voice, data, and video networks and implement a wireless technology initiative).
9. Human capital: The Service faces difficult human capital challenges, including workforce planning and realignment, performance management, compensation and benefit issues, and labor-management relations.
9.1 Retain employees with skills critical to the Service’s success. (Study retention trends and develop plans for retention and recruitment incentives to allow the Service to compete for talent.)
9.2 Concentrate recruitment efforts on bringing talent, skills, and experience from within and from outside the Service to address the potential loss of Service leadership. (Implement the Associate Supervisor Program and Management and Professional Specialist Intern programs, use third parties for marketing/attracting candidates to specialized skill positions, deploy an automated screening process, pilot centralized recruitment structures for hard-to-fill bargaining and nonbargaining positions nationwide, and use Web technology to enhance recruitment and hiring processes.)
9.3 Remove the statutory salary cap for Service employees to help recruit and retain selected managers, executives, and officers.
9.4 Utilize succession planning to identify, develop, and select current and future leaders. (Continue executive development programs; hold officers and executives accountable for having and implementing individual development plans for successors.)
9.5 Ensure that a dynamic training curriculum is available to develop a pool of talent to fill leadership positions. (Maximize available training and development programs to have a pool of potential successors at all levels. Establish a defined career path for supervisors and managers to facilitate succession at low- to mid-level positions. Implement more technology-based training. Develop a learning management system to coordinate administration, scheduling, tracking, assessment, and testing of learners.)
9.6 Create a performance-based pay system. Redesign performance-based pay and assessment systems from executives to front-line supervisors and EAS grade-level 15. A new pay system will place a greater focus on rewarding individual rather than group achievement. The Service will consult with postal management associations and then phase in the new performance assessment system.
9.7 Build a highly effective and motivated workforce. Use existing programs and measures to hold district, area, and headquarters leadership accountable for the following activities: improving the percentage of favorable responses to the Voice of the Employee survey, identifying troubled worksites and developing effective plans to correct problems, supporting the District Joint Employee Assistance Program Advisory Committee, supporting Diversity’s continuous education initiatives, maintaining a trained Threat Assessment Team and a properly prepared Crisis Management Team, and providing violence awareness and sexual harassment training according to policy.
Organize the most predictive workplace data for use by districts and areas to create proactive interventions: form predictive profiles to allow the Service to become more proactive in dealing with potential workplace environment issues.
9.8 Improve workforce planning. Move to an integrated workforce planning process with a single function responsible for reporting trends and issues. Fully utilize the provisions of collective bargaining agreements to reposition the workforce as needed to meet customer demands and operational requirements. Execute reduction-in-force avoidance strategies, including voluntary early retirement offerings and internal movement of employees. Consider reduction-in-force alternatives (voluntary reduced hours, retirement incentives, layoffs, voluntary sabbaticals). Seek cost-efficient ways to move people from positions that are no longer necessary. Modify applicable placement, training, and right-sizing processes.
9.9 Expand shared services in accounting and human resources (i.e., sharing technology, people, and other resources within and across administrative functions to reduce costs and improve the quality of administrative services). (Follow-up responsibility for the actions in this area rests with the Postal Service, in some cases jointly with craft union leaders, labor unions, and management associations, with milestones in 2002-2003; related efforts include the development and refinement of the REDRESS program and the use of labor and management Dispute Resolution Teams.)
10. Liabilities: Liabilities exceed assets, and long-term retirement liabilities are growing.
10.1 Increase income generation and minimize the increase in deferred retirement costs by allowing postal retirement fund assets to be invested in other than federal securities at higher rates of return. This would involve investment of the postal-related Civil Service Retirement System and Federal Employees Retirement System retirement fund assets currently managed by the Office of Personnel Management.
11. Transparency and reporting: Greater transparency is needed regarding the Service’s financial and operating results and projections.
11.1 Eliminate the postal fiscal year and use only the government fiscal year for internal and external reporting. Convert the Service’s reporting (financial and all other) from the existing accounting period format (i.e., 4-week accounting periods) to a calendar month format, with monthly and quarterly reporting.
11.2 Publish quarterly financial reports for the first, second, and third quarters.
12. Accountability: Limited mechanisms are in place to promote accountability.
12.1 Redesign the performance-based pay system.
13. Incentives: The legal framework (monopoly, break-even requirement, rate-setting) limits incentives to cut or restrain costs or to innovate.
13.1 Replace the break-even requirement with a provision for a reasonable rate of return (also listed as 1.1 above).
13.2 Replace cost-of-service rate regulation (see 15 below).
14. Rate-setting process: The Service’s rate-setting process is lengthy and adversarial and provides limited incentives to control costs.
14.1 Work with the PRC to improve the rate-setting process and change the rate and classification structure. Initiatives: phased rate changes, operationally targeted experiments, a major reclassification effort, segmentation for major products, negotiated service agreements, and volume discounts.
Initiatives to be considered: contract/customized pricing, bundled pricing for multiple products/services, and seasonal discounts and premiums. Improve the rate-setting process by streamlining it to allow reasonable pricing changes without extensive regulatory hearings.
14.2 Review the statutory rate-setting process to identify potential changes for improvement. For example, replace the existing statutory system with some form of incentive regulation giving the Service pricing flexibility for competitive products, subject to rules to protect the market from anticompetitive Service activities.
15. Universal service mission/role: The Service has not defined what universal postal services are needed by the American people in the 21st century or the Service’s role in providing such services.
15.1 Obtain greater flexibility to adjust the number of delivery days.
15.2 Obtain greater statutory and regulatory flexibility to redefine universal retail postal service, including standards for access and levels of service.
16. Governance: The Service’s business model and governance structure are problematic and need to be reassessed as part of transformation.
16.1 The Service’s proposed commercial government enterprise model refocuses the board of governors on fiduciary duties.
The U.S. Postal Service continues to face financial and transformation challenges. Since GAO placed the Service's long-term outlook and transformation efforts on its high-risk list, the Service's financial situation has continued to decline, and its operational challenges have increased. The Service took a good first step when it issued its Transformation Plan. The plan provides information about the Service's challenges, identifies many actions the Service plans to take under its existing authority, and outlines steps that would require congressional action. The plan does not, however, adequately address some key issues or include an action plan with key milestones. The catastrophic events of September 11 and the subsequent anthrax scares, coupled with the recent economic slowdown, have decreased mail volumes and revenues. However, the Service's financial difficulties are not just a cyclical phenomenon that will fade as the economy recovers. The Service's basic business model, which assumes that rising mail volume will cover rising costs and mitigate rate increases, is questionable as mail volumes stagnate or decline in an increasingly competitive environment. The Service's Transformation Plan recognizes that postal costs are rising faster than revenues and identifies many actions that the Service plans to take under its existing authority, notably through cutting costs and improving productivity.
ERM allows management to understand an organization’s portfolio of top-risk exposures, which could affect the organization’s success in meeting its goals. As such, ERM is a decision-making tool that allows leadership to view risks from across an organization’s portfolio of responsibilities. ERM recognizes how risks interact (i.e., how one risk can magnify or offset another risk) and also examines the interaction of risk treatments (actions taken to address a risk), such as acceptance or avoidance. For example, treatment of one risk in one part of the organization can create a new risk elsewhere or can affect the effectiveness of the risk treatment applied to another risk. ERM is part of overall organizational governance and accountability functions and encompasses all areas where an organization is exposed to risk (financial, operational, reporting, compliance, governance, strategic, reputation, etc.). In July 2016, OMB updated its Circular No. A-123 guidance to establish management’s responsibilities for ERM and to update internal control requirements in accordance with Standards for Internal Control in the Federal Government. OMB also updated Circular No. A-11, Preparation, Submission, and Execution of the Budget, in 2016 and refers agencies to Circular No. A-123 for implementation requirements for ERM. Circular No. A-123 guides agencies on how to integrate organizational performance and ERM to yield an “enterprise-wide, strategically-aligned portfolio view of organizational challenges that provides better insight about how to most effectively prioritize resource allocations to ensure successful mission delivery.” The updated requirements in Circulars A-123 and A-11 help modernize existing management efforts by requiring agencies to implement an ERM capability coordinated with the strategic planning and strategic review process established by the GPRA Modernization Act of 2010 (GPRAMA), and with the internal control processes required by the FMFIA and in our Standards for Internal Control in the Federal Government. This integrated governance structure is designed to improve mission delivery, reduce costs, and focus corrective actions toward key risks. More specifically, Circular No. A-123 discusses both internal control and ERM and how these fit together to manage agency risks. Our Standards for Internal Control in the Federal Government describes internal control as a process put in place by an entity’s oversight body, management, and other personnel that provides reasonable assurance that objectives related to operations, compliance, and reporting will be achieved, and that serves as the first line of defense in safeguarding assets. Internal control is also part of ERM and is used to manage or reduce risks in an organization. Prior to implementing ERM, risk management focused on traditional internal control concepts to manage risk exposures. Beyond traditional internal controls, ERM promotes risk management by considering a risk’s effect across the entire organization and how it may interact with other identified risks. ERM also addresses other topics such as setting strategy, governance, communicating with stakeholders, and measuring performance, and its principles apply at all levels of the organization and across all functions. Implementation of the OMB circulars is expected to engage all agency management, beyond the traditional ownership of A-123 by the Chief Financial Officer community.
Circular No. A-123 requires leadership from the agency Chief Operating Officer (COO) and Performance Improvement Officer (PIO), or another senior official with responsibility for the enterprise, and close collaboration across all agency mission and mission-support functions. The A-123 guidance also requires agencies to create a risk profile that helps them identify and assess risks arising from mission and mission-support operations, and to consider those risks as part of the annual strategic review process. Circular No. A-123 requires that agencies’ risk profiles include risks to strategic, operations, reporting, and compliance objectives. A federal interagency group of ERM practitioners developed a Playbook, released through the Performance Improvement Council (PIC) and the Chief Financial Officers Council (CFOC) in July 2016, to provide federal agencies with a resource to support ERM. In particular, the Playbook assists them in implementing the required elements in the updated A-123 Circular. To assist agencies in better assessing challenges and opportunities from an enterprise-wide view, we have updated our risk management framework, first published in 2005, to more fully include recent experience and guidance, as well as specific enterprise-wide elements. As mentioned previously, our 2005 risk management framework was developed in the context of risks associated with homeland security and combating terrorism. However, increased attention to ERM concepts and their applicability to all federal agencies and missions led us to revise our risk framework to incorporate ERM concepts that can help leaders better address uncertainties in the federal environment, changing and more complex operating environments due to technology and other global factors, the passage of GPRAMA and its focus on overall performance improvement, and stakeholders seeking greater transparency and accountability. For many similar reasons, the Committee of Sponsoring Organizations of the Treadway Commission (COSO) initiated an effort to update its ERM framework for 2016, and the International Organization for Standardization (ISO) plans to update its ERM framework in 2017. Further, as noted, OMB has now incorporated ERM into Circulars A-11 and A-123 to help improve overall agency performance. We identified six essential elements to assist federal agencies as they move forward with ERM implementation. Figure 1 below shows how ERM’s essential elements fit together to form a continuing process for managing enterprise risks. The absence of any one of these elements would likely result in an agency incompletely identifying and managing enterprise risk. For example, if an agency did not monitor risks, then it would have no way to ensure that it had responded to risks successfully. There is no “one right” ERM framework that all organizations should adopt. However, agencies should include certain essential elements in their ERM programs. Below we describe each essential element in more detail, why it is important, and some actions necessary to successfully build an ERM program.
1. Align the ERM process to agency goals and objectives. Ensure that the ERM process maximizes the achievement of the agency’s mission and results. Agency leaders examine strategic objectives by regularly considering how uncertainties, both risks and opportunities, could affect the agency’s ability to achieve its mission.
ERM subject matter specialists confirmed that this element is critical because the ERM process should support the achievement of agency goals and objectives and provide value for the organization and its stakeholders. By aligning the ERM process to the agency mission, agency leaders can address risks via an enterprise-wide, strategically-aligned portfolio rather than addressing individual risks within silos. Thus, agency leaders can make better, more effective decisions when prioritizing risks and allocating resources to manage risks to mission delivery. While leadership is integral throughout the ERM process, it is an especially critical component of aligning ERM to agency goals and objectives because senior leaders have an active role in strategic planning and accountability for results.
2. Identify risks. Assemble a comprehensive list of risks, both threats and opportunities, that could affect the agency’s ability to achieve its goals and objectives. This element of ERM systematically identifies the sources of risks as they relate to strategic objectives by examining internal and external factors that could affect their accomplishment. It is important to recognize that risks can be either opportunities for, or threats to, accomplishing strategic objectives. The literature we reviewed, as well as subject matter specialists, pointed out that identifying risks in any organization is challenging for employees, as they may be concerned about reprisals for highlighting “bad news.” Risks to objectives can often be grouped by type or category. For example, a number of risks may be grouped together in categories such as strategic, program, operational, reporting, reputational, and technological. Categorizing risks can help agency leaders see how risks relate and to what extent the sources of the risks are similar. The risks are linked to relevant strategic objectives and documented in a risk register or some other comprehensive format that also identifies the relevant source and a risk owner to manage the treatment of the risk. Comprehensive risk identification is critical even if the agency does not control the source of the risk. The literature and subject matter specialists we consulted told us that it is important to build a culture where all employees can effectively raise risks. It is also important for the risk owner to be the person who is most knowledgeable about the risk, as this person is likely to have the most insight about appropriate ways to treat the risk.
3. Assess risks. Examine risks considering both the likelihood of the risk and its impact on the mission to help prioritize risk response. Agency leaders, risk owners, and subject matter experts assess each risk by assigning the likelihood of the risk’s occurrence and the potential impact if the risk occurs. It is important to use the best information available to make the risk assessment as realistic as possible. Risk owners may be in the best position to assess risks. Risks are ranked based on organizational priorities in relation to strategic objectives. Agencies need to be familiar with the strengths of their internal control when assessing risks to determine whether the likelihood of a risk event is higher or lower based on the level of uncertainty within the existing control environment. Senior leaders determine whether or not a risk requires treatment.
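Neither our framework nor OMB’s guidance prescribes a particular scoring method, but the likelihood-and-impact ranking just described is often implemented with a simple multiplicative score. The sketch below is a minimal illustration only; the risks, owners, 1-to-5 scales, and appetite threshold are hypothetical, not drawn from any agency’s actual risk register.

```python
# Minimal, hypothetical sketch of likelihood-and-impact risk ranking.
# Scales, risks, and the appetite threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str        # short description of the risk
    owner: str       # person most knowledgeable about the risk
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe effect on mission)

    @property
    def score(self) -> int:
        # A common convention: score = likelihood x impact.
        return self.likelihood * self.impact

RISK_APPETITE = 6  # scores at or below this may be accepted untreated

register = [
    Risk("Financial system implementation fails", "CFO/CIO", 3, 5),
    Risk("Loss of key technical staff", "Human capital office", 4, 3),
    Risk("Minor reporting delay", "Program office", 2, 2),
]

# Rank risks so leaders see the highest-priority exposures first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    decision = "treat" if risk.score > RISK_APPETITE else "accept"
    print(f"{risk.score:>2}  {risk.name} (owner: {risk.owner}) -> {decision}")
```

In this sketch, risks scoring above the threshold move on to the risk response step, while those at or below it may simply be accepted, consistent with the discussion of risk appetite that follows.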
Some identified risks may not require treatment at all because they fall within the agency's risk appetite, defined as how much risk the organization is willing to accept relative to mission achievement. The literature we reviewed and subject matter specialists noted that integrating ERM efforts with strategic planning and organizational performance management would help an organization more effectively assess its risks with respect to their impact on the mission.
4. Select risk response. Select a risk treatment response (based on risk appetite), including acceptance, avoidance, reduction, sharing, or transfer. Agency leaders review the prioritized list of risks and select the most appropriate treatment strategy to manage each risk. Subject matter experts noted that, when selecting the risk response, it is important to involve stakeholders that may be affected not only by the risk but also by the risk treatment. Subject matter specialists also told us that when agencies discuss proposed risk treatments, they should also consider treatment costs and benefits. Not all treatment strategies manage the risk entirely; there may be some residual risk after the risk treatment is applied. Senior leaders need to decide whether the residual risk is within their risk appetite and whether additional treatment will be required. The risk response should also fit into the management structure, culture, and processes of the agency, so that ERM becomes an integral part of regular management functions. One subject matter specialist suggested that maximizing opportunity should also be included as a risk treatment response, so that leaders may capture the positive outcomes or opportunities associated with some risks.
5. Monitor risks. Monitor how risks are changing and whether responses are successful. After implementing the risk response, agencies must monitor the risk to help ensure that the entire risk management process remains current and relevant. The literature we reviewed also suggests using a risk register or other comprehensive risk report to track the success of the treatment for managing the risk. Senior leaders and risk owners review the effectiveness of the selected risk treatment and change the risk response as necessary. Subject matter specialists noted that continuously monitoring and managing risks is a good practice. Monitoring should be a planned part of the ERM process and can involve regular checking as part of management processes or a periodic risk review. Senior leaders also could use performance measures to help track the success of the treatment and whether it has had the desired effect on the mission.
6. Communicate and report on risks. Communicate risks with stakeholders and report on the status of addressing the risks. Communicating and reporting risk information informs agency stakeholders about the status of identified risks and their associated treatments, and assures them that agency leaders are managing risk effectively. In a federal setting, communicating risk is important because of the additional transparency expected by Congress, taxpayers, and other relevant stakeholders. Communicating risk information through a dedicated risk management report, or integrating risk information into existing organizational performance management reports such as the annual performance and accountability report, may be useful ways of sharing progress on the management of risk.
The literature we reviewed showed, and subject matter specialists confirmed, that sharing risk information is a good practice. However, concerns may arise about sharing overly specific information or risk responses that would rely on sensitive information. Safeguards should be put in place to help secure information that requires careful management, such as information that could jeopardize security, safety, health, or fraud prevention efforts. Agencies can help alleviate these concerns by, for example, communicating risk information only to appropriate parties, encrypting sensitive data, authorizing users' levels of rights and privileges, and providing information on a need-to-know basis. We identified six good practices that nine agencies are implementing that illustrate ERM's essential elements. The selected good practices are not all-inclusive, but represent steps that federal agencies can take to initiate and sustain an effective ERM process, as well as practices that can apply to more advanced agencies as their ERM processes mature. We expect that as federal experiences with ERM evolve, we will be able to refine these practices and identify additional ones. In table 1 below, we identify the essential elements of ERM and the good practices that support each particular element, which agencies can use to support their ERM programs. The essential elements define what ERM is, and the good practices and case illustrations described in more detail later in this report provide ways that agencies can effectively implement ERM. The good practices may fit with more than one essential element, but are shown in the table next to the element to which they most closely relate. The following examples illustrate how selected agencies are guiding and sustaining ERM strategy through leadership engagement. These include how they have designated an ERM leader or leaders, committed organizational resources to support ERM, and set an organizational risk appetite. This good practice relates most closely to Align ERM Process to Goals and Objectives, as shown in table 1. According to the CFOC and PIC Playbook, strong leadership at the top of the organization, including active participation in oversight, is extremely important for achieving success in an ERM program. To manage ERM activities, leadership may choose to designate a Chief Risk Officer (CRO) or other risk champion to demonstrate the importance of risk management to the agency and to implement and manage an effective ERM process across the agency. The CRO role includes leading the ERM process; involving those who need to participate and holding them accountable; ensuring that ERM reviews take place regularly; obtaining resources, such as data and staff support, if needed; and ensuring that risks are communicated appropriately to internal and external stakeholders, among other things. For example, at TSA, the CRO serves as the principal advisor on all risks that could affect TSA's ability to perform its mission, according to the August 2014 TSA ERM Policy Manual. The CRO reports directly to the TSA Administrator and the Deputy Administrator.
In conjunction with the Executive Risk Steering Committee (ERSC), composed of Assistant Administrators who lead TSA’s program and management offices, the CRO leads TSA in conducting regular enterprise risk assessments of TSA business processes and programs and in overseeing processes that identify, assess, prioritize, respond to, and monitor enterprise risks. Specifically, the August 2014 TSA ERM Policy Manual describes the ERSC’s role: to “oversee the development and implementation of processes used to analyze, prioritize, and address risks across the agency including terrorism threats facing the transportation sector, along with non-operational risks that could impede its ability to achieve its strategic objectives.” The TSA CRO told us that the ERSC provides an opportunity for all Assistant Administrators to get together to have risk conversations. For example, the CRO recently recommended that the ERSC add implementation of the agency’s new financial management system to the risk register. According to the CRO, the system’s implementation was viewed as the responsibility of the Chief Financial Officer (CFO) and Chief Information Officer (CIO). However, the implementation needed to be managed at the enterprise level because, if it was not successfully implemented, the entire enterprise would be affected. The CRO proposed adding the implementation of the new financial management system to the TSA risk register to give the issue broader visibility. The ERSC unanimously concurred with the recommendation, and staff from the Office of Finance and Administration—the risk owner—will brief the ERSC periodically on the status of the effort. According to TSA’s ERM Policy Manual, the CRO leads the overall ERM process, while ERSC members bring knowledge and expertise from their individual organizations to help identify and prioritize risks and opportunities across TSA’s overall approach to operations. While the CRO and ERSC play critical roles in ERM oversight, the relevant program offices still own risks and execute risk management, according to the TSA ERM Policy Manual. To launch and sustain a successful ERM program, organizational resources are needed to help implement leadership’s vision of ERM for the agency and ensure its ongoing effectiveness. For example, when FSA began its ERM program in 2004, the Chief Operating Officer (COO) decided to hire a CRO and give him full responsibility for establishing the ERM organization and program and implementing it across the organization. According to documents we reviewed, the CRO dedicated resources to define the goal and purpose of the ERM program and met with key leaders across the agency to socialize the program. Agency leadership hired staff to establish the ERM program and provided risk management training to business unit senior leaders and their respective staff. Our review of documents shows that FSA continues to provide ERM training to senior staff and all FSA employees and also holds an annual FSA Day, so that employees can learn more about all business units across FSA, including the Risk Management Office and its ERM implementation. In September 2016, the FSA CRO told us that the Risk Management Office had a staff of 19 full-time equivalent (FTE) employees. FSA continues to provide resources to its ERM program and has subsequently structured its leadership by involving two senior leaders and a risk management committee to manage ERM processes.
According to the CRO, the risk committee guides the ERM process, tracks the agency’s progress in managing risks, and increases accountability for outcomes. Both the CRO and the Senior Risk Advisor, who also serves as Chairman of the Risk Management Committee, report directly to the FSA Chief Operating Officer. The CRO manages the day-to-day aspects of assessing risks for various internal FSA operations, programs, and initiatives, as well as targeting risk assessments on specific high-risk issues, such as the closing of a large for-profit school. The Senior Risk Advisor advises the COO by identifying and analyzing external risks that could affect the accomplishment of FSA’s strategic objectives. The Senior Risk Advisor also gathers and disseminates information internally that relates to FSA risk issues, such as cybersecurity or financial issues, and, as Chairman of the Risk Management Committee, leads its monthly meetings. Other senior leaders and members involved with the Risk Management Committee were drawn from across the agency and demonstrate the importance of ERM to FSA. Specifically, the committee is chaired by the independent Senior Risk Advisor, is composed of the CRO, COO, CFO, CIO, General Manager of Acquisitions, Chief Business Operations Officer, Chief of Staff, Chief Compliance Officer, Deputy COO, and Chief Customer Experience Officer, and meets monthly. Agency officials said that the participation of the COO, along with that of the other functional chiefs, indicates ERM’s importance and the commitment of these executives to the effort. Developing an agency risk appetite requires leadership involvement and discussion. The organization should develop a risk appetite statement and embed it in policies, procedures, decision limits, training, and communication, so that it is widely understood and used by the agency. Further, the risk appetite may vary for different activities depending on the expected value to the organization and its stakeholders. To that end, the National Institute of Standards and Technology (NIST) ERM Office surveyed NIST’s 33-member senior leadership team to measure risk appetite among its senior leaders. Without a clearly defined risk appetite, NIST could be taking risks well beyond management’s comfort level or passing up strategic opportunities by assuming its leaders were risk averse. The survey objectives were to “assess management familiarity and use of risk management principles in day-to-day operations and to solicit management perspectives and input on risk appetite, including their opinions on critical thresholds that will inform the NIST enterprise risk criteria.” Survey questions focused on the respondent’s self-reported understanding of a variety of risk management concepts and asked respondents to rate how they consider risk with respect to management, safety, and security. The survey assessed officials’ risk appetite across five areas: NIST Goal Areas, Strategic Objectives, Core Products and Services, Mission Support Functions, and Core Values. See figure 2 for the rating scale that NIST used to assess officials’ appetite for risk in these areas. The survey results revealed a disconnect between the existing and desired risk appetite for mission support functions. According to NIST officials, respondents believed the bureau needed to accept more risk to allow for innovation within mission support functions.
According to agency officials, to better align risk appetite with mission needs, the NIST Director tasked the leadership team with developing risk appetite levels for those areas with the greatest disagreement between the existing and desired risk appetite, while still remaining compliant with laws and regulations. Agency officials told us the NIST ERM Office plans to address this topic via further engagement with senior managers and subject matter experts. The following examples illustrate how selected agencies are developing a risk-informed culture, including how they have encouraged employees to discuss risks openly, trained employees on the ERM approach, engaged employees in ERM efforts, and customized ERM tools for organizational mission and culture. This good practice relates most closely to Identify Risks, one of the Essential Elements of Federal Government ERM shown in table 1. Successful ERM programs find ways to develop an organizational culture that allows employees to openly discuss and identify risks, as well as potential opportunities to enhance organizational goals or value. The CFOC and PIC Playbook also supports this notion: once ERM is built into the agency culture, the agency can learn from managed risks, or near misses, using them to improve how it identifies and analyzes risk. For example, Commerce officials sought to embed a culture of risk awareness across the department by defining cascading roles of leadership and responsibility for ERM for the department and its 12 bureaus. Additionally, an official noted that Commerce leveraged this forum to share bureau best practices; develop a common risk lexicon; and address cross-bureau risks, issues, and concerns regarding ERM practice and implementation. According to the updated ERM program policy, these roles should support the ERM program and promote a risk management culture. They also help promote transparency, oversight, and accountability for a successful ERM program. Table 2 shows the ERM roles and responsibilities within Commerce and how they support a culture of risk awareness at each level. To successfully implement and sustain ERM, it is critical that staff at all levels understand how the organization defines ERM, its ERM approach, and organizational expectations for their involvement in the ERM process. As previously stated, the CFOC and PIC Playbook also supports risk awareness: once ERM is built into the agency culture, the agency can learn from managed risks and near misses when risks materialize and then use those lessons to improve the process of identifying and analyzing risk in the future. Further, the Playbook suggests that this culture change can only occur if top agency leaders champion ERM and encourage the flow of information needed for effective decision making. For example, to promote cultural change and encourage employees to raise risks, PIH trained about half of its 1,500 employees in 2015. Agency officials told us that they plan to expand on the 2015 training and provide training to all PIH employees after 2016. The in-person PIH training includes several features of our identified ERM good practices, such as leadership support and the importance of developing a risk-informed culture. For example, the Principal Deputy Assistant Secretary for PIH was visibly involved in the training and kicked off the first of the five training modules using a video emphasizing ERM.
The training contained discussions and specific exercises dedicated to the importance of raising and assessing risks and understanding the leadership and employee roles in ERM. The first training module emphasized the factors that can support ERM by highlighting the following cultural characteristics: ERM requires a culture that supports the reporting of risks; ERM requires a culture of open feedback; a risk-aware culture enables all HUD staff to speak up and then be listened to by decision-makers; and leadership encourages the sharing of risks. By focusing on the importance of developing a risk-aware culture in the first ERM training module, PIH officials emphasized that ERM requires a cultural transformation for its success. To enable all employees to participate in and benefit from the training, PIH officials recorded the modules and made them available on YouTube. Our literature review found that building a risk-aware culture supports the ERM process by encouraging staff across the organization to feel comfortable raising risks. Involving employees in identifying risks allows the agency to increase risk awareness and generate solutions to those identified risks. Some ways to strengthen this culture include the presence of risk management communities of practice, the development and dissemination of a risk lexicon agencywide, and conducting forums that enable frontline staff to raise risk-related strategic or operational concerns with leadership and senior management. For example, TSA’s Office of the Chief Risk Officer (OCRO) has sponsored a number of activities related to raising risk awareness. First, TSA has established a risk community of interest open to any employee in the organization and has hosted speakers on ERM topics. These meetings have provided an opportunity for employees across the administration to learn about and discuss risks and become more knowledgeable about the types of issues that should be raised to management. Second, TSA created a risk lexicon so that all staff involved with ERM would use and understand risk terminology similarly. The lexicon describes core concepts and terms that form the basis for the TSA ERM framework. TSA incorporated the ERM lexicon into the TSA ERM Policy Manual. Third, in January 2016, TSA started a vulnerability management process for offices and functions with responsibility for identifying or addressing security vulnerabilities. Officials told us that this new process is intended to help raise risks from the bottom up so that they receive top-level monitoring. According to the December 2015 TSA memo we reviewed, the process centralizes tracking of vulnerability mitigation efforts with the CRO; creates a central repository for vulnerability information and tracking; provides executive engagement and oversight of enterprise vulnerabilities by the Executive Risk Steering Committee (ERSC); promotes cross-functional collaboration across TSA offices; and requires the collaboration of Assistant Administrators and their respective staff across the agency. See figure 3 below for an overview of how TSA’s vulnerability management process is intended to work. The CRO told us that employees at all levels can report risks with broader, enterprise-level application to the OCRO. Once the OCRO decides a risk is at the enterprise level, the office assembles a working group and submits ideas to the ERSC, which decides at what level the risk should be addressed. The risk is then assigned to an executive who will be required to provide status updates.
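A minimal sketch of the escalation flow described above, under the assumption that it can be modeled as a simple triage-and-assign sequence: employees report risks to the OCRO, enterprise-level risks go through a working group to the ERSC, and each accepted risk is assigned to an executive who owes status updates. All class, function, and role names here are hypothetical illustrations, not TSA’s actual system.

```python
# Hypothetical model of the TSA vulnerability escalation flow described above.
from dataclasses import dataclass, field

@dataclass
class ReportedRisk:
    description: str
    reported_by: str
    enterprise_level: bool = False      # set during OCRO review
    assigned_executive: str | None = None
    status_updates: list[str] = field(default_factory=list)

def ocro_triage(risk: ReportedRisk, is_enterprise: bool) -> ReportedRisk:
    """OCRO decides whether a reported risk has enterprise-level application."""
    risk.enterprise_level = is_enterprise
    return risk

def ersc_assign(risk: ReportedRisk, executive: str) -> ReportedRisk:
    """After the working group submits the risk, the ERSC assigns an executive owner."""
    if not risk.enterprise_level:
        raise ValueError("Only enterprise-level risks reach the ERSC")
    risk.assigned_executive = executive
    return risk

# Illustrative usage with hypothetical names:
risk = ReportedRisk("screening equipment vulnerability", reported_by="field employee")
risk = ocro_triage(risk, is_enterprise=True)
risk = ersc_assign(risk, executive="assistant administrator (hypothetical)")
risk.status_updates.append("mitigation plan drafted")
```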
Fourth, officials in the TSA OCRO told us that TSA has established points of contact in every program office, referred to as ERM Liaisons. Each ERM Liaison is a senior-level official who represents their program office in all ERM-related activities. TSA also provided risk management awareness training to headquarters and field supervisors that covered topics such as risk-based decision-making, risk assessment, and situational awareness. Officials told us they are also embedding ERM principles into existing training so that employees will understand how ERM fits into TSA operations. Customizing ERM tools and templates can help ensure risk management efforts fit agency culture and operations. For example, NIST tailored certain elements of the Commerce ERM framework to better reflect the bureau’s risk thresholds. Commerce has developed a set of standard risk assessment criteria to help identify and rate risks, referred to as the Commerce ERM Reference Card. NIST officials reported that some of the safety and security terms used at Commerce differed from the terms used at NIST and required tailoring to map to NIST’s existing safety risk framework, which is a heavily embedded component of NIST operations and culture. To better align the framework to NIST, the NIST ERM Program split safety and security risks into distinct categories when establishing a tailored ERM framework for the bureau (see table 3). According to agency officials, the NIST ERM Reference Card also leverages American National Standards Institute guidelines, so it does not introduce another separate and potentially conflicting set of terms. Officials told us that these adaptations to the NIST ERM framework help maintain continuity with the Commerce framework while reflecting the particular mission, needs, and culture of NIST. The following examples illustrate how selected agencies are integrating ERM capability to support strategic planning and organizational performance management, including how they have incorporated ERM into strategic planning processes and used ERM to improve information for agency decisions. This good practice most closely relates to Assess Risks, one of the Essential Elements of Federal Government ERM, shown in table 1. Through ERM, an agency looks for opportunities that may arise out of specific situations, assesses their risk, and develops strategies to achieve positive outcomes. In the federal environment, agencies can leverage the GPRAMA performance planning and reporting framework to help better manage risks and improve decision making. For example, Treasury has integrated ERM into its existing strategic planning and management processes. According to our review of the literature and the subject matter specialists we interviewed, using existing processes helps to avoid creating overlapping processes. Further, by incorporating ERM this way, risk management becomes an integral part of setting goals, including agency priority goals (APGs), and ultimately of achieving an organization’s desired outcomes. Agencies can use regular performance reviews, such as the quarterly performance reviews of APGs and the annual leadership-driven strategic objective review, to help increase attention to progress toward the outcomes agencies are trying to achieve. According to OMB Circular No. A-11, agencies are expected to manage risks and challenges related to delivering the organization’s mission.
The agency’s strategic review is a process by which the agency should coordinate its analysis of risk using ERM to make risk-aware decisions, including developing risk profiles as a component of the annual strategic review, identifying risks arising from mission and mission-support operations, and providing a thoughtful analysis of the risks the agency faces in achieving its strategic objectives. Instituting ERM can help agency leaders make risk-aware decisions that affect prioritization, performance, and resource allocation. Treasury officials stated they integrated ERM into their quarterly performance or data-driven reviews and strategic reviews, both of which already existed. Officials stated this action has helped elevate and focus risk discussions. Staff from the management office and individual bureaus work together to complete the template slide used to include a risk element in their performance reviews; as part of this process, they assess risk. See figure 4 for how risk is incorporated into Treasury’s quarterly performance review (QPR) template. Officials stated that they believe this approach to preparing for the data-driven review has helped improve outcomes at Treasury. For example, according to agency officials, Treasury used its QPR process to strengthen cybersecurity. Treasury officials also told us that during the fall and the spring, each Treasury bureau completes the data-driven review templates. Agency officials are to use the summer data-driven review as an opportunity to discuss budget formulation. In winter, they are to use the annual data-driven review to show progress toward achieving strategic objectives. According to agency officials, the strategic review examines and assesses risks identified as part of the data-driven reviews and aggregates and analyzes these results at the crosscutting strategic objective level, which helps improve agency performance. Integrating ERM into this existing data-driven review process avoids creating a duplicative process and increases the focus on risk. In another example, Treasury officials identified implementation of the Digital Accountability and Transparency Act of 2014 (DATA Act), both at Treasury and government-wide, as a risk and established “Financial Transparency” as one of its two APGs for fiscal years 2016 and 2017. According to agency officials, incorporating risk management into the data-driven review process sends a signal about the importance of the DATA Act and brings the additional leadership focus and scrutiny needed to successfully implement the law. The literature we reviewed notes that ERM contributes to leaders’ ability to identify risks and adjust organizational priorities to enhance decision-making efforts. For example, OPM has a Risk Management Council (RMC) that builds risk-review reporting and management strategies into existing decision-making and performance management structures. This includes Performance Dashboards, APG reviews, and regular meetings of the senior management team, as recommended by the CFOC and PIC Playbook. The RMC also uses an existing performance dashboard for strategic goal reviews as part of its ERM process and to help inform decisions that result from these reviews. Officials told us they present their dashboards every 6 or 7 weeks to the Chief Management Officer (CMO) and the RMC as part of preparing for their data-driven reviews. Each project and its risks are mapped against the strategic plan.
When officials responsible for a goal identify risks, they must also provide action plan strategies, timelines, and milestones for mitigating those risks. Figure 5 shows an OPM dashboard that illustrates how OPM tracks progress on a goal of preparing the federal workforce for retirement, given a risk such as an unexpected retirement surge, and documents mitigation strategies to address such events. According to agency officials, the CMO and RMC monitor high-level and high-visibility risks on a weekly basis. In August 2016, OPM officials told us they were monitoring five to seven major projects, such as information technology (IT) security implementation and retirement services processes. Each quarterly data-driven review includes an in-depth look into a specific goal and the examination of risks as part of the review. Officials told us that in the past 3 years, they have covered each of the strategic goals using the dashboard. According to officials, during one of these reviews, OPM identified a new risk related to having sufficient qualified contracting staff to meet the goal of effective and efficient IT systems. Since OPM considers contracting a significant component of that goal, it decided to create the Office of Procurement Operations to help increase attention to contracting staff. OPM officials told us they believe that, using ERM, they could better prioritize funding requests across the agency, ultimately balance limited resources, and make better-informed decisions. The following examples illustrate how selected agencies are establishing a customized ERM program within existing agency processes, including how they have designed an ERM program that allows for customized agency fit, developed a consistent, routinized ERM program, and used a maturity model approach to build an ERM program. This good practice relates primarily to Identify Risks and Select Risk Response, two of the Essential Elements of Federal Government ERM shown in table 1. Effective ERM implementation starts with agencies establishing a customized ERM program that fits their specific organizational mission, culture, operating environment, and business processes but also contains the essential elements of an ERM framework. The CFOC and PIC Playbook focuses on the importance of a customized ERM program to meet agency needs. This involves taking into account policy concerns, mission needs, stakeholder interests and priorities, agency culture, and the acceptable level for each risk, both for the agency as a whole and for specific programs. For example, in 2004, the Department of Education’s (Education) Office of Federal Student Aid (FSA) began establishing a formal ERM program, based on the Committee of Sponsoring Organizations of the Treadway Commission (COSO) ERM Framework, to help address longstanding risks using customized implementation plans. More specifically, FSA customized the framework and materials to ensure that they would work within a government setting and capture the nuances of FSA’s business model. Agency officials told us that one reason they adopted a COSO-based model for ERM is that it was geared toward achieving an entity’s objectives and could be customized to meet FSA’s organizational needs as a performance-based organization. Thus, FSA adopted a three-phase approach that allowed its ERM program to mature over time, customizing a COSO-based methodology for risk management to help the organization adapt to the new program.
According to FSA documents, the first phase involved creating the ERM organization, designing a high-level implementation plan, and forming its enterprise risk committee to help support its first ERM efforts. The second phase involved creating a strategic plan and detailed project plan to implement ERM. For example, the original FSA ERM Strategic Plan contained an ERM vision statement (see textbox below) for aligning strategic risks with goals and objectives. The FSA Plan also provided its approach for identifying risks that could affect FSA’s ability to achieve these objectives. Federal Student Aid Enterprise Risk Management Original Vision Statement: “Our vision is to create the premier Enterprise Risk Management Program in the Federal government. One that provides for an integrated view of risk across the entire Federal Student Aid organization; aligns strategic risks with the organization’s goals and objectives; ensures that risk issues are integrated into the strategic decision making process; and manages risk to further the achievement of performance goals.” During the initial implementation of FSA’s ERM program, the ERM strategic goals were to: 1. provide for an integrated view of risks across the organization, 2. ensure that strategic risks are aligned with strategic goals and objectives, 3. develop a progressive risk culture that fosters an increased focus on risk and awareness of related issues throughout the organization, and 4. improve the quality and availability of risk information across all levels of the organization, especially for executive management. Finally, according to documents we reviewed, the third phase of FSA’s ERM implementation included developing enterprise-level risk reports and advanced methods, tools, and techniques to monitor and manage risk. For example, the documents we reviewed showed that some of the key tools that supported FSA’s ERM implementation included ERM terminology, risk categories, risk ratings, and a risk-tracking system. These tools help FSA select an appropriate risk response that works with existing agency processes and culture. A consistent process for risk review that systematically categorizes risk helps leaders ensure that the consideration of potential risk takes place. The CFOC and PIC Playbook suggests that organizations define risk categories to support their business processes and use these categories consistently. For example, to identify and review risks, the TSA Risk Taxonomy organizes risks into categories so the agency can consistently identify, assess, measure, and monitor risks across the organization, as discussed in the TSA ERM Policy Manual. The TSA Risk Taxonomy captures the risks in all aspects of mission operations, business operations, governance, and information. Figure 6 lists each risk category that is reviewed. The taxonomy helps TSA both collect risks and identify the most critical ones, and it helps ensure that the same vocabulary and categorization system are used across TSA. Officials report that they chose these categories to help break down organizational silos and identify all types of risks. For example, they did not want “mission risk” to consider only the Federal Air Marshal Service and airport checkpoint screening. Rather, they wanted a broad understanding of risks across the various TSA components. TSA officials stated that they believe the taxonomy will be even more useful when TSA has an automated computer application to help analyze all similar and related risks across the enterprise.
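The sketch below illustrates how a shared taxonomy of this kind can support the automated grouping TSA officials envision. The four top-level groups mirror the aspects named above (mission operations, business operations, governance, and information); TSA’s actual figure 6 categories are more granular, so treat this as an assumption for illustration.

```python
# Illustrative risk taxonomy sketch; the four groups reflect the aspects the
# report names, while TSA's actual figure 6 categories are more detailed.
from enum import Enum
from collections import defaultdict

class RiskCategory(Enum):
    MISSION_OPERATIONS = "mission operations"
    BUSINESS_OPERATIONS = "business operations"
    GOVERNANCE = "governance"
    INFORMATION = "information"

def group_risks(risks: list[tuple[str, RiskCategory]]) -> dict[RiskCategory, list[str]]:
    """Group submitted risks by category so similar risks can be analyzed together."""
    grouped = defaultdict(list)
    for description, category in risks:
        grouped[category].append(description)
    return dict(grouped)

# Hypothetical submissions from different offices, tagged with a shared vocabulary:
submitted = [
    ("checkpoint screening gap", RiskCategory.MISSION_OPERATIONS),
    ("aging IT infrastructure", RiskCategory.INFORMATION),
    ("hiring shortfall", RiskCategory.BUSINESS_OPERATIONS),
]
print(group_risks(submitted))
```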
In the ERM guidance provided in Circular No. A-123, OMB encourages agencies to use a maturity model approach. Results from our literature review and OMB suggested that a maturity model allows the organization to plan for continued improvement as its ERM efforts mature. For example, to assist in implementing a department-wide ERM process, Commerce developed an ERM Maturity Assessment Tool (EMAT), as well as a comprehensive guidebook and other tools, to share with its 12 bureaus. The EMAT consists of 83 questions to help bureaus determine their ERM maturity (see figure 7 for a sample of EMAT questions). According to agency officials, bureaus are required to conduct EMAT assessments annually, and while the EMAT lays out the basic components of ERM, the bureaus may customize the tool to fit their respective organizations. Commerce expects the bureaus to demonstrate increased levels of maturity over time. Agency officials reported that overall, the level of maturity has increased since the program began. Discussions of the EMAT have allowed bureaus to learn from each other and identify strategies for addressing common challenges. According to officials, these challenges include documenting risk treatment plans and providing the rationale to support management’s risk mitigation choices. The following example illustrates how a selected agency is continuously managing risks, including how it has tracked and monitored current and emerging risks. This good practice most closely relates to Monitor Risks, one of the Essential Elements of Federal Government ERM shown in table 1. Continuously managing risk requires a systematic or routine risk review function to help senior leaders and other stakeholders accomplish the organizational mission. The CFOC and PIC Playbook recommends that risks be identified and assessed throughout the year as part of a regular process, including surveillance of leading risk indicators both internally and externally. For example, PIH has two risk management dashboards, which it uses to monitor and review risks. The Risk and Mitigation Strategies Dashboard shown in figure 8, according to PIH officials, helps them monitor risks and the mitigation actions that PIH is actively pursuing. Officials told us that the risk division prepares and presents this dashboard to the Risk Committee quarterly. The dashboard provides a snapshot view for the current period, analysis of mitigation action to date, and trends for the projected risk. It tracks the highest-level risks to PIH as determined by the Risk Committee, along with the corresponding mitigation plans. Officials told us PIH is currently managing its top risks using the dashboard. Risk division staff continually update the dashboard to concisely display the status of both risk and mitigation efforts. The second dashboard, the Key Risk Indicators Dashboard shown in figure 9, monitors external, future risks to PIH’s mission. Agency officials told us that the dashboard is used as an early-warning system for emerging risks, which the Risk Committee must address before the next annual risk assessment cycle begins. The dashboard includes a risk-level column that documents the residual risk, measured on a five-point scale with one being the lowest and five being the highest and assigned by the relevant Deputy Assistant Secretary and Risk Division staff. A trending column indicates whether the risk is projected to increase, decrease, or remain the same.
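A minimal sketch of what one row of such a dashboard might carry, based on the fields described above (a one-to-five residual risk level and a trend direction) and the assessment link discussed next; the class and field names are assumptions for illustration, not PIH’s actual data model.

```python
# Hypothetical data model for one Key Risk Indicators dashboard row, reflecting
# the fields the report describes: a 1-5 residual risk level and a trend.
from dataclasses import dataclass
from enum import Enum

class Trend(Enum):
    INCREASING = "increasing"
    DECREASING = "decreasing"
    STABLE = "remaining the same"

@dataclass
class KeyRiskIndicator:
    risk: str
    residual_level: int        # 1 (lowest) to 5 (highest), assigned by leadership
    trend: Trend
    assessment_link: str       # link to the summary risk assessment document

    def __post_init__(self):
        if not 1 <= self.residual_level <= 5:
            raise ValueError("residual risk level must be on the five-point scale")

# Illustrative entry with placeholder values:
kri = KeyRiskIndicator(
    risk="emerging program funding shortfall",
    residual_level=4,
    trend=Trend.INCREASING,
    assessment_link="https://example.gov/risk-assessment-summary",  # placeholder
)
```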
There is also a link to a document that summarizes the risk assessment, including the risks, the measures to address them, and the anticipated impact. The Risk Committee reviews the dashboard as needed, but not less than quarterly. These two dashboards show how an agency uses the continuous risk review cycle. The cycle allows leaders to treat risks until they are satisfied that a risk is under control or has been successfully treated or managed. The following examples illustrate how selected agencies are sharing information with internal and external stakeholders to identify and communicate risks, including how they have incorporated feedback on risks from internal and external stakeholders to better manage risks and shared risk information across the enterprise. This good practice most closely relates to Communicate and Report on Risks in the Essential Elements of Federal Government ERM shown in table 1. Effective information and communication are vital for an agency to achieve its objectives, and this often involves multiple stakeholders inside and outside the organization. ERM programs should incorporate feedback from internal and external stakeholders because their respective insights can help organizations identify and better manage risks. For example, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) are creating and sharing interagency risk information as part of their joint management of the Joint Polar Satellite System (JPSS) program. JPSS is a collaborative effort between NOAA and NASA; the program was created with the President’s Fiscal Year 2011 Budget Request to acquire, develop, launch, operate, and sustain three polar-orbiting satellites. The purpose of the JPSS program is to replace aging polar satellites and provide critical environmental data used in forecasting such weather events as the path and intensity of a hurricane and in measuring climate variations. These two agencies have a signed agreement, or memorandum of understanding, to share ownership of risk that details the responsibilities for delivering the satellite and overall cost and schedule performance. In particular, NOAA has overall responsibility for the cost and schedule of the program, as well as for the entire JPSS program. NOAA manages the ground segment elements needed for data collection and distribution, while NASA manages the system acquisition, engineering, and integration of the satellite, as well as the JPSS Common Ground System. Because of this management arrangement, the JPSS program also required “joint” risk tracking and management. Other program documentation also points to the agencies’ close collaboration on risk management. The March 2014 JPSS Risk Management Plan describes how risk management practices are planned for consistency with NASA’s risk management requirements and outlines roles and responsibilities. NOAA officials stated that they share programmatic and technical information across the two agencies and that certain high-level risks are elevated through Commerce quarterly. Our review of meeting agendas and presentations shows that NASA and NOAA officials met monthly as part of a NOAA-held Agency Program Management Council (APMC) to track JPSS’s progress and that of other satellite programs. These meetings also allowed participants to discuss and approve courses of action for top program risks.
During the APMC meetings, the JPSS program director presented status updates and other information, including risks. Participants discussed risks, cost, performance, schedule, and other relevant issues for each program. Sharing information helps promote trust within and outside of the organization, increases accountability for managing risks, and helps stakeholders understand the basis for identified risks and the resulting treatment plans. Further, internal and external stakeholders may be able to provide new expertise and insight that can help organizations identify and better manage risks. Both the NASA Program Managers and the NOAA Program Director, or their representatives, attend meetings to discuss potential issues, according to NOAA officials. Each major satellite program also has an independent Standing Review Board. At defined program/project milestones, the Standing Review Board reviews relevant data, writes up its conclusions, presents an independent review of the program/project, and highlights key risks to the convening authorities. NOAA officials said that having a joint risk-sharing process established for JPSS and other joint programs allows them to elevate risks both internally up through the agency and externally, more quickly and efficiently. For example, for another satellite program, NOAA had to reschedule its launch date because of a problem that arose with the launch service provider. After it became clear that the program was going to miss its schedule baseline, the issue was elevated up through NOAA. According to NOAA, NASA officials then explained to the APMC the steps they were taking to address the risk. As a result of having a process to elevate the risk, NOAA was able to discuss risks associated with the launch vehicle and how it planned to proceed with a new launch date range. According to NOAA officials, because the APMC discussion developed joint information, this information was available to pass on more quickly to Congress. When discussing potential risks, gathering input from across an enterprise helps to ensure that decisions work for all affected agency groups. It also gives groups an opportunity to share any concerns or ideas that can improve outcomes. Appropriate and timely sharing of information within an organization ensures that risk information remains relevant, useful, and current. The CFOC and PIC Playbook also notes that informed decision making requires the flow of information regarding risks and clarity about uncertainties or ambiguities—up and down the hierarchy and across silos—to the relevant decision makers so they can make informed decisions. For example, IRS uses the Risk Acceptance Form and Tool (RAFT), as shown in figure 10, to document business decisions within a consistent framework. As part of the RAFT development process, IRS considers the views of internal and external stakeholders. According to agency officials, the RAFT assists IRS business units in making better risk-based decisions and elevating risks to the appropriate level. IRS officials said the RAFT also encourages units to consider how decisions may affect other units, as well as external stakeholders. As a result, business units often collaborate on key decisions by completing the RAFT, including considering and documenting risks associated with those decisions. According to IRS officials, the RAFT is used as a guide to articulate the rationales behind decisions within the context of risk appetite and serves as a documentation trail to support these business decisions.
IRS officials told us that one goal of its ERM program is to look at risk across the enterprise rather than taking a narrow approach to risk management. This also applies when making risk-informed decisions, such as those that would be documented on a RAFT. As such, the RAFT includes the following instructions: “If the decision impacts or involves multiple organizations, coordinate with the respective points-of-contact to ensure all relevant information regarding the risk(s) are addressed in each section.” The form also allows users to identify other business units involved in the decision and external stakeholders affected by the decision. We provided a draft of this report to the Office of Management and Budget (OMB) and the 24 Chief Financial Officer (CFO) Act agencies for review and comment. OMB staff provided us with oral comments and stated they generally agreed with the essential elements and good practices identified in this report. They also provided technical comments that we incorporated as appropriate. We received written responses from the Social Security Administration (SSA) and the Department of Veterans Affairs (VA), reprinted in appendices II and III. SSA and VA neither agreed nor disagreed with our findings. However, VA stated that enterprise risk management should be monitored, at a minimum, as part of the quarterly reviews of Agency Priority Goals because of the high-level audience led by the Deputy Secretary, and it suggested that monitoring risks more frequently should be emphasized as a practice that most agencies should follow, among other things. SSA stated that it is adopting the good practices identified in the report. Of the remaining 22 CFO Act agencies, 10 provided technical comments, which we incorporated as appropriate; 10 had no comments; and 2 did not respond. We are sending copies of this report to the Director of OMB as well as to appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In addition to the individual named above, William M. Reinsberg, Assistant Director; Carole J. Cimitile, Analyst-in-Charge; Shea Bader, Crystal Bernard, Amy Bowser, Alexandra Edwards, Ellen Grady, Erin E. Kennedy, Erik Kjeldgaard, Robert Gebhart, Sharon Miller, Anthony Patterson, Laurel Plume, Robert Robinson, Cynthia Saunders, Stewart W. Small, Katherine Wulff, and Jessica L. Yutzy made major contributions to this report.
Federal leaders are responsible for managing complex and risky missions. ERM is a way to assist agencies with managing risk across the organization. In July 2016, the Office of Management and Budget (OMB) issued an updated circular requiring federal agencies to implement ERM to ensure that federal managers are effectively managing risks that could affect the achievement of agency strategic objectives. GAO's objectives were to (1) update its risk management framework to more fully include evolving requirements and essential elements for federal enterprise risk management, and (2) identify good practices that selected agencies have taken that illustrate those essential elements. GAO reviewed literature to identify good ERM practices that generally aligned with the essential elements and validated these with subject matter specialists. GAO also interviewed officials representing the 24 Chief Financial Officer (CFO) Act agencies about ERM activities and reviewed documentation, where available, to corroborate officials' statements. GAO studied agencies' practices using ERM and selected examples that best illustrated the essential elements and good practices of ERM. GAO provided a draft of this report to OMB and the 24 CFO Act agencies for review and comment. OMB generally agreed with the report. Of the CFO Act agencies, 12 provided technical comments, which GAO incorporated as appropriate; the others did not provide any comments. Enterprise Risk Management (ERM) is a forward-looking management approach that allows agencies to assess threats and opportunities that could affect the achievement of their goals. While there are a number of different frameworks for ERM, the figure below lists essential elements for an agency to carry out ERM effectively. GAO reviewed its risk management framework and incorporated changes to better address recent and emerging federal experience with ERM and to identify the essential elements of ERM, as shown below. GAO has identified six good practices to use when implementing ERM.
Many billions of tons of carbon in the form of carbon dioxide, a major greenhouse gas, are exchanged naturally each year between the atmosphere, the oceans, and vegetation on the land. Greenhouse gas levels in the atmosphere are determined by the difference between processes that generate greenhouse gases (sources) and processes that destroy or remove them (sinks). Oceans and forests are the primary natural sinks. Humans have affected greenhouse gas levels (primarily carbon dioxide) by introducing new sources—primarily by burning fossil fuels such as coal, oil, and natural gas—and by interfering with natural sinks—primarily by deforestation. Scientists have estimated, for example, that as a result of human activity, the concentration of carbon dioxide in the atmosphere has risen by almost 30 percent since industrialization began about 250 years ago. Among the nations of the world, the United States contributes the largest amount of carbon dioxide emissions from human activity. In a July 1997 report to the United Nations Framework Convention on Climate Change, the United States estimated that its carbon dioxide emissions from human activity in 1995 were about 5.2 billion metric tons. The United States also estimated that U.S. emissions of methane, another major greenhouse gas, from human activity were about 31 million metric tons (which is equivalent to about 650 million metric tons of carbon dioxide in global warming potential over a 100-year period). The emissions of these two greenhouse gases represent more than 95 percent of the total U.S. greenhouse gas emissions reported. The report also stated that the 1995 emissions levels had increased above 1990 levels by approximately 6 percent for carbon dioxide and approximately 4 percent for methane. Recognizing the potential for cost-effective greenhouse gas emissions reductions in other countries, the United States developed ground rules for a joint implementation program, formally known as the U.S. Initiative on Joint Implementation. Published in final form in June 1994, these ground rules established a pilot program, which is intended to evaluate possible approaches to joint implementation, including developing methods to measure and verify the projects’ achievements, and to help serve as a model for international consideration of joint implementation. Although participants in the pilot program do not receive formal credit for the emissions reductions achieved as a result of the pilot projects, they may receive public recognition for their efforts to combat climate change. Other motivating factors for some participants, according to Initiative officials and other studies of the joint implementation concept, include establishing operations or markets for their products in the host countries and the anticipation that their pilot projects will be eligible for credit after the year 2000, when the United Nations’ pilot ends. An interagency Initiative Evaluation Panel, cochaired by senior executives of the Department of Energy (DOE) and the Environmental Protection Agency (EPA), accepts projects into the program and is authorized to certify their net emissions reductions. The Evaluation Panel is supported by an interagency Secretariat, which manages the program’s day-to-day operations, including the implementation of the application and review procedures for project proposals. In 1997, the Secretariat was staffed by eight employees on detail from DOE and EPA. Five of these employees spent less than full time on the Initiative’s activities.
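The carbon dioxide equivalence cited above follows the standard global warming potential (GWP) conversion. As a worked check using only the report’s own figures, the implied 100-year GWP for methane is about 21:

```latex
% CO2-equivalent emissions from a mass of gas and its 100-year GWP:
E_{\mathrm{CO_2e}} = m_{\mathrm{gas}} \times \mathrm{GWP}_{100}
% Solving with the report's 1995 U.S. methane figures:
\mathrm{GWP}_{100} \approx \frac{650 \text{ million metric tons } \mathrm{CO_2e}}{31 \text{ million metric tons } \mathrm{CH_4}} \approx 21
```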
In addition to these employees, however, the Secretariat relies on expertise and contributions from staff in the other federal agencies that support the Initiative. The Initiative’s budget was $3.8 million in fiscal year 1996 and $2.6 million in fiscal year 1997. Under the Kyoto Protocol, negotiated in December 1997, the United States would be required to reduce its emissions of six greenhouse gases—namely, carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride—7 percent below its 1990 emissions level by 2012. The Kyoto Protocol also includes provisions for market-based approaches to reducing emissions of greenhouse gases. Such approaches include emissions trading, joint implementation, and the “clean development mechanism.” The philosophy behind these approaches is that the cost of reducing or capturing emissions varies among countries and that it is more efficient to seek the reductions where the cost is the least. Through the first six rounds of submissions, Initiative officials have used nine criteria and considered four other factors to determine which proposals to accept. The criteria primarily involve ways of measuring a project’s effect in reducing emissions and steps for verifying these reductions. One of the criteria also requires a project’s participants to provide annual reports to the Evaluation Panel on the emissions reduced or captured (sequestered) by the project. The four other factors involve whether the actions of U.S. participants and the host country support the objectives of the United Nations Framework Convention on Climate Change, the project’s potential effects on greenhouse gas emissions outside its boundaries, and its potential positive and negative effects apart from its effect on greenhouse gas emissions. The Initiative uses more criteria than do certain other countries with similar programs, and the U.S. criteria are stricter in some respects. When the pilot program was being developed, an interagency task force led by the State Department established criteria for determining which proposed projects would be accepted into the program. The criteria were developed to help ensure that proposed projects meet the development goals of the host country while providing greenhouse gas benefits beyond those that would have occurred in the absence of the project. Moreover, the criteria are intended to help ensure that the projects result in real, measurable net emissions reductions. An initial set of nine criteria was proposed in a Federal Register notice on December 17, 1993. Twelve organizations and individuals submitted comments on the proposed criteria. On the basis of these comments, the criteria were revised, and the final criteria were published in the Federal Register on June 1, 1994. These criteria have now been used for evaluating the six rounds of proposals considered through March 1998. Most of the nine criteria relate to identifying and measuring a project’s benefits. For example, one criterion asks whether the proposal provides enough information to determine the level of current and future emissions both with and without the project. A second asks whether the proposal contains adequate provisions for tracking the emissions reduced or sequestered. A third asks whether the proposal provides adequate assurance that the benefits will not be lost or reversed over time.
Other criteria relate to such matters as acceptance by the host country and annual reporting, including reporting of the greenhouse gas benefits as they are attained. Among the other four factors considered, one is whether the project has potential positive or negative effects on the host country’s employment and public health. (All nine criteria and the four other considerations used in the project evaluation process are paraphrased in app. I.) The U.S. Initiative generally uses more criteria than do certain other countries with similar programs, and its criteria are stricter in some respects than the criteria used in other countries’ programs, according to our analysis of a 1996 report prepared for the Agency for International Development. This report described the criteria of the U.S. Initiative and of similar programs in Australia, Canada, Germany, Japan, and the Netherlands. Our analysis of this information showed that the number of criteria used by the U.S. Initiative (nine) was equal to the number used by the Netherlands and larger than the number used by the other four countries (four to seven each). In addition, the U.S. criteria were stricter in some respects. For example, only the U.S. Initiative had requirements for maintaining benefits over time and for external verification of benefits. Conversely, two other countries—Germany and the Netherlands—had a criterion related to stimulating the use of modern technology or renewable energy. In a July 1996 report to the Secretariat of the United Nations Framework Convention on Climate Change, the Initiative said that its Evaluation Panel, which is responsible for accepting or rejecting project proposals for inclusion in its program, considers not only how a project measures against all criteria, but also how the project contributes to the pilot program. The report stated that while failure on any single criterion could keep a project from being approved, the panel may find relatively poor performance on one criterion to be outweighed by excellent performance on another. The report further stated that because the criteria were also being tested for their appropriateness, the Evaluation Panel did not use a single rigid approach to applying the criteria but remained flexible in their interpretation and application to each project. In our review of Initiative files, we found that 18 of the 32 projects accepted during the first six rounds had been accepted even though internal documentation indicated that the proposals were judged as not clearly meeting one or more of the nine criteria. For example, reviewers raised questions about a project involving the development and operation of a wind electricity-generating plant. The project review documentation noted that because the project had been under discussion since 1992, a year before the U.S. pilot program was announced, it was not clear that the project was initiated either in response to or in reasonable anticipation of the pilot program—one of the nine criteria for a project’s acceptance. The documentation also indicated that the project’s developers believed that acceptance of the project into the Initiative would better enable them to obtain the necessary funding for the project. The Evaluation Panel accepted this project. An Initiative official said that individual technical reviewers sometimes interpreted the criteria differently and came to different conclusions.
In such cases, the Initiative’s Secretariat labels these findings as “less than clear compliance” and requests that the Evaluation Panel make the judgment on a case-by-case basis. According to the Secretariat, when the Evaluation Panel accepts such projects, it believes that the criteria were adequately met. Of the 97 proposed projects submitted during six evaluation rounds, 32 projects have been accepted into the program. Of the accepted projects, 17 are designed to reduce emissions, and 15 are designed to sequester emissions. All but one of the projects are aimed at reducing or sequestering carbon dioxide emissions, while the other project is aimed at reducing methane emissions. Through the six rounds, a total of 119 proposals have been submitted, 22 of which have been submitted twice. Thus, 97 separate proposals have been submitted. Thirty applications were submitted in the first round. Thereafter, the number of applications declined steadily to five applications in the fourth round. Although the number of applications rebounded to 30 in round five, it declined again to 18 in the most recent round. Secretariat staff suggested some possible explanations for the variations in the number of project proposals submitted in the various rounds. The staff suggested that the two largest rounds (rounds one and five), which occurred immediately prior to the First and Third Conferences of the Parties to the United Nations Framework Convention on Climate Change, were the result of project developers’ expectations that international crediting of joint implementation projects might be negotiated at those sessions. The staff also suggested that the smallest number of proposals came in round four because it was the first round to occur after the Initiative increased the number of rounds conducted each year from one to three, which resulted in a short period of time (about 4 months) between rounds three and four. According to the Secretariat’s Director, in response to project developers’ expressed desires for a quicker turnaround process, the Initiative increased the frequency of its evaluation rounds by streamlining its application procedures. A total of 32 proposals have been accepted into the Initiative, including at least one proposal in each round. The proportion of proposals accepted increased from 23 percent in round one to 67 percent in round three. However, this proportion declined to 20 percent in round four and 7 percent in round five. Secretariat officials said that they had not attempted to determine a reason for this decline, but they pointed out that many of the proposals submitted for round five were found not to be complete. Our analysis showed that the project reviewers found that 19 of the 30 round-five proposals, or more than 60 percent, did not contain sufficient information to permit a complete evaluation. The proportion accepted in round six was about 22 percent. (See fig. 1, which compares proposals accepted with proposals not accepted by round; the round three (12/96) total includes three projects accepted in 2/97 and 3/97.) Of the 32 approved projects, 31 focus on carbon dioxide, while the other project focuses on methane. Seventeen of the approved projects are designed to reduce emissions. For example, a project in Costa Rica involves the construction and operation of a privately owned and operated hydroelectric plant. The electricity generated by this plant will displace electricity that would otherwise have been generated by burning fossil fuels, thus reducing carbon dioxide emissions.
The project that focuses on reducing methane emissions is located in the Russian Federation and will capture natural gas that is now escaping from a transmission and distribution system by sealing valves at two compressor stations. The other 15 approved projects are designed to capture carbon dioxide that is already in the atmosphere. For example, one project will preserve a tropical forest in Costa Rica by purchasing over 6,000 acres of privately owned land. Because, according to the project proposal, this forest land likely would have been either harvested or converted for agricultural use within the next 15 years, the greenhouse gas benefits for this project will accrue from preserving the existing trees. The 32 approved projects are located in 12 countries. Of these, the largest number, 16 (50 percent), are located in Central America. Another seven projects (22 percent) are located in Central and Eastern Europe, including the Russian Federation. The other nine projects (28 percent) are located in North America, specifically Mexico (four projects); South America (three projects); and Asia (two projects). A wide range of U.S. organizations are participating in the Initiative. These include private industry, environmental nongovernmental organizations, universities, and federal agencies. Private industry includes electric utility, oil, and other companies that have developed techniques to reduce greenhouse gas emissions. The nongovernmental organizations include the Center for Clean Air Policy, the National Fish and Wildlife Foundation, and The Nature Conservancy. The nongovernmental organizations provide funding in some cases, but more often they act as project facilitators. Of the seven projects approved in the first round in February 1995, five have been or are being implemented, and two have not yet started. Each of these projects has at least one U.S. participant; one project has seven. Five of the seven projects approved in the first round are reducing or sequestering emissions, according to information collected by the Initiative’s staff in March 1998. Of these five, two projects are intended to reduce emissions. In both of these cases, the facilities have been built and are now in operation. For example, a project in the Czech Republic involving several energy efficiency improvements at a district heating facility, including the conversion of a coal-burning plant to natural gas, was completed and became operational in September 1996. The other three projects are intended to sequester emissions. For these projects, one or more of the following processes have been completed: land has been purchased, surveys have been completed, and trees have been planted. For example, at one sequestration project in Costa Rica, land included in the project proposal and identified as being in danger of deforestation has been purchased and conveyed to Costa Rica’s national park service. The remaining two projects have not been implemented because of an inability to obtain financing, according to information provided to Initiative staff by these projects’ representatives in March 1998. These two projects include one intended to sequester emissions and one intended to reduce emissions. For one of these projects, a sequestration project located in Costa Rica, the host-country partners reported that they had not been successful in obtaining financing for either this project or another sequestration project approved in the Initiative’s second evaluation round.
However, the partners said that the affected forest area covered by these two projects would be absorbed into two other joint implementation projects: a U.S. Initiative project accepted in the fourth evaluation round in July 1997 and a Norwegian pilot joint implementation project. The partners further said that, for this reason, they planned to report that the two projects for which they have not obtained financing should no longer be listed as separate projects. According to Initiative staff, the developer of the other project that has not progressed is continuing efforts to obtain financing. This project is located in Honduras and is intended to reduce carbon dioxide emissions by providing for solar-based electrification in rural areas. The status as of March 1998 of each project accepted during the first evaluation round is shown in table 2. The seven projects accepted during the first round of evaluations had between one and seven U.S. participants. For example, the sustainable forest management project in Costa Rica had one U.S. participant, Wachovia Timberland Investment Management; the U.S. Initiative project that will absorb much of this project also has a single U.S. partner, Earth Council Foundation—U.S. Conversely, the Rio Bravo Carbon Sequestration Pilot Project in Belize has seven U.S. participants, including The Nature Conservancy, Wisconsin Electric Power Company, and Detroit Edison Corporation. Standard methodologies that can be used either to estimate a project’s greenhouse gas benefits or to certify a project’s net emissions benefits are being developed. Based on information provided by the projects’ developers, the total estimated greenhouse gas benefits for the 32 projects accepted into the Initiative as of March 1998 are equivalent to about 235 million metric tons of carbon dioxide over a period of up to 60 years. Although the Initiative reviews the methods, data, and assumptions project developers use to estimate net greenhouse gas benefits, it does not attest to the validity of those estimates. The Initiative does have responsibility, however, for monitoring and verifying emissions reductions as they are attained. As of the latest reporting date (July 1997), only one of the 25 projects then accepted into the Initiative had reported emissions reduction benefits. According to the Initiative staff, it has not yet verified these reported emissions reductions, partly because no standard methods for determining greenhouse gas benefits specific to joint implementation projects have been developed. EPA, as part of its role in providing support to the Secretariat, is funding studies of several issues related to determining emissions benefits. One objective of EPA-funded research is to develop standard methodologies. The 32 projects accepted into the Initiative are projected to yield benefits over time periods as short as 3 years (for one wind power generation project and one forest preservation project) and as long as 60 years (for two reforestation projects). Based on the project developers’ estimates, these 32 projects will reduce greenhouse gases by more than 200 million metric tons of carbon dioxide and 1.3 million metric tons of methane (1.3 million metric tons of methane is equivalent, in terms of global warming potential, to about 31 million metric tons of carbon dioxide).
Of the total net greenhouse gas benefits, equivalent to approximately 235 million metric tons of carbon dioxide, about 65 million tons, or 28 percent, are attributed to emissions reduction projects, while the remaining 170 million tons, or 72 percent, are attributed to sequestration projects. For example, one project in Nicaragua involves constructing and operating a flash-steam power generation facility, using the country’s abundant geothermal resources, that will emit only small amounts of carbon dioxide. According to the latest project report, this facility will displace an equivalent-size facility using fossil fuels and is expected to reduce carbon dioxide emissions by about 14 million metric tons over about 38 years. Similarly, a sequestration project in Ecuador involves purchasing about 5,000 acres of tropical forest that will be incorporated into a newly created reserve. According to the project’s developers, by preventing the conversion of these lands to marginal cropland and cattle pasture, which was expected to occur over the next 3 years, the project will result in net greenhouse gas benefits of more than 1 million tons of carbon dioxide. Although the Initiative reviews, as part of the proposal review process, the methods, data, and assumptions that the project developers used to develop their estimates, it does not attest to their validity. As of the last reporting period (July 1997), only one accepted project—a project that combines land acquisition and a sustainable forestry program to achieve emissions reductions through forest growth—had reported greenhouse gas benefits. The emissions reductions reported for this project were 807,468 metric tons of carbon dioxide a year for calendar years 1995 and 1996. The project developers for another four implemented projects reported to the Initiative staff in March 1998 that their projects were in operation and achieving greenhouse gas benefits but pointed out that the benefit data they provided at that time were estimates because either detailed monitoring results were not available or the monitoring results had not been verified. According to the Initiative’s deputy director, these reductions are likely to be reported in the 1998 annual report. Although the Initiative’s ground rules state that the Evaluation Panel is responsible for certifying the greenhouse gas benefits estimated for the projects, the Initiative staff said that it does not currently verify reported emissions reductions. The staff acknowledged that it has neither provided standard monitoring guidance to projects nor reviewed the monitoring plans for most projects but recognizes that its efforts in these areas need to be strengthened. The staff attributed its limited progress in these areas to the small number of projects that are now either funded or implemented and to the absence of standard methods for determining greenhouse gas benefits specific to joint implementation projects. The staff also said that, before certifying reported emissions reductions, it was waiting for the completion of the EPA-sponsored research that will provide guidelines for the development of monitoring plans and verification methods. EPA is funding research to develop standard methods for quantifying emissions benefits. Recently completed studies focused on implementing uniform reporting formats for the pilot projects (compatible with a reporting form used by the United Nations Framework Convention on Climate Change for its pilot program) and refining ways to measure greenhouse gas emissions from projects.
Currently under way is a study to examine various aspects of project baselines (to estimate what would have happened if the pilot project had not been implemented) and emissions additionality (to help ensure that project benefits are in addition to what would otherwise have happened). In the context of the pilot program, additionality refers to project acceptance criteria that are designed to ensure that the financing of a proposed project would not have occurred otherwise, called financial additionality, and that the associated reduction in emissions would likewise not have occurred, called emissions additionality. Some phases of the research have been completed and are undergoing review, while other phases are continuing. According to EPA officials, standard methods for estimating emissions reduction benefits would help to move the program from its current pilot phase to a fully implemented program with credible reductions. The officials were not able to say how long the development of the standard methods might take, but the current studies being funded by EPA are to be completed during this fiscal year. An EPA official also said that the agency is currently funding research on methodologies for monitoring and plans to fund research on methodologies for verification in the future. (App. II provides additional information about efforts to develop standard methods.) We provided a draft of this report to the Director of the Joint Implementation Secretariat and the Administrator of EPA for review and comment. The Secretariat’s Director said that the report is generally a balanced assessment of the Initiative, with a useful analysis of the projects and the consideration of those projects by the Initiative’s Secretariat and Evaluation Panel. (The Secretariat’s comments and our responses appear in app. III.) The Director also suggested technical corrections to the draft report, which were incorporated as appropriate. EPA’s Office of Economy and Environment, within its Office of Policy, Planning and Evaluation, also suggested technical corrections, which were incorporated as appropriate. To accomplish our objectives, we interviewed officials of the Initiative’s Secretariat, EPA, and the Department of State. At the Secretariat offices, we obtained and reviewed information pertaining to the Initiative’s project evaluation process, including policy memorandums, technical review summaries of project proposals, and decision memorandums prepared to assist the Evaluation Panel with its decision-making process. At EPA, we obtained and reviewed information related to its efforts to develop standard methods for measuring greenhouse gas emissions and for estimating projects’ emissions reduction benefits. At the Department of State, we obtained information on the development of the ground rules for the U.S. pilot program and public comments on notices published in the Federal Register. We limited our work on the third objective (relating to the status of approved projects) to the projects approved in the first round because those projects had had the longest time to develop. This information was obtained by reviewing the latest annual reports prepared by the participants in the accepted projects. The Secretariat staff assisted us in obtaining information from the project participants when information contained in the reports was not clear. We did not independently verify the information provided by the Secretariat. We also reviewed available documents about the joint implementation concept, the U.S. 
Initiative, and the United Nations’ pilot program. We conducted our review from September 1997 through June 1998 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce the report’s contents earlier, we plan no further distribution of this report for 15 days. At that time, we will send copies to the appropriate congressional committees, the Director of the Secretariat, and the Administrator of EPA. We will also make copies available to others upon request. Major contributors to this report were David Marwick; Stacy L. Morgan; William H. Roach, Jr.; and Robert D. Wurster. If you have any questions or need additional information, please call me at (202) 512-6111.

These criteria and other considerations were published in the June 1, 1994, Federal Register (Vol. 59, No. 104, pp. 28445-28446). They are paraphrased below. The nine criteria:

1. Is the project acceptable to the government of the host country?
2. Does it involve specific measures to reduce or sequester greenhouse gas emissions initiated as a result of the U.S. Initiative on Joint Implementation or in reasonable anticipation of the Initiative?
3. Does it provide data and methodological information sufficient to establish a baseline of current and future greenhouse gas emissions, both with and without the project?
4. Will it reduce or sequester greenhouse gas emissions beyond those without the project, and, if the project is federally funded, is it or will it be undertaken with funds in excess of those available for such activities?
5. Does it contain adequate provisions for tracking the greenhouse gas emissions reduced or sequestered as a result of the project and, on a periodic basis, for modifying such estimates and comparing actual results with original projections?
6. Does it contain adequate provisions for external verification of the greenhouse gas emissions reduced or sequestered by the project?
7. Does it identify any associated non-greenhouse-gas environmental impacts and benefits?
8. Does it provide adequate assurance that the greenhouse gas emissions reduced or sequestered will not be lost or reversed over time?
9. Does it provide for annual reports to the Evaluation Panel on the emissions reduced or sequestered and on the share of such emissions attributed to each domestic and foreign participant, pursuant to the terms of the voluntary agreement among the project’s participants?

The other considerations:

1. Does the project have a potential to lead to changes in greenhouse gas emissions outside the project’s boundaries?
2. Apart from the project’s effect on greenhouse gas emissions, does the project have any potential positive and negative effects on factors such as local employment and public health?
3. Are U.S. participants who are emitting greenhouse gases within the United States taking measures to reduce or sequester those emissions?
4. Does the host country have efforts under way to (1) ratify the United Nations Framework Convention on Climate Change, (2) develop a national inventory and/or baseline of greenhouse gas emissions and sinks, and (3) reduce its emissions and enhance its sinks of greenhouse gases?

The awarding of credit for joint implementation projects’ results is a basic distinction between the current pilot program and a fully developed program. Under a fully developed program, investors in an approved project could receive credit for that project’s results—greenhouse gas emissions reduced or sequestered—and thus offset their own greenhouse gas emissions. 
To help ensure that credits are awarded only when warranted, standard methods are being developed for estimating a project’s emissions reduction benefits and for measuring greenhouse gas emissions. Tracking a project’s side effects (e.g., its impact on the local economy) is also important. In support of the pilot program, the Environmental Protection Agency’s (EPA) Office of Policy is sponsoring studies of these issues, and it currently has a contract and an interagency agreement for further studies. EPA officials said that they expect these studies to help ensure that emissions reductions are properly identified and reported; to gain international approval of the joint implementation concept, including the clean development mechanism provisions of the Kyoto Protocol; and to move the joint implementation concept from its current pilot phase into full implementation. One key issue currently being studied is estimating a project’s emissions reduction benefits. In the context of joint implementation, “additionality” is the term used to describe the project acceptance criteria that are designed to ensure that the proposed project’s financing and abatement of greenhouse gas emissions would not have occurred otherwise. Additionality, however, has meaning only relative to an alternative reference point. Determining that reference point requires project developers to construct a hypothetical baseline. For example, as evidence of emissions additionality, project proposals must present a reference case, which projects emissions levels without the project, and a project case, which estimates emissions levels with the project. In this example, the emissions additionality is the difference between the emissions levels without the project (the hypothetical baseline) and the emissions levels with the project. An EPA contractor, ICF, Inc., has completed a report analyzing how the pilot program has evaluated additionality; the report is currently undergoing peer review. By the end of June 1998, the contractor is expected to review assumptions about emissions made in the reference case scenario and project case scenario for selected approved projects. In addition, the contractor is expected to develop comprehensive guidelines for developing reference case and project case scenario emissions for greenhouse gas mitigation projects. EPA officials said that they will use this study, along with the results of other studies, to determine whether a credible, fair, transparent, and consistent approach to establishing project baselines and determining project additionality can be developed. A second key issue currently being studied relates to standardized methods for monitoring, evaluating, reporting, and verifying greenhouse gas emissions benefits. Through an interagency agreement with EPA, the Lawrence Berkeley National Laboratory in Berkeley, California, is expected to complete an assessment of these issues by the end of September 1998. Specifically, the laboratory is to develop comprehensive guidelines for monitoring and evaluating projects. These guidelines are to incorporate such principles as cost-effectiveness, transparency, simplicity, technical soundness, and internal consistency. According to the agreement, these guidelines should also be capable of being used by an independent organization for verifying a project’s benefits. Finally, the laboratory is to identify and develop methods for monitoring environmental, socioeconomic, and other benefits associated with a project. 
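To make the reference case/project case arithmetic described above concrete, the following is a minimal sketch in Python. The emissions figures are entirely hypothetical illustrations, not data from any accepted project:

```python
# Minimal sketch of the emissions-additionality arithmetic described above.
# All figures are hypothetical; no data from any accepted project are used.

# Reference case: projected annual emissions (metric tons of CO2) without the
# project, i.e., the hypothetical baseline.
reference_case = [100_000, 102_000, 104_000]
# Project case: estimated annual emissions with the project in place.
project_case = [60_000, 61_000, 62_000]

# Emissions additionality for each year is the baseline minus the emissions
# level with the project.
annual_additionality = [ref - proj for ref, proj in zip(reference_case, project_case)]
total_benefit = sum(annual_additionality)

print(annual_additionality)  # [40000, 41000, 42000]
print(total_benefit)         # 123000 metric tons of CO2 over the 3-year horizon
```

The arithmetic itself is straightforward; as the discussion above indicates, the hard part is constructing a credible reference case, since the baseline is a counterfactual that can never be observed directly.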
Such environmental, socioeconomic, and other benefits could include the effect on local economic conditions and on air quality and other environmental indicators. The following are GAO’s comments on the letter from the U.S. Initiative on Joint Implementation dated June 3, 1998.

1. Prior to the Kyoto Protocol, the term “joint implementation” generally was used to describe all projects that were sponsored by developed countries and that were located, and intended to reduce emissions, in another country. The Protocol established the “clean development mechanism” for projects located in developing countries and distinguished them from projects located in developed countries. The Secretariat suggested that we cite the clean development mechanism in the opening paragraph of this report. Because projects accepted into the Initiative, including those accepted in March 1998 (subsequent to the Protocol), are located in both developing countries and developed countries, in this report we use the term “joint implementation” in its more general, pre-Protocol context. However, we have differentiated between these terms (joint implementation and clean development mechanism) in footnote 9.
2. For increased readability, we have used the word Initiative rather than the acronym USIJI when referring to the U.S. Initiative on Joint Implementation. This is explained in footnote 6 of this report.
3. This information appeared in the draft report provided to the Secretariat for comment and, in our judgment, belongs on page 14 of this report.
4. This information appeared in the draft report and, in our judgment, belongs on page 5 of this report.
5. This information appeared in the draft report and, in our judgment, belongs on page 6 of this report.
6. The draft report provided to the Secretariat for comment discussed the differences of interpretation of the criteria. We added footnote 11 to this report to provide additional information on the nature of the areas of “less than clear compliance” with the criteria as reported by the Secretariat in its comments.
7. The draft report discussed the increase in the number of evaluation rounds conducted each year as a reason for the small number of proposals submitted for evaluation in round 4. Based on the Secretariat’s comments, we also included information on the reason for the change in the number of evaluation rounds the Initiative conducts each year.
8. The draft report provided in the text information on the frequency of the evaluation rounds conducted, and the dates of each evaluation round were provided in the table. Therefore, an additional note to the table is not necessary.
9. We determined it was not necessary to list the Initiative’s criteria verbatim in the report. However, in response to the Secretariat’s comments, we added an introductory statement to appendix I indicating that we have paraphrased the criteria and other considerations used by the Initiative’s Evaluation Panel in evaluating proposals.
Pursuant to a congressional request, GAO reviewed selected aspects of the U.S. Initiative on Joint Implementation, intended to encourage investments by U.S. entities in projects to reduce greenhouse gas emissions outside the United States, focusing on the: (1) criteria used to accept proposed projects; (2) number and types of projects accepted; (3) status of the seven projects accepted in the first round of proposals in February 1995; and (4) estimated benefits of pilot projects in terms of emissions reductions. GAO noted that: (1) the Initiative's Evaluation Panel uses nine criteria to evaluate proposed projects for acceptance into the program; (2) among the criteria are acceptance by the host country, a reduction in greenhouse gases that would result from the proposed project and that would not have occurred otherwise, and a mechanism to verify the project's results; (3) the U.S. program generally has more criteria than similar programs administered by certain other countries; (4) also, the U.S. criteria are stricter in some respects, for example, by requiring that benefits be maintained over time; (5) through March 1998, Initiative officials had reviewed proposals for 97 different projects and accepted 32 of them; (6) of the 32 accepted projects, 17 involve reducing greenhouse gas emissions, for example, by constructing and operating a hydroelectric plant that will provide electricity previously produced by burning fossil fuels; (7) the other 15 involve capturing greenhouse gases already emitted; (8) also, 31 of the 32 projects are intended to reduce emissions of or capture carbon dioxide; the other project is intended to reduce methane emissions; (9) of the seven projects accepted into the Initiative as a result of the first round of evaluations in February 1995, five are in the process of being implemented; (10) this means that land has been acquired or facilities have been built, and the projects are in the process of reducing or capturing greenhouse gas emissions; (11) according to Initiative officials, as of March 1998, the remaining two projects--one that would reduce greenhouse gas emissions and one that would capture these emissions from the atmosphere--had not progressed because their developers had not been able to obtain financing; (12) the projects' developers estimate that, over a period of up to 60 years, the 32 approved projects, if fully funded and implemented, will result in net emissions reductions of about 200 million metric tons of carbon dioxide and 1.3 million metric tons of methane; (13) Initiative staff do not verify or attest to the reliability of the net greenhouse gas benefits estimated by the projects' developers; (14) in part, this is because standard methods for estimating projects' emissions reduction benefits specific to the U.S. Initiative have not been developed; (15) the Environmental Protection Agency (EPA) has funded studies to develop standard methods for calculating projects' benefits; and (16) according to EPA officials, these studies should be completed by the end of fiscal year 1998.
American Samoa, CNMI, Guam, Puerto Rico, and the U.S. Virgin Islands are five territories of the United States. With the exception of Puerto Rico, the territories’ populations are small relative to those of the states and are generally poorer. Within broad federal guidelines and under federally approved plans, territories have some discretion in setting Medicaid and CHIP eligibility standards and provider payment rates; determining the amount, scope, and duration of covered benefits; and developing their own administrative structures. For example, similar to the states, unless they have obtained a waiver, the territories’ Medicaid programs are required to cover certain benefits—known as mandatory Medicaid benefits—and can choose to cover additional benefits, known as optional benefits. While the states also have similar discretion, the territories have been afforded greater flexibility, including the ability to set their own income eligibility levels for certain populations and determine income eligibility using a locally established poverty level instead of the federal poverty level (FPL). Also like the states, territories can operate their CHIP programs as a separate program, include CHIP-eligible children in their Medicaid program, or use a combination of the two approaches. Significant differences exist in how Medicaid and, to a lesser extent, CHIP are funded in the territories as compared with the states. For example, the federal matching rate for states’ Medicaid programs, the Federal Medical Assistance Percentage (FMAP), is based on a state’s per capita income in relation to the national per capita income, with poorer states receiving higher federal matching rates than wealthier states. In contrast, the Medicaid FMAP for the territories does not recognize their capacity to pay for program expenses. Although PPACA increased the territories’ FMAP from 50 to 55 percent, this percentage is fixed at the lower end of the range available to states. For the CHIP program, the federal government matches states’ and territories’ program spending at a rate higher than Medicaid, known as the enhanced FMAP. However, the territories’ matching rate for CHIP spending is similarly fixed at the lower end of the range available to the states. Additionally, federal Medicaid funding in states is not subject to a limit, provided the states contribute their share of program expenditures for services provided. In contrast, federal Medicaid funding in each territory is subject to a statutory cap. In general, once their Medicaid and CHIP funding is exhausted, territories must assume the full costs of their programs. These funding differences, along with differences in the costs of health care in the territories compared with the states, have contributed to lower federal and territory Medicaid program expenditures in the territories. For example, in the aggregate, total Medicaid expenditures in all five territories comprised less than one half of one percent of the total national Medicaid expenditures in fiscal year 2014. However, when examined separately, Puerto Rico had Medicaid enrollment and expenditures similar to some states. Specifically, in fiscal year 2014, Puerto Rico ranked 11th in Medicaid enrollment nationally and ranked 42nd in total Medicaid expenditures. Like the states, territories must report their quarterly Medicaid and CHIP expenditures on the CMS-64 no later than 30 days after the end of each quarter; CMS uses these reports to reimburse them for the federal share of those expenditures. 
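The funding mechanics described above (a fixed matching rate combined with a statutory cap) can be illustrated with a short sketch. The 55 percent rate below is the territories’ Medicaid FMAP under PPACA; the spending level and cap are hypothetical figures chosen for illustration, not actual territory amounts:

```python
# Sketch of how a fixed FMAP interacts with a capped allotment. The 0.55 rate
# is the territories' Medicaid FMAP under PPACA; the spending and cap figures
# below are hypothetical.

FMAP = 0.55

def federal_share(total_spending, allotment_cap):
    """Federal reimbursement is FMAP x spending, but never exceeds the cap."""
    return min(FMAP * total_spending, allotment_cap)

spending = 400_000_000  # hypothetical annual Medicaid program spending
cap = 150_000_000       # hypothetical statutory cap on federal funds

share = federal_share(spending, cap)
print(share)                    # 150000000.0 -- the cap binds
print(FMAP * spending - share)  # 70000000.0 in matching funds forgone; the
                                # territory bears this on top of its 45% share
```

Once the cap binds, each additional dollar of program spending is borne entirely by the territory, which is consistent with the observation below that territories historically exhausted their federal funds partway through the fiscal year.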
In recent years, legislation to provide temporary increases in Medicaid and CHIP funding has been enacted. For example, the American Recovery and Reinvestment Act of 2009 (Recovery Act) provided the territories with a 30 percent increase in their Medicaid caps from fiscal year 2009 through the first quarter of fiscal year 2011, as well as federal matching funds to encourage Medicaid providers to undertake health information technology (HIT) initiatives. Most recently, PPACA appropriated $7.3 billion in additional Medicaid funding to the territories, the majority of which is available through fiscal year 2019. According to CMS officials, this funding can be used once territories expend their Medicaid and CHIP funding each year. PPACA also permanently increased territories’ Medicaid FMAPs and CHIP enhanced FMAPs to 55 percent and 68.5 percent, respectively. Federal law generally requires state, territory, and federal entities to ensure program integrity by protecting the Medicaid and CHIP programs from fraud, waste, and abuse. Like the states, territories have primary responsibility for such program integrity because they enroll providers, establish payment policies, process claims, and pay for services furnished to beneficiaries. To execute this responsibility, territories may undertake a variety of efforts. For example, although not required, they can establish program integrity units, which are tasked with identifying and recovering improper payments. Territories, like the states, are also required to implement certain program integrity mechanisms or receive an exemption from CMS for doing so. For example, territories must establish Medicaid Fraud Control Units (MFCU), which are tasked with investigating Medicaid fraud and other health care law violations, or obtain an exemption from CMS from this requirement. The territories are also required to implement a Medicaid Management Information System (MMIS), a claims processing and information retrieval system with capabilities for reporting claims data and enrollee encounter data and for conducting pre- and post-payment reviews. Such information can assist in identifying improper payments. Federal mechanisms are also available to assist in program oversight. For example, CMS can conduct comprehensive or focused program integrity reviews, which assess the effectiveness of state and territory program integrity efforts, including compliance with federal statutory and regulatory requirements. Further, through the Payment Error Rate Measurement (PERM) program, CMS requires states to estimate improper payments in the Medicaid and CHIP programs to identify program vulnerabilities and actions to reduce improper payments; however, the agency has excluded the territories from this program. Additionally, OMB’s annual A-133 single audits examine internal controls and compliance in certain federal programs, including Medicaid and CHIP, and can be a resource to inform program oversight. Due to the flexibility territories have in administering their Medicaid and CHIP programs, the territories’ program eligibility and benefits not only reflect their unique circumstances, but also differ from one another and from the states. For example, a notable distinction among territories’ program eligibility is that Puerto Rico is the only territory that uses its CHIP funds to cover additional children in its Medicaid program whose income levels exceed Medicaid eligibility levels. 
The other four territories use their CHIP funds to pay for services provided to children up to the age of 19 in their Medicaid programs. Additionally, Guam, Puerto Rico, and the U.S. Virgin Islands base program eligibility on local poverty levels (LPL) that are more restrictive than federal standards, which has resulted in lower program enrollment than would otherwise be the case. Further, unlike the states and other territories, American Samoa does not determine eligibility for its Medicaid program on an individual basis. Instead, it presumes that all individuals with incomes at or below 200 percent of the FPL are eligible. The different methods territories use to determine eligibility affect Medicaid enrollment in each territory, with the estimated percentage of territories’ populations enrolled in Medicaid in fiscal year 2015 ranging from about 17 percent in the U.S. Virgin Islands to 88 percent in American Samoa. (See table 1.) Territories also vary in terms of the range of benefits covered by their respective Medicaid programs. Specifically, Guam covers all of the 17 mandatory Medicaid benefits; CNMI and the U.S. Virgin Islands cover nearly all of the benefits; and American Samoa and Puerto Rico cover 10 of the 17 benefits. American Samoa and CNMI operate their Medicaid programs under broad waiver authority under section 1902(j) of the Social Security Act and, therefore, are not required to cover all mandatory benefits. While the other territories do not operate under this broad waiver authority, CMS acknowledged that the agency has not required them to cover all mandatory Medicaid benefits, citing limited federal Medicaid funding and the unavailability of certain services. Examples of the mandatory benefits most commonly covered by all five territories include Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) services for individuals under 21; inpatient hospital services; outpatient hospital services; and physician services. In contrast, the territories’ coverage of other benefits, such as nursing facility and rural health clinic services, was less widespread, and only Guam covered freestanding birth center services. (See fig. 1.) Officials from the four territories that do not cover all mandatory Medicaid benefits cited multiple reasons for not doing so, including limited funding and a lack of infrastructure. In particular, officials from Puerto Rico and American Samoa said that their programs do not cover nursing facility services due to insufficient funding and because they do not have nursing homes; CNMI officials noted that its program does not cover freestanding birth center services because there are no such facilities in the territory; and, due to the lack of available providers, certain specialty services covered by American Samoa, CNMI, and Guam are only available off-island. For example, in CNMI, most cardiac, orthopedic, chemotherapy, and radiation services are only available off-island; in Guam, pediatric oncology, hematology, dermatology, and procedures such as cardiac bypass surgery are only available off-island. In addition to mandatory Medicaid benefits, each territory has chosen to cover optional benefits, with all five territories providing coverage for outpatient prescription drugs, clinic services, dental services and eyeglasses, prosthetics, physical therapy, and rehabilitative services. 
Optional services commonly covered by states—such as targeted case management, personal care services, and intermediate care facilities for individuals with intellectual disabilities—are not covered by any of the five territories. (See fig. 2.) The recent temporary increases in federal funding have enabled the territories to increase Medicaid and CHIP spending and avoid federal funding shortfalls. Most notably, PPACA’s appropriation of an additional $7.3 billion in Medicaid funding for the territories—available for expenditure through at least fiscal year 2019—provided them flexibility in terms of when they choose to draw down the additional funds. For example, between fiscal year 2010, when PPACA funds were not available, and fiscal year 2014, when they were, the average annual percentage change in total Medicaid and CHIP spending in CNMI and Guam was 23 percent and 19 percent, respectively, with total spending in these territories more than doubling over that period. (See table 2.) Prior to the availability of these temporary funds, the territories often exhausted their Medicaid funds anywhere from the first through the third quarter of each fiscal year, and generally utilized all of their CHIP funding each year. The territories used various strategies to address these federal funding shortfalls. For example, Puerto Rico officials said that prior to the PPACA funding, the federal Medicaid funds covered only 16 percent of their planned annual expenditures and were expended during the first quarter of the federal fiscal year, after which time the territory had to rely entirely on local funding to cover program spending. Further, a CNMI official said it was normal for the territory’s providers to deliver services in one year and be paid the following year. In addition to all five territories avoiding federal funding shortfalls, officials in three of the territories noted that these temporary funds have allowed them to improve their programs by covering more benefits, enrolling more providers, or both. For example, American Samoa officials said they plan to use some of their PPACA funds to pay for services provided by new providers, thereby expanding access to services beyond the island’s only hospital. Puerto Rico officials said they used some of their PPACA funds to add coverage for certain organ transplants, which, according to CMS officials, the territory must cover due to other changes in law enacted under PPACA. Despite the influx of temporary PPACA funding, territories may nonetheless experience funding shortfalls in the near future, according to CMS and territory officials. Specifically, certain territories may exhaust their PPACA funding before the end of fiscal year 2019, as there are no restrictions on the rate at which territories may access their allotted funds. For example, CNMI and Puerto Rico, which used 49 percent and 56 percent of their allotments, respectively, between fiscal years 2011 and 2015, are spending these temporary funds at a rate that could deplete their allotments early, as the amount they have spent has increased each year. (See table 3.) While the rate of expenditures to date may not reflect future spending rates, some territory officials expressed concerns about the temporary availability of the PPACA funds and the fact that their capped allotments will be reduced to pre-PPACA levels beginning in fiscal year 2019, or earlier if they expend the PPACA funds before 2019. 
As a result, the territory officials noted that the territories may run out of the temporary funding early, have to make program cuts once the funding is exhausted, or both. For example, Puerto Rico Medicaid officials said they determined they could exhaust their entire PPACA allotment as early as fiscal year 2017. Additionally, officials from Puerto Rico and Guam expressed concern that they may need to restrict eligibility or reduce benefits once the PPACA funding is exhausted. Territory and federal oversight efforts provide little assurance that the territories’ Medicaid and CHIP funds are protected from fraud, waste, and abuse. Citing limited resources, territory officials acknowledge a general lack of program integrity efforts. Further, federal officials cite the territories’ smaller Medicaid expenditures as the reason for limiting federal program integrity efforts to technical assistance. Although the territories have primary responsibility for Medicaid program oversight, limited assurance exists that they are identifying and recovering improper payments or investigating fraud in their Medicaid programs. With the exception of Puerto Rico, the territories have not established program integrity units, which are dedicated to identifying and reducing improper payments. Although Medicaid agencies are not required to establish program integrity units, the lack of a separate entity is counter to internal control standards regarding segregation of key duties and responsibilities for reducing the risk of error and fraud. Specifically, in four of the territories, the Medicaid Director is responsible for program oversight, including program integrity efforts, according to CMS officials. This lack of segregation of key duties and responsibilities could be remedied through the establishment of a program integrity unit or other division of labor. According to CMS officials, the territories have not established separate program integrity units because they lack adequate funding and personnel to do so, and funds spent on such an oversight effort would reduce the amount of funds available for the provision of health care services. Further, an American Samoa official said the territory is very interested in undertaking program integrity efforts, but is unable to hire additional staff to do so because of budgetary constraints. Although Puerto Rico has a program integrity unit, according to Puerto Rico officials, this unit’s responsibilities are limited to eligibility fraud and to acting as a liaison on provider fraud concerns with the Administración de Seguros de Salud de Puerto Rico (ASES), the Puerto Rico government entity that manages managed care organization (MCO) contracts. ASES delegates primary responsibility for program integrity efforts to the MCOs and requires them to have policies and procedures for the identification, investigation, and referral of suspected fraud. Both we and the HHS-OIG have previously reported concerns that MCOs might not have an incentive to identify and recover improper payments. For example, as we previously reported, officials from state program integrity units noted that they believed MCOs were not consistently reporting improper payments in order to avoid appearing vulnerable to fraud and abuse. In this same report, state program integrity unit officials also noted a potential conflict of interest for MCOs because reporting improper payments could reduce their future federal funding. 
In addition to the general absence of program integrity units, none of the territories has established an MFCU—a unit that investigates and prosecutes Medicaid fraud and other health care law violations—or obtained from CMS an exemption from the requirement to establish one. According to CMS officials, territories have not established MFCUs because the costs associated with establishing them count against the territories’ capped Medicaid allotments and would reduce the funds available for providing services. Further, Puerto Rico officials told us they had considered developing an MFCU, but decided against it after learning that the funds used to develop it would reduce funds for services. These officials said they made this decision despite knowing that an MFCU could eventually be cost effective because they believed they could not afford the initial investment. While establishing an MFCU may not make sense given the size and spending of the territories’ programs, territories that do not have one are required to demonstrate that minimal fraud exists in their programs. The territories’ incomplete service-level expenditure reporting also contributes to limited assurance of Medicaid program integrity in the territories. Specifically, the limited detail on the types and volume of services provided in the territories can hinder program integrity efforts, including making it difficult to identify potential fraud, waste, and abuse. As with states, different reporting requirements exist for fee-for-service and managed care spending in the territories. According to CMS officials, the health care delivery systems in American Samoa, CNMI, Guam, and the U.S. Virgin Islands are entirely fee-for-service, and therefore these territories are required to report service-level spending on the CMS-64. CMS officials cited the CMS-64 as the only data source for Medicaid and CHIP spending in these territories, underscoring the importance of accurate service-level expenditure reporting for territories’ program integrity efforts. However, we reviewed the territories’ Medicaid spending for fiscal year 2014 and found that none of the territories had reported service-level spending for all the Medicaid benefits they covered. Specifically, for the benefits we reviewed, American Samoa, CNMI, and Guam reported service-level spending for 24 percent, 55 percent, and 63 percent, respectively, of the Medicaid benefits they covered. (See table 4.) This limited reporting is the result of various circumstances. For example, Medicaid enrollees in American Samoa are served by a single hospital that reports costs for only three mandatory benefits—inpatient hospital services, outpatient hospital services, and emergency services for certain legalized aliens and undocumented aliens. With regard to managed care, Puerto Rico’s Medicaid managed care program, which provides coverage to all Medicaid and CHIP enrollees, is not subject to service-level reporting requirements. However, under their contracts with Puerto Rico Medicaid, the MCOs in Puerto Rico are required to submit encounter data to ASES. Although these data could provide insight on service-level utilization, CMS officials told us they do not collect or review these data on a regular basis. 
With regard to program oversight, CMS’s general practice has been to conduct comprehensive program integrity reviews in all of the states; however, of the five territories, it has conducted such reviews only in Puerto Rico, the most recent of which was released in January 2012 and produced multiple findings. CMS officials told us they are switching from comprehensive to more focused program integrity reviews in the states and plan to conduct such a focused review of Puerto Rico in 2016. Citing the other territories’ smaller Medicaid expenditures, however, CMS has neither conducted similar reviews of their Medicaid programs, nor does it plan to conduct more focused program reviews. While Medicaid spending in the territories is small as a proportion of total Medicaid spending, such limited federal oversight efforts provide little assurance that Medicaid is protected from fraud, waste, and abuse, and are inconsistent with federal internal control standards regarding the identification, analysis, and response to relevant risks as part of achieving program objectives. Given that governmental, economic, industry, regulatory, and operating conditions continually change—such as when PPACA significantly increased territory Medicaid funding—mechanisms should be provided to identify and manage any special risks prompted by such changes in program conditions. Additionally, other factors—such as the lack of enforcement of program integrity mechanisms and information systems—have contributed to the limited federal program integrity efforts in the territories. For example, CMS has neither required the territories to establish MFCUs, nor has the agency granted them an exemption, because agency officials were unclear whether they had the authority to grant such exemptions. Additionally, until recently, CMS regulations exempted territories from the requirement to develop an MMIS, which could provide more detail on the territories’ Medicaid and CHIP spending, including increasing the level of detail on the territories’ CMS-64 reporting. In December 2015, CMS amended its regulations to eliminate the MMIS exemptions for the territories, effective January 1, 2016. Although an exemption had been in place, the U.S. Virgin Islands established a partnership with West Virginia that has allowed territory officials to use that state’s MMIS since 2013. This has improved the level of detail in the U.S. Virgin Islands’ CMS-64 reporting. Specifically, in fiscal year 2012, prior to the implementation of its MMIS, the U.S. Virgin Islands reported service-level expenditures for 30 percent of the Medicaid benefits it covered; after the implementation, this percentage increased to 91 percent in fiscal year 2014. According to Puerto Rico Medicaid officials, the territory’s Medicaid agency is in the process of establishing a similar partnership with Florida and anticipates implementation by the end of 2016. Having additional details on program spending could strengthen CMS’s and the territories’ program oversight. According to agency officials, CMS has assigned officials to the five territories to assist in program integrity efforts, and their role is generally focused on providing technical assistance. The activities of these officials vary across the territories, ranging from resolving complaints to more proactive efforts to identify trends indicating fraud, waste, and abuse. In addition, CMS officials reported that Puerto Rico and the U.S. 
Virgin Islands requested and received on-site training on the proper reporting of federal expenditures. Other federal oversight efforts provide insight on Medicaid program integrity needs in the territories, and CMS has reported making use of these efforts. Specifically, OMB’s annual A-133 single audits—conducted by contracted independent auditors—examine internal controls and compliance in the territories’ programs, and have identified deficiencies in each of the territories. Examples of the findings from the 2013 single audits are listed below.

CNMI – The single audit found a significant deficiency in internal control over compliance. Specifically, the payments for certain Medicaid services and medications exceeded permissible amounts. This finding was resolved and closed in September 2015.

Guam – The single audit found a material weakness in internal control over compliance. Specifically, the single audit found that no documentation was provided to show that eligibility specialists used the available income and eligibility verification system to determine eligibility. This finding was resolved and closed in February 2015.

U.S. Virgin Islands – The single audit found a material weakness in internal control over compliance. Specifically, the audit revealed that sufficient controls did not exist for the required investigation of Medicaid utilization related to fraud. As a result, there may be prolonged, ongoing cases of fraud, which may be unreported. As of March 2015, according to CMS officials, the status of this finding was cleared, meaning that the next step is for the U.S. Virgin Islands to develop a corrective action plan for approval by CMS.

CMS has a single audit coordinator who receives the single audit reports and notifies CMS’s regional offices, which are then responsible for working with the territories to correct any deficiencies that were identified. For example, CMS regional office officials help the territories develop corrective action plans, if required. However, CMS officials noted that it is not uncommon for territories to take multiple years to resolve certain deficiencies. CMS officials told us that limited funding and staff created particular challenges for the territories when responding to single audit findings. For example, CNMI officials reported to CMS that the territory lacked sufficient staff to perform post-payment reviews in response to a finding from a single audit that found the territory incorrectly paid certain Medicaid claims. The Medicaid and CHIP programs provide critical financial support to the U.S. territories’ health care systems. However, citing the territories’ limited resources and the relatively small size of their programs, CMS has not required the territories to follow certain program requirements. In particular, this includes requirements for complete service-level expenditure reporting and the establishment of an MFCU or the receipt of an exemption—obtained by demonstrating that the operation of such a unit would not be cost effective because minimal fraud exists in a territory’s Medicaid program. Although American Samoa’s and CNMI’s Medicaid programs operate under broad waivers that exempt them from many of these requirements, this is not the case for Guam, Puerto Rico, and the U.S. Virgin Islands, which have not received exemptions or waivers from these requirements. 
Despite acknowledging the territories’ limited resources, CMS provides limited assurance and oversight to support program integrity efforts in the territories, and undertakes limited efforts of its own in this regard. Such limited federal efforts in the territories are inconsistent with federal internal control standards regarding identifying and responding to relevant risks when conditions change, such as when PPACA significantly increased territories’ federal Medicaid funding. Without additional efforts by CMS, there is limited assurance that territories have the capacity to identify fraud, recover improper payments, or provide complete information on program spending. While Medicaid funding to the territories represents a small share of national program expenditures and may not warrant the same level of program integrity oversight as the states, additional actions are needed by CMS to ensure an appropriate level of program integrity in these areas. To ensure the appropriate level of Medicaid program integrity oversight in the territories, we recommend that the Acting Administrator of CMS reexamine CMS’s program integrity strategy and develop a cost-effective approach to enhancing Medicaid program integrity in the territories. Such an approach could select from a broad array of activities, including—but not limited to—establishing program oversight mechanisms, such as requiring territories to establish an MFCU or working with them to obtain necessary exemptions or waivers from applicable program oversight requirements; assisting territories in improving their information on Medicaid and CHIP program spending; and conducting additional assessments of program integrity as warranted. We provided a draft of this report to HHS and the Department of the Interior (DOI) for comment. In its written comments, HHS concurred with our recommendation and acknowledged that many territories face challenges in addressing program integrity and finding a balance between applying funds toward providing services and program integrity efforts. Further, HHS noted that it will work with territory Medicaid officials to determine the feasibility of enhancing program integrity activities, including, but not limited to, establishing MFCUs or obtaining the necessary exemptions when MFCUs are not warranted. HHS also provided technical comments, which we incorporated as appropriate. In its written comments, DOI noted the financial and infrastructure challenges related to health care faced by all territories, despite the additional funding under PPACA, which is temporary, and raised concerns about future reductions in Medicaid funding once PPACA funds are depleted. HHS’s and DOI’s comments are reproduced in appendices I and II. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II. 
In addition to the contact named above, Susan Anthony, Assistant Director; Manuel Buentello; Sandra George; Giselle Hicks; Drew Long; Amber Sinclair; and Teresa Tam made key contributions to this report.
Notable differences exist in the funding and operation of Medicaid and CHIP—joint federal-state health financing programs for low-income and medically needy individuals—in the territories versus the states. For example, the territories are subject to certain funding restrictions, such as capped annual federal funding, that are not applicable to the states. Further, certain federal requirements regarding eligibility, benefits, and program integrity do not apply to the territories' programs, and certain otherwise applicable requirements have not been enforced. In recent years, various laws—such as PPACA—have increased funding for Medicaid and CHIP in the territories. This report examines (1) eligibility and benefit characteristics of the territories' Medicaid and CHIP programs, (2) Medicaid and CHIP spending in the territories, and (3) Medicaid and CHIP program integrity efforts in the territories. GAO reviewed laws and regulations, data on five territories' Medicaid and CHIP spending, and federal internal control standards. GAO also interviewed CMS and territory Medicaid officials. Eligibility and benefits for Medicaid and the state Children's Health Insurance Program (CHIP) in five U.S. territories—American Samoa, the Commonwealth of the Northern Mariana Islands (CNMI), Guam, Puerto Rico, and the U.S. Virgin Islands—differ from one another and from the states, generally reflecting the territories' unique circumstances. For example, Guam is the only territory that covers all 17 mandatory Medicaid benefits, while American Samoa and Puerto Rico cover 10 of the 17 benefits. Officials from the territories that do not cover all mandatory benefits cited multiple reasons for not doing so, including limited funding and a lack of infrastructure, and, in some cases, exercised available flexibility to exclude certain benefits. Temporary increases in federal funding have enabled the territories to increase Medicaid and CHIP spending. Unlike the states, whose Medicaid funding is not subject to a capped allotment—provided they contribute their share—territories are subject to a capped allotment, and historically have exhausted available federal Medicaid and CHIP funds each year. Most notably, the Patient Protection and Affordable Care Act (PPACA) provided the territories an additional $7.3 billion through at least fiscal year 2019. Officials in four territories cited positive effects of the additional funding, such as the ability to enroll more providers and cover more services; however, some officials also expressed concerns about the temporary nature of the funding, noting that they may have to make program cuts once the funding is exhausted—and that future shortfalls remain a concern. Despite temporary increases in Medicaid funding, GAO found little assurance that territory Medicaid funds are protected from fraud, waste, and abuse. Program oversight mechanisms: Only Puerto Rico has developed a program integrity unit, which, although not required, is tasked with identifying and recovering improper payments and is considered a management best practice. Additionally, no territory has established a Medicaid Fraud Control Unit—a unit that identifies and prosecutes Medicaid fraud—or received an exemption from doing so, as required by federal law. Program information: Territories lack detail on the types and volume of services they provide, contrary to federal reporting requirements, resulting in limited information on how territories spend their federal Medicaid funding. 
Until recently, the Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), exempted the territories from the requirement to implement a claims processing and information retrieval system with program integrity capabilities, although the U.S. Virgin Islands has established a partnership to use such a system. Program assessments: CMS has performed assessments of Medicaid program integrity effectiveness and compliance only for Puerto Rico. Although not required, such assessments have been conducted on all states. CMS does provide technical assistance, with the activities of CMS officials varying across the territories. Officials from CMS noted that funding for program integrity would count against the territories' capped allotments. Nonetheless, such limited efforts by the territories and federal government are inconsistent with federal internal control standards regarding identifying and responding to risks, particularly in light of increased federal Medicaid spending in the territories as a result of PPACA. GAO recommends that the Acting Administrator of CMS examine and select from a broad array of activities—such as establishing program oversight mechanisms, assisting in improving program information, and conducting program assessments—to develop a cost-effective approach to protecting territories' Medicaid programs from fraud, waste, and abuse. HHS concurred with GAO's recommendation.
We have previously reported that because procurement at federal departments and agencies is generally decentralized, the federal government is not fully leveraging its aggregate buying power to obtain the most advantageous terms and conditions for its procurements (GAO-11-318SP). Agencies act more like many unrelated medium-sized businesses and often rely on hundreds of separate contracts for many commonly used items, with prices that vary widely. The four agencies we reviewed—DOD, DHS, Energy, and VA—together accounted for 80 percent of the total $537 billion federal procurement spending in fiscal year 2011, but reported managing less than 5 percent, or $25.8 billion, through agencywide strategic sourcing contracts. From these efforts, the four agencies reported achieving a combined savings of $1.8 billion, or less than one-half of one percent of total federal procurement spending. While strategic sourcing may not be suitable for all procurement spending, this percentage of managed spending and savings is very low compared to that of leading companies, which generally manage about 90 percent of their procurement spending strategically and achieve savings of 10 to 20 percent of total procurement spending annually. According to DOD officials, DOD procurement spending and savings through strategic sourcing contracts in fiscal year 2011 may be underreported, as DOD currently tracks departmentwide initiatives on an ad hoc basis. When strategic sourcing contracts were used, selected agencies generally reported savings ranging from 5 percent to over 20 percent of spending through strategically sourced contracts. Further, most of the four agencies’ current and planned strategic sourcing efforts do not address their highest spending areas, the majority of which exceed $1 billion and most of which are services. As a result, opportunities exist for agencies to realize significant savings by applying strategic sourcing in these areas. OMB directed agencies to implement strategic sourcing practices in 2005, but taken together, the agencies we reviewed have leveraged only a fraction of what could potentially be managed and saved through strategic sourcing. DOD, DHS, Energy, and VA accounted for 80 percent of the total federal procurement spending for fiscal year 2011, but reported managing less than 5 percent, or $25.8 billion, through strategic sourcing efforts, and achieving a combined savings of $1.8 billion. The four agencies varied widely in the level of spending managed through strategic sourcing. For example, in fiscal year 2011, of the agencies we reviewed, DHS reported the highest percentage of its total procurement spending, nearly 20 percent, being managed through strategic sourcing contracts. By contrast, VA reported the lowest at 1.4 percent. Figure 3 compares spending and savings through strategic sourcing at the four agencies. According to some agency officials, not all of their procurement spending may be addressable through strategic sourcing. For example, DHS considers some spending related to natural disasters such as hurricanes and earthquakes to be unaddressable through strategic sourcing. In another instance, Energy officials stated that they consider less than a third of Energy’s total procurement spending to be addressable through strategic sourcing. 
However, while some spending may not be suitable for strategic sourcing, the percentage of spending selected agencies report managing through strategic sourcing is small compared to the amount leading companies are managing strategically. Industry groups recently reported that leading companies they surveyed centrally manage approximately 90 percent of their procurement spending. Moreover, leading companies we spoke with in 2012 reported that setting goals and using metrics to measure managed spending is important. By contrast, only a few of our selected agencies have set goals for the amount of spending managed through strategic sourcing (see table 1).

At DOD—the federal government's largest procurer of products and services—the Army, Navy, Air Force, and DLA together reported spending almost 6 percent, or $19 billion, through strategic sourcing contracts. In addition, the Defense Program Acquisition and Strategic Sourcing (PASS) office, which coordinates strategic sourcing efforts across the department, was unable to provide us with a comprehensive list of departmentwide strategic sourcing initiatives, and indicated that there are likely more strategic sourcing initiatives that are not accounted for because departmentwide initiatives are reported on an ad hoc basis. However, PASS officials provided information on a limited number of departmentwide strategic sourcing initiatives that together represented more than $1 billion of spending and over $900 million in savings in fiscal year 2011.

The proportion of procurement spending being managed through strategic sourcing varied widely among the military departments and DLA (see fig. 4). For example, the Army spent more than $125 billion on products and services in fiscal year 2011, but reported that only $280 million, or less than a quarter of one percent of its procurement spending, was strategically sourced. In contrast, DLA spent $36 billion on goods and services in fiscal year 2011, and reported that 46 percent, or $16 billion, was strategically sourced. According to DOD officials, it is to be expected that a high percentage of DLA's spending is suitable for strategic sourcing because DLA's unique mission is to supply high-volume products that are bought across DOD, such as uniforms and food. Although DLA's spending represents only 10 percent of DOD's total procurement spending, DLA's strategic sourcing efforts demonstrate that when DOD approaches procurement from a departmentwide level, it can achieve successful outcomes. In addition, PASS officials reported savings of $889 million in fiscal year 2011 from one initiative that leveraged departmentwide spending on enterprise software. Specifically, the initiative consolidates DOD commercial software, information technology hardware, and services requirements to obtain lower prices from information technology providers. Figure 4 provides reported fiscal year 2011 procurement spending and strategic sourcing information for the military departments and DLA.

Selected agencies' reported savings added up to $1.8 billion in fiscal year 2011—less than one-half of one percent of total federal procurement spending. We previously reported that some companies achieved reported savings of 10 percent to 20 percent of their total procurement costs through the use of a strategic approach to buying products and services. DHS reported fiscal year 2011 savings of $324 million. At DOD, the military departments and DLA reported a total of $213 million in savings for fiscal year 2011.
The PASS office is just starting to collect some data on departmentwide strategic sourcing savings and could not definitively report savings through departmentwide strategic sourcing in fiscal year 2011. However, based on the information provided, DOD achieved roughly $900 million in savings from departmentwide efforts. We have previously reported that DOD has not fully collected and assessed cost savings and other information from strategic sourcing initiatives. Energy and VA officials reported $335 million and $56 million in savings, respectively, for their strategic sourcing efforts in fiscal year 2011.

When strategic sourcing contracts were used, agencies generally reported rates of savings ranging from 5 percent to over 20 percent of the spending directed through those contracts. For example, the Navy reported spending $145 million and achieving savings of $30 million through strategic sourcing in fiscal year 2011; these reported savings are almost 21 percent of the spending through strategic sourcing vehicles. However, spending through strategic sourcing was a tiny fraction of the Navy's total procurement spending (0.1 percent). If the Navy were to direct even 10 percent of its total procurement spending of $105 billion through strategic sourcing vehicles and achieve savings equivalent to 21 percent of that spending, it would save over $2 billion. In another example, the Air Force reported spending $2.4 billion and achieving savings of $126 million through strategic sourcing in fiscal year 2011. These reported savings are equal to 5 percent of the spending through strategic sourcing. If the Air Force were to strategically source even 10 percent of its total procurement spending of $65 billion and achieve savings at the same rate, it would save $339 million.

DOD training materials highlight the importance of prioritizing spending categories by looking at total spending dollars, among other criteria, and specify that a priority list can be developed by identifying the top spend categories that comprise 80 percent of the organization's total procurement spending. While some agencies such as DHS have implemented initiatives that address top spend categories, current and planned initiatives at the other agencies we reviewed do not address the categories that represent their highest spending, the majority of which exceed $1 billion and most of which are for services rather than products. Consequently, agencies are leaving large segments of spending unmanaged, particularly in the area of services. We found that as the amount of spending managed through strategic sourcing at selected agencies increased, reported savings generally increased.

In fiscal year 2011, more than half of the procurement spending at the four agencies we reviewed was used to acquire services. However, we found that strategic sourcing efforts addressed products significantly more often than services. Officials reported that they have been reluctant to strategically source services for a variety of reasons, such as difficulty in standardizing requirements or a decision to focus on less complex commodities that can demonstrate success. Leading companies we spoke with have focused their efforts on services in the past 5 to 7 years because of the growth in spending in that area, and have achieved significant savings.
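To make the projection arithmetic above explicit, the following short Python sketch reproduces the report's calculations using the fiscal year 2011 figures cited in the text. The function and variable names are ours, added only for illustration; small differences from the report's rounded figures (for example, $339 million for the Air Force) reflect rounding in the underlying amounts.

    # Sketch of the projected-savings arithmetic described above.
    # Dollar figures are the fiscal year 2011 amounts cited in the text;
    # the function and variable names are illustrative only.

    def projected_savings(total_spend, directed_share, savings_rate):
        """Savings if directed_share of total spending went through
        strategic sourcing vehicles at the observed savings_rate."""
        return total_spend * directed_share * savings_rate

    # Navy: $30 million saved on $145 million strategically sourced,
    # a savings rate of almost 21 percent.
    navy_rate = 30e6 / 145e6
    print(f"Navy projection: ${projected_savings(105e9, 0.10, navy_rate) / 1e9:.1f} billion")

    # Air Force: $126 million saved on $2.4 billion strategically sourced,
    # a savings rate of about 5 percent.
    air_force_rate = 126e6 / 2.4e9
    print(f"Air Force projection: ${projected_savings(65e9, 0.10, air_force_rate) / 1e6:.0f} million")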
Officials at one leading company told us, for example, that they developed standardized regional labor rates to allow strategic sourcing of services. Strategic sourcing leading practices at private companies suggest that it is critical to analyze all procurement spending with equal rigor, with no categories off limits. In addition, achieving savings requires a departure from the status quo. An industry group recently surveyed companies and reported that companies have successfully strategically sourced categories of services spending that have been off limits or controversial for most procurement organizations, such as information technology, professional services, technical services, and facilities management.

Selected agencies are procuring many of the same types of services, such as professional services, and are missing opportunities to coordinate efforts. For example, DOD conducted a spend analysis in 2010 that identified knowledge-based services, such as engineering management services and logistics management services, as the largest portfolio of spending being procured by all the DOD components. Of the four agencies we reviewed, only DHS reported addressing the majority of its top 10 spending categories of products and services. In contrast, VA reported having implemented initiatives in 3 of its top 10 spending categories. Energy reported departmentwide initiatives for one of its top 10 categories of spending—operation of Government-Owned Contractor-Operated Research and Development Facilities, which comprises a large percentage of the agency's total procurement spending—but officials noted that some of its components have strategic sourcing efforts in those top 10 spending categories where virtually all of the spending is concentrated at one component. Because of this concentration, Energy officials did not believe greater efficiencies could be achieved by sourcing those commodities at a departmentwide level. At DOD, PASS officials could not provide a comprehensive list of departmentwide initiatives; therefore, we could not fully assess the department's efforts to address its highest spending categories through departmentwide strategic sourcing efforts.

The DOD components varied in the extent to which their efforts address their top spending categories. Of the components we reviewed, only DLA reported initiatives addressing the majority of its highest spending categories (see fig. 5). According to Navy officials, they choose products and services for strategic sourcing by examining their top spending products and services and eliminating those that they do not consider to be good candidates for strategic sourcing. Through this analysis, Navy officials identified engineering and technical services as a good candidate for strategic sourcing and stated they planned to create a commodity team in the fourth quarter of fiscal year 2012 to develop an initiative in this area. In addition, the Air Force reported completing multiple spend analyses to identify and establish commodity councils, and to prioritize strategic sourcing initiatives within these commodity councils. Air Force officials reported that strategic sourcing opportunities were prioritized based on a value/complexity trade-off. Of the top spending categories that DOD components reported targeting through implemented strategic sourcing initiatives, only two are services.
However, leading companies and DHS have successfully tackled some high-spend and complex services that are comparable to DOD's high-spend services. For example, DHS has implemented a strategic sourcing initiative for engineering and technical services, which is also among the top 10 spending categories for the Army, Air Force, and Navy. DOD acknowledges the need to better manage services spending. A DOD instruction is in place that requires each service acquisition executive in the military departments to collaborate with other senior officials to determine key categories of services that can be strategically sourced, and to dedicate full-time commodity managers to coordinate procurement of these services. These officials are also responsible for conducting periodic spend analyses of their procurement data. DOD officials told us that the appointment of a senior manager for the acquisition of services at each of the military departments and defense agencies is in progress, but that it will take some time to fully achieve this change.

Agency officials stated that not all categories of spending are good candidates for strategic sourcing. However, industry sources, government guidance, and our prior work have identified a range of strategic sourcing tactics that are appropriate for various types of government products and services. For example, some military officials have cited weapon systems as one category that they do not consider to be a good candidate for strategic sourcing. While weapon systems are highly specialized items that have a long development cycle, we have identified areas where weapon systems acquisition could benefit from some strategic sourcing practices, such as supplier relationship management. In a 2005 memo on strategic sourcing, DOD stated that enhancing relationships with suppliers minimizes costs, as DLA has done through long-term contracts. Further, DOD cited the need for improving long-term strategic relationships with suppliers in its 2010 Better Buying Power memo, and directed components to participate in an agencywide pilot to develop preferred supplier relationships. In addition, the Better Buying Power memo acknowledged a need for a cohesive and integrated strategy for acquiring services, many of which are among the highest spending categories for DOD. The memo cited the use of more than 100,000 contract vehicles held by more than 32,200 contractors and stated that contract support services spending at DOD represented more than 50 percent of its total contract spending in 2009. However, the memo did not make clear how strategic sourcing would be incorporated into ongoing efforts in this area. Similarly, the Defense Business Board issued a report in January 2011 recommending that DOD target high-value areas for cross-military department coordination. The report noted that strategic sourcing savings of even one percent at the department would equate to billions of dollars.

Some of the four agencies have implemented many initiatives covering a wide range of products or services, while others report only a few initiatives that address a limited number of products or services. As of fiscal year 2011, the four agencies we reviewed reported implemented or planned initiatives targeting nearly 750 products and services—ranging from ammunition to architecture and engineering services to seating.
In addition to the initiatives that have already been implemented at the four agencies, officials reported a total of 85 efforts planned for fiscal years 2012 through 2016. Many initiatives include multiple products or services. For example, DHS reported 42 implemented departmentwide initiatives for fiscal year 2011 that covered approximately 270 products and services, ranging from software to professional and program management support services. By contrast, VA reported 4 initiatives, covering flags, hearing aids, wireless devices, and affiliate negotiation teams. At the Department of Energy, the majority of spending through strategic sourcing is led by the management and operating contractors who operate and maintain most of its government-owned facilities, such as the national laboratories. Energy officials told us that labs managed by the National Nuclear Security Administration require their contractors to collaborate to produce contracting vehicles for common products and services used across the program; national labs managed by the Office of Science do not. The Office of Environmental Management began implementing an initiative modeled on the National Nuclear Security Administration's approach in 2012. At DOD, PASS officials were unable to definitively report how many initiatives were ongoing as of fiscal year 2011, although they did provide information on a limited number of ongoing departmentwide initiatives. Within DOD, the military departments and DLA varied in the number of efforts that they have implemented and the types of products and services that have been addressed. For example, the Air Force reported 17 implemented initiatives, whereas the Army reported 8 and the Navy 7. These initiatives include products, such as taxiway lighting and maritime coatings, and services, such as clinical support services and integrated logistics support services.

We found that agencies with more mature strategic sourcing programs—those with more implemented initiatives—managed more spending through strategic sourcing than those with less mature efforts. For example, DHS had the most initiatives and the highest percentage of spending through strategic sourcing of the four selected agencies we reviewed. Conversely, VA officials reported the fewest initiatives and the lowest amount of spending managed through strategic sourcing of the agencies we reviewed. To support strategic sourcing efforts, DHS stood up a strategic sourcing office at its headquarters to centralize strategic sourcing efforts when the department was created in 2003, and its strategic sourcing program has been operating under an implemented management directive since 2004. We reported in 2004 that VA had success with a few initiatives; however, since that time, VA does not appear to have expanded its use of strategic sourcing. According to VA officials, efforts were stymied by a lack of reliable data, but the department is now adding resources to increase its strategic sourcing efforts. Additionally, Energy's major programs—the National Nuclear Security Administration, the Office of Science, and the Office of Environmental Management—have ongoing strategic sourcing efforts and require some contractors operating their facilities to strategically source; however, Energy is still working to centralize its management of strategic sourcing initiatives and create an agencywide strategic sourcing program.
In 2010, the Deputy Secretary of Energy cited further opportunities to leverage the department's buying power through a more centralized and less fragmented approach, but according to Energy officials, steps have only recently been taken to take advantage of those opportunities. For example, the Office of Environmental Management created a partnership in January 2012 with the National Nuclear Security Administration's Supply Chain Management Center. Under this partnership, the Environmental Management contractors have begun working with the Supply Chain Management Center to create a cooperative strategic sourcing solution that achieves efficiencies and economies of scale and increases productivity and cost savings. In 2012, the Office of Science conducted a review of the National Nuclear Security Administration's Supply Chain Management Center process and concluded that it does not make financial or practical sense for its labs to leverage existing efforts or establish a similar organization; instead, its labs fulfill this function using their previously established Integrated Contractor Purchasing Team. We recently recommended that the Secretary of Energy assess whether the National Nuclear Security Administration and the Office of Science are taking the necessary steps to address challenges limiting implementation of cost savings efforts. Energy agreed with our recommendation and is taking steps to address it.

Efforts across the DOD components varied but in general were not mature. Army officials reported that the Army is currently aligning and allocating strategic sourcing responsibilities in partnership with the Army's Senior Services Manager. In 2011, the Assistant Secretary of the Army issued a memo establishing a strategic sourcing board structure and program. The Air Force has been incorporating some aspects of strategic sourcing into its procurement practices for over a decade, and its efforts are the most mature. The Office of the Secretary of Defense (OSD) relies on each military department to develop its own strategic sourcing efforts, and the Defense Business Board reported in 2011 that DOD had not incorporated agencywide strategic sourcing into its operations and business practices. PASS officials told us that they conduct a detailed and comprehensive spending analysis, which they provide to the military departments so that the departments may determine which products and services they should address. Using this detailed spend analysis to select commodities that could be strategically sourced across DOD is not their primary goal; instead, PASS officials told us that they used it to identify and develop goals set forth in the Better Buying Power memo, which addresses productivity and efficiency initiatives such as increasing competition and reducing the use of high-risk contract vehicles. DOD guidance has not made clear how efforts related to the Better Buying Power memo link specifically to strategic sourcing initiatives. Military service officials told us that they would welcome guidance on strategic sourcing efforts from the PASS office, which is the organization responsible for championing strategic sourcing policy and initiatives for the department. The PASS office is currently updating its departmentwide strategic sourcing concept of operations and is revising additional guidance, which will include a process for regular review of DOD component and departmentwide proposed strategic sourcing initiatives. PASS officials stated that additional direction from Defense Procurement and Acquisition Policy (DPAP) or the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L) may be needed.
Selected agencies and the FSSI program, which manages governmentwide strategic sourcing initiatives, continue to face common challenges that prevent them from fully adopting a strategic sourcing approach to procurement. A lack of leadership investment has prevented several agencies from transforming their procurement organizations to allow for enterprisewide strategic sourcing, as leading companies have done. For example, leaders at DOD and the Army have not devoted resources to set up the necessary organizational structures for strategic sourcing. In addition, agencies are still challenged to obtain and analyze reliable and detailed agencywide spending data, which hinders their ability to identify and study the strategic sourcing opportunities offering the most potential benefits. Finally, the FSSI program and selected agencies face challenges in communicating information for new and ongoing initiatives as well as in establishing and meeting utilization goals to measure the effectiveness of these efforts and to realize cost savings. Where agencies have overcome some of these challenges, strategic sourcing results have improved. For example, DHS has established a centralized strategic sourcing office, identified opportunities in high-spend areas despite data deficiencies, and taken steps to manage and monitor ongoing efforts. DHS is currently managing nearly 20 percent of its total procurement spending through strategic sourcing contracts, and in fiscal year 2011 it met its goal of directing 35 percent of spending on relevant products and services through strategic sourcing contracts.

Leaders at DOD and Energy have not committed to creating a procurement organization that views spending and makes strategic sourcing decisions at an agencywide level. DHS has created a centralized strategic sourcing organization, and VA initiated some reorganization in 2012, but it is too early to tell whether VA's recent realignment efforts will be successful. Each of these four agencies must overcome challenges posed by insufficient leadership support, including failure to dedicate resources and lack of management action to address disincentives to strategic sourcing. However, all four agencies reported they have begun to adopt some practices aimed at addressing these challenges.

At a few agencies, leadership has not devoted sufficient resources to allow strategic sourcing to be conducted at a departmentwide level. Strategic sourcing programs can be structured in various ways that require varying levels of resources. For example, DHS established a strategic sourcing program office, located in the Office of the Chief Procurement Officer, which includes DHS's Director of Strategic Sourcing and nine additional staff members. This office coordinates with the components to identify opportunities and develop agencywide contracts that apply strategic sourcing principles. VA reported that, in response to a proposal from Veterans Health Administration (VHA) personnel, VA leadership has recently committed to support VHA in hiring approximately 150 full-time personnel who will establish commodity management teams to identify departmentwide strategic sourcing opportunities and develop improved requirements packages, among other duties. Similarly, the Air Force reported that roughly 125 employees within its Enterprise Sourcing Group are involved in cross-functional strategic sourcing that leverages installation spending across 71 sites.
The structure of an agency's strategic sourcing program will determine the amount of dedicated resources that are needed; however, in two cases we found large agencies with only one or two full-time employees who were expected to coordinate strategic sourcing across the entire organization. The Army, which managed more annual procurement spending than any other government agency in fiscal year 2011, currently does not have a formal strategic sourcing program office. The strategic sourcing function presently lies within the Policy and Oversight Directorate under the Deputy Assistant Secretary of the Army for Procurement, which has committed only one staff person, at a quarter of that person's time, to managing its strategic sourcing. Moreover, at the OSD level, DOD has allocated one Deputy Director and one full-time staff member to its PASS office. The failure of agency leaders to devote resources to strategic sourcing has been a recurring issue. For example, in 2009, the FSSI program reported that many strategic sourcing staff cited resource constraints as the main barrier to implementation of strategic sourcing in their organizations. During our review we observed that agencies that have committed more resources to managing departmentwide strategic sourcing efforts were better able to maintain data on their agencies' current strategic sourcing initiatives. This is a necessary step for coordinating and managing purchases agencywide—identified in our prior work as critical to taking a strategic approach to procurement. For example, DHS maintains a published list of available departmentwide strategic sourcing vehicles, while Army officials told us it was difficult for them to supply information on all initiatives because strategic sourcing data—even on Army-wide efforts—are not kept centrally and would require a data call.

Agency officials mentioned several disincentives that can discourage procurement and program officials from proactively participating in strategic sourcing, and at many agencies, these disincentives have not been fully addressed by leadership. Key disincentives identified by agency officials include the following:

a perception that reporting savings due to strategic sourcing could lead to program budgets being cut in subsequent years;

difficulty identifying existing strategic sourcing contracts that are available for use, as there is no centralized source for this information;

a perception that strategically sourced contract vehicles may limit the ability to customize requirements;

a desire on the part of agency officials to maintain control of their procurements;

program officials' and contracting officers' relationships with existing contractors; and

the opportunity to get lower prices by going outside of strategically sourced contracts.

For example, a key disincentive to implementing efforts and to tracking and reporting savings is a perception that program budgets may be cut as a result of producing savings. Agency officials stated that a reluctance to track and report savings for fear of budget reductions contributes to underreporting of savings. Military officials reported a perception that any money saved will be taken from the next year's budget. Navy officials stated that several of their commands have expressed concerns that tracking and reporting savings could trigger another round of budget reductions and that the issue is frequently raised during their meetings with Navy senior management. DOD leadership has not yet addressed this perception for strategic sourcing efforts.
Leaders at some agencies have proactively introduced practices that address these disincentives. For example, DHS and VA reported increasing personal incentives for key managers by adding strategic sourcing performance measures to certain executives' performance evaluations. In addition, several agencies, including DOD, DHS, and VA, have instituted policies making use of some strategic sourcing contracts mandatory or mandatory "with exception," although the extent to which these policies have increased use of strategic sourcing vehicles is not yet clear. Some agencies have made use of automated systems to direct spending through strategic sourcing contracts. For example, FSSI issued a blanket purchase agreement through its office supplies initiative that included provisions requiring FSSI prices to be automatically applied to purchases made with government purchase cards. VA reported that its utilization rate for the office supplies FSSI contracts increased from 12 percent to 90 percent after these measures took effect. Officials from a number of agencies reported that they expect shrinking budgets to prompt leadership to provide additional requirements or incentives to strategically source. In fact, VA and Navy officials reported renewed attention to strategic sourcing as a potential tool to deal with budget cuts.

The FSSI program and selected agencies generally cited the Federal Procurement Data System-Next Generation (FPDS-NG)—the federal government's current system for tracking information on contracting actions—as their primary source of data, and they noted numerous deficiencies with these data for the purposes of conducting strategic sourcing research. Agencies reported that when additional data sources are added, incompatible data and separate systems often present problems. We have previously reported extensively on issues agencies faced in gathering data to form the basis for their spend analyses. However, some agencies have been able to make progress on conducting enterprisewide opportunity analyses despite flaws in the available data. For example, both the FSSI Program Management Office and DHS told us that current data, although imperfect, provide sufficient information for them to begin to identify high-spend opportunities. DHS has in fact evaluated the majority of its 10 highest-spend commodities and developed sourcing strategies for seven of them based primarily on its analysis of FPDS-NG data.

Officials at several agencies noted that the lack of trained acquisition personnel made it difficult to conduct an opportunity analysis and develop an informed sourcing strategy. For example, Army officials cited a need for expertise in strategic sourcing and spend analysis data, and OMB officials echoed that a key challenge is the dearth of strategic sourcing expertise in government. VA and Energy also reported this challenge. A few agencies have responded to this challenge by developing training on strategic sourcing for acquisition personnel. For example, the Air Force noted that it instituted training related to strategic sourcing because the front-end work requires people who are very strong analytically, and these are the hardest to find. The training course gives acquisition personnel strong analytical skills to perform steps like market evaluation. VA has also begun to develop training to address this challenge.
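As an illustration of the kind of opportunity analysis described above, the sketch below ranks spending by product service code and flags the top categories that together account for 80 percent of total spending, the priority-list approach noted earlier in DOD training materials. The record layout, codes, and dollar amounts are hypothetical, not the actual FPDS-NG schema, and real FPDS-NG data would require substantial cleaning first.

    # Minimal sketch of a spend analysis: rank spending by product service
    # code (PSC) and flag the top categories that together account for
    # 80 percent of total spending. The records below are hypothetical.
    from collections import defaultdict

    obligations = [  # (product service code, obligated dollars)
        ("R425", 1_200_000_000),   # engineering/technical support services
        ("7510", 150_000_000),     # office supplies
        ("D302", 900_000_000),     # IT systems development services
        ("8405", 300_000_000),     # uniforms
    ]

    totals = defaultdict(float)
    for psc, dollars in obligations:
        totals[psc] += dollars

    grand_total = sum(totals.values())
    running = 0.0
    priority_list = []
    for psc, dollars in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        if running / grand_total >= 0.80:
            break
        priority_list.append(psc)
        running += dollars

    print("Candidate categories for strategic sourcing:", priority_list)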
Agency officials listed several challenges that they commonly face in developing the sourcing strategy for a product or service. As with the opportunity analysis stage, imperfect data and a lack of expertise pose challenges to agencies when developing a sourcing strategy. This phase requires study of various types of information; for example, data on the agency's requirements and on historical and projected demand for the product or service. Some agencies have found additional information on existing suppliers and contracts to be important in identifying commodities to target for strategic sourcing. For example, the Air Force also considers data on the number of contracts, the number of purchasing locations, the number of transactions, the number of suppliers, and the estimated total cost savings for each potential commodity, among other factors. Several agencies have historically turned to contractors to perform this step; however, some have recently decided to increase internal expertise in this area. For example, the FSSI Program Management Office reported hiring two new staff with relevant expertise to help with this process.

A few agencies also reported challenges specifically with meeting requirements unique to government procurements. Government procurements must meet specified socioeconomic goals; for example, agencies are expected to award a certain portion of their contracts to small businesses. Organizations representing small businesses have expressed concern that federal strategic sourcing reduces contracting opportunities for small businesses. However, while acknowledging that reducing the number of vendors providing a product or service means that some vendors will be unable to participate, agency officials reported finding ways to conduct strategic sourcing efforts that allow for maximum feasible small business participation. For example, DHS, VA, and Air Force officials told us they collaborated with small business advocates early in the acquisition planning stage to ensure they conducted market research that would help determine how to maximize small business participation. In addition, Federal Strategic Sourcing Initiatives to date have generally awarded a number of contracts to small businesses. For example, of 15 contracts awarded for an initiative focusing on office supplies, 13 were awarded to small businesses, and these businesses received over 70 percent of office supplies spending through that initiative in fiscal year 2011. Another initiative, targeting print management services, awarded 5 of 11 contracts to small businesses. Several agency small business utilization officials with whom we spoke were generally satisfied with FSSI and agency efforts to involve small businesses in strategic sourcing.

Officials at all four of our selected agencies discussed challenges in getting buy-in from those who would be using the strategic sourcing contracts to purchase products or services. Buyers can face a number of disincentives to using strategic sourcing vehicles, as outlined above. In addition to creating structures that provide incentives for participation in strategic sourcing, several agencies incorporate practices into individual strategic sourcing efforts to increase stakeholder buy-in. OMB, DHS, and Air Force officials reported that one such practice is involving stakeholders early in the process.
For example, Air Force officials said that in order for strategic sourcing to be successful, the cross-functional commodity team must include the organization that is funding the acquisition. DHS officials added that in order to have support at all levels of personnel involved, it is important to have the end users in the room when making procurement decisions. Similarly, the program manager for GSA's planned FSSI covering wireless rate plans and devices personally conducted extensive outreach with agencies to understand their technical requirements and encourage customer involvement. In response, agencies have sent representatives from the offices of both the Chief Information Officer and the Chief Acquisition Officer to work with the team in developing the acquisition strategy.

After strategic sourcing contracts are awarded, realizing cost savings and other benefits depends on utilization of these contracts. Agency officials indicated that a key challenge with strategic sourcing is communicating new contracting options. For example, Navy officials noted that even though they have templates for communication that can be used when rolling out an initiative, it is really people who determine whether the communication will be effective. In addition to putting together a communication plan to alert government purchase card users that use of the office supplies FSSI was now required, Air Force officials conducted random telephone calls to ensure these users knew of the existence of the policy. Air Force officials believed that these communication efforts led to a 50 percent increase in FSSI usage from March to April 2011. To improve the existing FSSI efforts, GSA applied lessons learned from previous initiatives to increase buy-in and utilization. The first generation of the office supplies initiative did not have a high utilization rate, and officials attributed this, in part, to a failure to publicize the effort. As a result, GSA increased its outreach efforts for the second iteration of the initiative and developed an implementation kit—a pre-packaged communications campaign to help implement the FSSI. The kit included a five-step implementation process, sample communications and policy memos, and reporting templates. In fiscal year 2011, an estimated 13 percent of governmentwide spending on office supplies went through second-generation office supplies initiative contracts, up from less than 1 percent of similar spending through first-generation contracts in fiscal year 2009.

Failure to set goals and difficulty in measuring the utilization of strategic sourcing contracts also present a critical challenge. A lack of detailed data on spending makes it difficult for agencies to track utilization of existing strategic sourcing contracts. FPDS-NG provides spending data by product service code, but the products and services targeted by most strategic sourcing initiatives are only a subset of these much broader categories. Further, FPDS-NG lacks information on transactions below a certain dollar threshold. Even where agencies have improved their data on spending through strategically sourced contracts—for example, an FSSI blanket purchase agreement requires vendors to provide detailed line-item data on spending—they continue to lack data that would allow them to reliably identify spending on these products and services that goes to contractors outside of strategically sourced contracts.
Inability to track this spending makes tracking utilization imprecise, but the FSSI program, VA, and DHS have all begun tracking utilization data—though imperfect—and are using utilization rates as one metric to manage strategic sourcing efforts. VA officials acknowledged that regular monitoring of strategically sourced spending is what creates the incentive to stabilize savings following the initial drop in spending, which is crucial to continued success. Agencies are equally challenged to produce other metrics—such as spending through strategic sourcing contracts and savings—that can be used to monitor progress. However, agencies will be increasingly called upon to produce metrics as the use of strategic sourcing expands. For example, in 2012 OMB issued a new cross-cutting management improvement goal, which calls for agencies to strategically source at least two new products or services in both 2013 and 2014 that yield at least 10 percent savings. Agencies will need to measure savings to document their progress toward this goal. However, although OMB recognizes the need for guidance on how agencies are to measure savings, it has not yet issued such guidance.

The strategic sourcing savings figures reported to us by agency officials were calculated using a variety of savings methodologies: for example, the difference between the price paid and the price for ordering the same product or service from GSA schedule contracts. Other methods for calculating savings include totaling GSA management fees avoided because GSA schedules were not used for the procurement, and comparing the prices paid on the contracts before and after strategic sourcing. Often several methodologies were used even within a single agency. We recently reported that Energy's guidance on calculating procurement cost savings gave its management and operating contractors considerable flexibility in choosing the methods for estimating savings, and therefore estimates could vary widely. For example, one laboratory estimated a $9 million savings from a software purchase in 2010 using its preferred estimation method; by other methods used elsewhere in Energy, however, the site estimated that its savings could have been as high as $35 million. We recommended that Energy clarify its guidance on estimating cost savings from streamlining efforts. Energy officials agreed with our recommendation and stated they have clarified their guidance on developing savings estimates. In addition, our recent reports highlighted the difficulties that agencies face when calculating acquisition savings. Specifically, we found that agencies reported billions of dollars in overstated and questionable savings in response to OMB's Acquisition Savings Initiative due to differing methods of calculating savings as well as confusion as to what should be included as savings. We also found that when calculating savings from various efforts, including strategic sourcing, VA had double-counted savings on different efforts and had not accounted for the cost of implementing other efforts.

In addition to savings, leading companies that we spoke with have identified a variety of metrics they use to measure return on strategic sourcing investments, such as spending under management, reductions in total cost of ownership, and efficiencies due to streamlined processes. Agency officials have identified, but have not quantified, some of these other benefits of strategic sourcing.
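To show why the choice of methodology matters, the sketch below applies two of the baselines described above, the GSA schedule price and the pre-initiative contract price, to the same hypothetical purchase. All names and dollar amounts are invented for illustration; they are not Energy's or any other agency's actual figures.

    # Two of the savings methodologies described above, applied to the
    # same hypothetical purchase. Different baselines yield different
    # estimates, which is one reason reported figures varied so widely.

    def savings_vs_schedule(paid_price, schedule_price, quantity):
        """Savings measured against the GSA schedule price."""
        return (schedule_price - paid_price) * quantity

    def savings_vs_prior_price(paid_price, prior_price, quantity):
        """Savings measured against the price paid before strategic sourcing."""
        return (prior_price - paid_price) * quantity

    paid, schedule, prior, qty = 80.0, 95.0, 110.0, 100_000  # illustrative
    print(f"vs. schedule baseline:    ${savings_vs_schedule(paid, schedule, qty):,.0f}")
    print(f"vs. prior-price baseline: ${savings_vs_prior_price(paid, prior, qty):,.0f}")

Run on these illustrative inputs, the two baselines report $1.5 million and $3.0 million for the identical purchase, a divergence of the same kind as the $9 million versus $35 million laboratory estimates described above.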
Officials acknowledged that strategic sourcing efforts can produce administrative cost savings, but that these savings are difficult to quantify. For example, DHS reported that consolidating procurements using agencywide contracts streamlines the acquisition process and saves the department significant administrative costs; however, DHS does not quantify these savings. Strategic sourcing efforts can also achieve efficiencies by changing buying behavior and managing demand for products and services. For example, OSD officials reported that the Navy has been innovative in incorporating demand management into its approach for buying wireless services: the Navy's wireless effort is down to eight plans and eight devices, and the service is now conducting demand analysis. Several agencies mentioned a need for sustained leadership support and additional resources in order to more effectively monitor ongoing initiatives. For example, DOD's PASS office—with only two people to advocate strategic sourcing policy and coordinate communication of component initiatives—does not track current strategic sourcing efforts at the component or departmentwide level. Some agencies that have developed metrics such as contract utilization rates report using those metrics to increase ongoing communication with leadership and maintain leadership investment. For example, VA officials said that they collect utilization metrics for the FSSIs and brief the Deputy Secretary on those metrics monthly.

The FSSI program has adopted some leading practices for strategic sourcing, such as creating the structure and processes necessary to implement and monitor efforts. Through the end of fiscal year 2011, the program had managed only a small amount of spending through its four governmentwide strategic sourcing initiatives; however, it reported achieving significant savings on those efforts. In fiscal year 2011, the FSSI program managed $339 million through these governmentwide initiatives and reported achieving $60 million in savings. However, the program faces key challenges in obtaining agency commitments to use new FSSIs and in increasing the level of agency spending directed through FSSI vehicles. For example, only 15 percent of governmentwide spending for the products and services covered by the FSSI initiatives went through the FSSI contracts in fiscal year 2011. Successfully addressing these challenges could help the FSSI program achieve greater governmentwide savings and efficiencies. In addition, the FSSI program has not yet targeted any of the government's 50 highest-spend products and services for strategic sourcing, and therefore is missing the potential for more significant strategic sourcing savings and other benefits governmentwide.

The FSSI program reports to OMB's Chief Acquisition Officers Council through its Strategic Sourcing Working Group. The Working Group, comprised of representatives of various agencies, vets and approves initiatives and sourcing strategies, and establishes the standards, processes, and policies governing FSSI. The FSSI Program Management Office supports the Working Group and coordinates the efforts of executive agents to implement individual FSSI initiatives. See figure 6 for the FSSI program governance structure. The FSSI Program Management Office is located within GSA's Federal Acquisition Service. In addition, each of the four implemented strategic sourcing initiatives is also managed by GSA's Federal Acquisition Service staff.
The FSSI Program Management Office provides guidance and oversight, reviews information and recommendations, and makes strategic program decisions. This structure has allowed the FSSI program to assess opportunities for procuring certain products and services, develop and implement sourcing strategies to leverage governmentwide buying power, and manage the strategic sourcing efforts. Agency representatives participate in developing and managing FSSIs through membership on commodity teams. In fiscal year 2011, the FSSI program managed $339 million through governmentwide initiatives and achieved approximately $60 million in savings, or almost 18 percent of the procurement spending it managed through these initiatives.

As of fiscal year 2011, four initiatives had been implemented. The domestic delivery services and office supply initiatives—originally implemented in 2006 and 2007, respectively—are in their second iterations. The wireless telecommunications expense management initiative has been in place since 2008. The first print management initiative contracts were awarded near the end of fiscal year 2011, and initial efforts focused on assessing agencies' use of print output devices. Given the timing of the first contract award in late fiscal year 2011, FSSI Program Management Office officials reported that no spending was yet managed through those contracts in that fiscal year. The number of agencies participating in the initiatives varied widely. For example, the FSSI Program Management Office reported that as of fiscal year 2011, five agencies participated in the Wireless Telecommunications Expense Management Services initiative, while 95 participated in the Domestic Delivery Services II initiative. Table 2 describes the four implemented and two planned governmentwide strategic sourcing initiatives.

SmartBUY is an existing federal procurement vehicle, started in 2003, that leverages the government's buying power to reduce the cost of commercial off-the-shelf software and services. As of the end of fiscal year 2011, SmartBUY was not classified as a Federal Strategic Sourcing Initiative, but the Strategic Sourcing Working Group formally accepted it as an FSSI initiative in June 2012. The FSSI Program Management Office plans for FSSI SmartBUY, going forward, to develop strategies that address as many large software publishers as possible. The planned initiative targeting wireless rate plans and devices will aim to deliver acquisition savings through lower purchasing costs as well as operational savings through improvement of processes and information.

FSSI program officials that manage the governmentwide strategic sourcing initiatives told us that before contracts are awarded, obtaining spending commitments—especially from top-spending agencies—is important in negotiating discounted prices and implementing a successful strategic sourcing effort. However, they noted that getting agency commitments to use the FSSI initiatives can sometimes be a challenge. Even agencies that are part of strategic sourcing commodity teams sometimes do not commit to using the resulting FSSI contracts. For example, DOD representatives participated in the commodity team for the Office Supplies II effort, and DOD spending represented a large portion of the total federal government spending on office supplies. However, prior to Office Supplies II implementation, DOD committed only to continuing support of the FSSI for office supplies; it did not specifically commit to using Office Supplies II contracts.
By contrast, other agencies such as VA provided the FSSI program with letters of intent that committed the agencies to use the Office Supplies II contracts once awarded. This information was used by the FSSI program to negotiate better prices with vendors. PASS officials acknowledged that although DOD participates in commodity teams for the Federal Strategic Sourcing Initiatives, the department has not fully committed to certain FSSI contracts for a variety of reasons, including having more sophisticated requirements or different mission needs than those of civilian agencies, and having existing contract vehicles in place when some FSSI initiatives were implemented. A PASS official said that the department would be more likely to commit to current and planned FSSI contracts if those contracts showed significant savings or best value over established DOD contracts, including the cost of administration, fees, and transition. Officials added that they may also be more likely to commit to FSSI contracts if FSSI initiatives addressed products or services not previously addressed by DOD, or if the FSSI program selected products or services with more significant spending.

Although reported spending through FSSI vehicles increased from fiscal year 2009 to fiscal year 2011, spending through the vehicles continues to be limited. Only $339 million, or 15 percent, of governmentwide spending for the products and services covered by the FSSIs went through the vehicles in fiscal year 2011. However, where spending went through FSSI vehicles, the program reported savings equivalent to almost 18 percent of the spending through those vehicles. Although not all spending is suitable for strategic sourcing, if even 1 percent of total federal procurement spending were directed through the FSSIs and achieved savings equivalent to 18 percent of that spending, the federal government could save over $900 million.

Agencies cited a variety of reasons for not participating in the governmentwide FSSI initiatives. Some agencies told us they were more likely to engage in agencywide strategic sourcing than to participate in a governmentwide effort because they want to maintain control over their contracting, have unique requirements, or can get lower prices outside of the FSSI contract vehicles. For example, Energy officials told us their buyers are purchasing some items through Office Supplies II, but have also gotten lower prices on other items from current vendors by comparing prices. FSSI use is not mandatory, and agencies face no consequence for not using the FSSI contract vehicles. According to OFPP and GSA officials, they have not made use of FSSI contracts mandatory governmentwide—preferring instead to establish FSSI contract vehicles that agencies will want to use. However, though GSA is the sponsoring agency, only 28 percent of its own office supply spending in fiscal year 2011 went through Office Supplies II. Agencies can mandate FSSI use agencywide, and the Air Force, Navy, DHS, and VA have issued policies making at least consideration of some FSSI vehicles mandatory. GSA officials indicated that they are revisiting whether mandatory use policies would benefit the current FSSI initiatives.
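The utilization and projection figures in this discussion follow directly from the amounts cited above; the brief sketch below makes the arithmetic explicit. The variable names are ours, and the 1 percent scenario is the report's illustrative assumption, not a target.

    # Fiscal year 2011 amounts cited in the text.
    fssi_spend = 339e6        # spending directed through FSSI vehicles
    fssi_savings = 60e6       # reported savings on that spending
    total_fed_spend = 537e9   # total federal procurement spending

    savings_rate = fssi_savings / fssi_spend   # almost 18 percent
    covered_spend = fssi_spend / 0.15          # implied governmentwide spend on covered products
    scenario = 0.01 * total_fed_spend * savings_rate

    print(f"FSSI savings rate: {savings_rate:.0%}")
    print(f"Implied spend on covered products: ${covered_spend / 1e9:.1f} billion")
    print(f"1 percent scenario: ${scenario / 1e6:.0f} million")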
In 2012, OMB released a Cross-Agency Priority Goal Statement that requires agencies to increase their agencywide strategic sourcing efforts and to increase their use of FSSI contracts by at least 10 percent in both fiscal years 2013 and 2014, unless they can establish that their current spending patterns on related products are more cost-effective. This is a positive step and may help to increase agency use of FSSI contracts; however, the guidance does not include information on how agencies will be held accountable for meeting these goals, and it is too early to tell what effect the establishment of this goal will have.

Many of the governmentwide highest-spending categories of products and services exceed $10 billion, and therefore offer great opportunities for savings if they can be strategically sourced successfully. In fiscal year 2011, federal spending on the top 50 products and services was $283 billion, or 53 percent of total government procurement spending. Appendix II identifies the fiscal year 2011 highest governmentwide spending categories. Current governmentwide strategic sourcing initiatives do not address any of the top 50 governmentwide products and services, in part because the FSSI program excludes some of them from consideration for governmentwide strategic sourcing. The FSSI program evaluates products and services based on savings potential, diversity of the customer pool, and ease of implementation. In its fiscal year 2011 analysis of spending, before considering products and services for a governmentwide initiative, the program removed those considered unsuitable for strategic sourcing, including mission-critical products and services; products and services for which DOD alone, or two or fewer departments, accounted for 80 percent or more of funding; and construction, architect/engineering, and building maintenance services. Consequently, only about one quarter of total spending—or $129 billion—remained open for consideration for a governmentwide initiative. Officials told us that where spending on a product or service is concentrated among just a few agencies, a better approach would be for these agencies to collaborate to strategically source it rather than establish a governmentwide effort. However, many high-spending categories of services with spending spread more broadly across agencies are also not currently being targeted. FSSI Program Management Office officials acknowledged that services comprise a high volume of governmentwide procurement spending, and that the FSSI program cannot ignore them for much longer.

Current FSSI contracts address products and services that have relatively low spending compared to those that are among the top 50. For example, all products procured under the Office Supplies II FSSI combined would rank 134th in fiscal year 2011 federal procurement spending. FSSI officials reported that they selected FSSI products and services for reasons other than a high spending level, including agency adoption and standard purchase requirements across the government. For example, FSSI officials explained that the current FSSIs, including Office Supplies II, were selected to demonstrate that strategic sourcing could be successful with simpler commodities before they pursued more complicated products and services. They also may select a product or service that an agency has already been considering, to build on momentum.
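The screening the FSSI program applied lends itself to a simple filter; the sketch below encodes the three exclusions as the text describes them. The record format and the example category are hypothetical, not the program's actual data or criteria beyond what is stated above.

    # Sketch of the FSSI program's fiscal year 2011 screening of spend
    # categories, encoding the three exclusions described above.
    # The record format is hypothetical.

    EXCLUDED_SERVICE_TYPES = {
        "construction", "architect/engineering", "building maintenance",
    }

    def open_for_governmentwide_fssi(category):
        """Return True if a spend category survives all three exclusions."""
        if category["mission_critical"]:
            return False
        if category["type"] in EXCLUDED_SERVICE_TYPES:
            return False
        # Exclude categories where two or fewer departments (which covers
        # the DOD-alone case) account for 80 percent or more of funding.
        top_two = sorted(category["funding_shares"].values(), reverse=True)[:2]
        if sum(top_two) >= 0.80:
            return False
        return True

    example = {
        "type": "office supplies",
        "mission_critical": False,
        "funding_shares": {"DOD": 0.45, "VA": 0.15, "DHS": 0.12,
                           "GSA": 0.10, "Energy": 0.10, "State": 0.08},
    }
    print(open_for_governmentwide_fssi(example))  # True: spending spread broadly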
One such example is an upcoming FSSI initiative for publication licenses, chosen because of interest by the Library of Congress. The Library of Congress will also lead the initiative, marking the first time that GSA will not be the executive agent for an FSSI effort; however, this FSSI addresses only an estimated $500 million to $600 million in federal procurement spending.

Current fiscal pressures and budgetary constraints have heightened the need for agencies to take full advantage of strategic sourcing and other efficiencies. Government agencies and commercial firms tend to have more spending managed through strategic sourcing efforts when they incorporate leading practices such as using their spend analysis to inform their selection of products and services for strategic sourcing, devoting resources to strategic sourcing efforts, and measuring the benefits of ongoing efforts. These practices drive efficiencies and yield benefits beyond savings, such as increased business knowledge. Governmentwide strategic sourcing efforts have been initiated, and the four federal agencies we reviewed have improved and expanded upon their use of strategic sourcing to achieve cost savings and other benefits. However, leaders at some agencies, such as OSD and the Army, have not made a sufficient commitment to strategic sourcing, investing limited resources and failing to establish goals and performance metrics. Energy's experience has been similar to DOD's, and we have recently made related recommendations to the Secretary of Energy. DHS and DLA have shown that agencies can successfully target high-spend commodities for strategic sourcing. Despite these examples, selected agencies' current efforts and the FSSIs fall well short of addressing most federal procurement spending. Perennial high-spend areas such as services offer the biggest potential for savings but have been largely ignored in strategic sourcing efforts. Focusing only on low-risk, low-return strategic sourcing strategies diminishes the government's ability to fully leverage its enormous buying power and achieve other efficiencies. Until top-spending federal entities, especially DOD and the FSSI program, better incorporate strategic sourcing leading practices, increase the amount of spending through strategic sourcing, and direct more efforts at high-spend categories, billions of dollars in potential savings may be missed, denying agencies a valuable tool for maximizing their ability to carry out critical missions under tight budgets.

To improve departmentwide strategic sourcing efforts at DOD, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following five actions:

Develop a departmentwide approach to strategic sourcing that sets goals for spending managed through strategic sourcing vehicles; establishes procedures for the identification and tracking of departmentwide and component strategic sourcing efforts through the PASS office; implements the PASS office strategic sourcing guidance; links strategic sourcing to its Better Buying Power memorandum; and establishes metrics, such as utilization rates, to track progress toward these goals.

Evaluate whether the current resources of OSD's PASS office are sufficient to enable the office to fulfill its strategic sourcing mission.

Evaluate existing acquisition strategies for DOD's current departmentwide acquisitions, and where these represent a strategic sourcing approach, ensure that data on these programs are submitted to the PASS office.
Identify and evaluate the best way to strategically source DOD's highest spending categories of products and services (e.g., governmentwide vehicles, interagency collaboration, departmentwide vehicles).
Identify and submit to the FSSI program a list of products and services that, if developed as FSSIs, would present the best opportunities for future DOD participation.
To improve strategic sourcing efforts at the Army, and in light of significant potential savings and performance improvements, we recommend that the Secretary of Defense take the following action:
Evaluate whether the resources that the Army's Policy and Oversight Directorate has allocated to strategic sourcing are sufficient to enable the Directorate to fulfill its strategic sourcing mission.
To help ensure that VA's strategic sourcing efforts further reflect leading practices, and in light of significant potential savings and performance improvements, we recommend that the Secretary of Veterans Affairs direct strategic sourcing staff to take the following two actions:
Based on analysis of agencywide spending, evaluate the best way to strategically source VA's highest spending categories of products and services (e.g., governmentwide vehicles, interagency collaboration, agencywide vehicles).
Set goals for spending managed through strategic sourcing, and establish metrics, such as utilization rates, to monitor progress toward these goals.
To help ensure that government strategic sourcing efforts further reflect leading practices, and in light of significant potential savings and performance improvements, we recommend that the Director of OMB direct the Administrator of OFPP to take the following two actions:
Issue an updated memorandum or other direction to federal agencies that includes guidance on calculating savings (including administrative cost savings) and establishes additional metrics to measure progress toward goals.
Direct the FSSI program to report on the program's assessment of whether each top-spend product and service governmentwide is suitable for an FSSI, with a plan to address those products or services that are suitable for strategic sourcing.
We sent copies of a draft of this report to DOD, Energy, DHS, VA, GSA, and OMB. In its written comments (reproduced in appendix III), DOD concurred with our recommendations and stated that it would take action to adopt them. In its comments (reproduced in appendix IV), DHS emphasized its commitment to improving and expanding its use of strategic sourcing. In its written comments (reproduced in appendix V), VA concurred with our recommendations and provided additional information on its strategic sourcing activities. As acknowledged in VA's letter, the agency did not provide these data in response to requests made during our review, and therefore we were unable to evaluate them. OMB staff provided oral comments concurring with our recommendations and stated that they are developing guidance designed to accomplish the intended results in collaboration with a new senior-level interagency governance group. In their oral comments, OMB staff also noted that our report compares the percentage of spending through strategic sourcing to total procurement spending, rather than to spending on the products and services for which strategic sourcing is applicable. In response, we revised our draft report to acknowledge more explicitly that not all spending is suitable for strategic sourcing.
However, during our review we observed that agencies held different views on whether certain categories of products and services are addressable through strategic sourcing, and our recommendations aim to encourage agencies to make this determination for each high-spend product or service through a structured analysis. DOD also provided technical comments, which we considered and incorporated into the report as appropriate. Energy and GSA provided only technical comments, which we likewise considered and incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of OMB; the Administrator of OFPP; the Administrator of General Services; and the Secretaries of the Departments of Homeland Security, Energy, Veterans Affairs, and Defense, as well as the Secretaries of the Air Force, the Army, and the Navy, and the Director of DLA. This report will also be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix VI. We were asked to review the status of strategic sourcing efforts both at selected agencies and governmentwide, and to identify challenges, leading practices, and the potential for additional strategic sourcing savings. Accordingly, we assessed (1) the extent to which selected agencies managed spending and achieved savings through strategic sourcing, and whether buying power could be better leveraged; (2) key challenges agency and Federal Strategic Sourcing Initiative (FSSI) officials face in strategically sourcing products and services; and (3) the extent to which FSSIs managed spending and achieved savings through strategic sourcing, and whether governmentwide buying power could be better leveraged. To evaluate agency strategic sourcing efforts, we selected for review four agencies—the Department of Defense (DOD), Department of Homeland Security (DHS), Department of Veterans Affairs (VA), and Department of Energy (Energy)—that were among the top ten agencies in fiscal year 2011 procurement obligations and that together accounted for 80 percent of total federal procurement spending. In addition, to more fully assess strategic sourcing at DOD, we reviewed the efforts of four component agencies—Air Force, Army, Navy, and the Defense Logistics Agency (DLA)—which accounted for 88 percent of DOD fiscal year 2011 spending, as well as departmentwide efforts managed by DOD's Defense Program Acquisition and Strategic Sourcing (PASS) office, which reports to the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L). We requested and analyzed data on active and planned agencywide strategic sourcing initiatives, as well as any other efforts the agencies might have begun but discontinued.
We asked the agencies for fiscal years 2009 through 2011 information on:
Federal Procurement Data System—Next Generation (FPDS-NG) Product Service Codes associated with the initiatives;
spending through their strategic sourcing vehicles;
the amount of savings achieved;
the amount of any savings achieved through means other than cost reduction; and
methods for calculating savings.
To avoid double counting, we excluded agency data on spending and savings achieved through FSSIs, and limited our analysis to the data provided on their agencywide initiatives. We did not independently verify the information that agencies reported to us, but we did assess information from agency officials about the reliability of the data and resolved some discrepancies. We determined that these data were sufficiently reliable for the purposes of analyzing agency-reported strategic sourcing spending and savings data. To identify the products and services with the highest federal procurement spending—both governmentwide and by our selected agencies—we analyzed fiscal year 2011 data from FPDS-NG. To assess the reliability of FPDS-NG data, we reviewed existing documentation and electronically tested the data to identify obvious problems with completeness or accuracy. We determined that these data were sufficiently reliable for the purpose of reporting governmentwide and agency spending on products and services. At the selected agencies and the General Services Administration (GSA), we interviewed strategic sourcing officials to determine the status of current and planned governmentwide and agencywide strategic sourcing efforts, with a focus on key challenges and leading practices. Specifically, we asked about governance structure, obstacles to strategic sourcing, enablers or good practices in strategic sourcing, agencywide initiatives, participation in governmentwide initiatives, spend analysis, and savings and other potential benefits from strategic sourcing efforts. We also obtained and reviewed agency strategic sourcing policies and other documentation, including spend analyses, where applicable. In addition, we reviewed previous GAO reports on leading company practices for strategic sourcing as well as related reports on acquisition, contract management, government streamlining, and duplication, overlap, and fragmentation in the federal government. We also reviewed work papers from a 2012 engagement on best commercial practices for acquiring services. In addition, we examined related testimony before various congressional committees. Furthermore, we reviewed the Defense Business Board's 2011 Report to the Secretary of Defense on Strategic Sourcing, as well as literature from industry sources on successful strategic sourcing efforts. To examine how agencies consider small businesses in the strategic sourcing process, we interviewed personnel at the DHS and VA Offices of Small and Disadvantaged Business Utilization and the Air Force Office of Small Business. We also analyzed selected data to determine the number of small businesses participating in strategic sourcing efforts and the level of spending directed to small business contractors. To evaluate governmentwide strategic sourcing, we examined the FSSI efforts led by GSA and overseen by the Office of Federal Procurement Policy (OFPP). We interviewed strategic sourcing officials at GSA to determine the status of current and planned FSSI initiatives, again looking at challenges and leading practices.
We sought information on topics similar to those discussed with officials at our selected agencies, and obtained and reviewed FSSI documentation, including spend analysis. As with the selected agencies, we requested and analyzed GSA information for fiscal years 2009 through 2011 on Product Service Codes, governmentwide spending, and governmentwide savings associated with the FSSIs. In addition, we used the results of prior work on the Office Supplies II FSSI. We did not independently verify the FSSI information GSA reported to us, but we did assess information from agency officials about the reliability of the data and resolved some discrepancies. We determined that these data were sufficiently reliable for analyzing reported governmentwide FSSI spending and savings data. To assess OFPP's oversight of strategic sourcing, we met with OFPP officials to discuss the agency's roles in advancing the FSSIs and in facilitating agency participation in the FSSIs, consistency across agencies in estimating strategic sourcing savings, and the selection of goods and services for future FSSIs. We also obtained and reviewed OFPP documentation, including memoranda promoting increased use of strategic sourcing, and observed a monthly meeting of OFPP's Chief Acquisition Officers Council Strategic Sourcing Working Group. We conducted this performance audit from August 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, W. William Russell, Assistant Director; Joseph Fread; Laura Greifner; Julia Kennon; John Krump; Leigh Ann Nally; Michael Palinkas; Ralph Roffo; Roxanna Sun; and Ann Marie Udale made key contributions to this report. Managing for Results: GAO's Work Related to the Interim Crosscutting Priority Goals under the GPRA Modernization Act. GAO-12-620R. Washington, D.C.: May 31, 2012. Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012. VA Health Care: Methodology for Estimating and Process for Tracking Savings Need Improvement. GAO-12-305. Washington, D.C.: February 27, 2012. Department of Energy: Additional Opportunities Exist to Streamline Support Functions at NNSA and Office of Science Sites. GAO-12-255. Washington, D.C.: January 31, 2012. Strategic Sourcing: Office Supplies Pricing Study Had Limitations, but New Initiative Shows Potential for Savings. GAO-12-178. Washington, D.C.: December 20, 2011. Federal Contracting: OMB's Acquisition Savings Initiative Had Results, but Improvements Needed. GAO-12-57. Washington, D.C.: November 15, 2011. Streamlining Government: Key Practices from Select Efficiency Initiatives Should Be Shared Governmentwide. GAO-11-908. Washington, D.C.: September 30, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Contracting Strategies: Data and Oversight Problems Hamper Opportunities to Leverage Value of Interagency and Enterprisewide Contracts. GAO-10-367. Washington, D.C.: April 29, 2010.
Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes. GAO-07-20. Washington, D.C.: November 9, 2006. Homeland Security: Successes and Challenges in DHS's Efforts to Create an Effective Acquisition Organization. GAO-05-179. Washington, D.C.: March 29, 2005. Best Practices: Using Spend Analysis to Help Agencies Take a More Strategic Approach to Procurement. GAO-04-870. Washington, D.C.: September 16, 2004. Contract Management: High-Level Attention Needed to Transform DOD Services Acquisition. GAO-03-935. Washington, D.C.: September 10, 2003. Best Practices: Improved Knowledge of DOD Service Contracts Could Reveal Significant Savings. GAO-03-661. Washington, D.C.: June 9, 2003. Best Practices: Taking a Strategic Approach Could Improve DOD's Acquisition of Services. GAO-02-230. Washington, D.C.: January 18, 2002. Best Practices: DOD Can Help Suppliers Contribute More to Weapon Systems Programs. GAO/NSIAD-98-87. Washington, D.C.: March 17, 1998.
GAO has reported that the government is not fully leveraging its aggregate buying power, and that strategic sourcing, a process that moves a company away from numerous individual procurements to a broader aggregate approach, has allowed companies to achieve savings of 10 to 20 percent. A similar savings rate applied to the federal procurement budget would equal more than $50 billion. In 2005, the Office of Management and Budget directed agencies to use strategic sourcing and established the FSSI program to manage governmentwide efforts. GAO was asked to assess (1) the extent to which selected agencies managed spending and achieved savings through strategic sourcing, (2) key challenges selected agency and FSSI officials face in strategically sourcing products and services, and (3) the extent to which the FSSI program managed spending and achieved savings through strategic sourcing. To do this, GAO selected four agencies that were among the highest in fiscal year 2011 procurement obligations—DOD, DHS, VA, and Energy—and reviewed governmentwide FSSI efforts. For each, GAO analyzed strategic sourcing data and policies, and interviewed responsible officials. Selected agencies leveraged only a fraction of their buying power through strategic sourcing and achieved limited savings. In fiscal year 2011, the Departments of Defense (DOD), Homeland Security (DHS), Energy, and Veterans Affairs (VA) accounted for 80 percent of the $537 billion in federal procurement spending, but reported managing about 5 percent, or $25.8 billion, through strategic sourcing efforts. These agencies reported savings of $1.8 billion—less than one-half of one percent of procurement spending. While strategic sourcing may not be suitable for all procurement spending, leading companies strategically manage about 90 percent of their procurements and report annual savings of 10 percent or more. Further, most agencies' efforts do not address their highest-spending areas, such as services, which may provide opportunities for additional savings. Most selected agencies and the Federal Strategic Sourcing Initiative (FSSI) program have not fully adopted a strategic sourcing approach. In prior work, GAO found that sustained leadership and effective metrics are important factors in implementing strategic sourcing. However, leaders at DOD have dedicated limited resources to strategic sourcing, and leaders at VA and Energy are just beginning to align resources for agencywide strategic sourcing efforts. A lack of clear guidance on metrics for measuring success has also impacted the management of ongoing FSSI efforts as well as most selected agencies' efforts. In contrast, DHS leaders have stood up a centralized office and hold senior managers accountable for meeting goals. DHS sets targets for use of strategic sourcing contracts and reported that nearly 20 percent of its fiscal year 2011 procurement spending was directed through strategically sourced contracts. The FSSI program managed little spending through strategic sourcing initiatives but reported considerable savings. In fiscal year 2011, the program managed $339 million through several governmentwide initiatives and reported $60 million in savings. However, total spending through the program remains low, in part because the FSSI contracts have low rates of use and the program has not yet targeted the products and services on which the government spends the most.
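The percentages cited above follow directly from the dollar figures in this report. The short sketch below is provided only as an illustration; the dollar amounts come from this report, while the variable names and rounding choices are our own:

```python
# Illustrative check of the fiscal year 2011 spending shares cited in this
# report. All dollar figures are taken from the report itself.

total_procurement = 537e9     # total federal procurement spending
four_agency_managed = 25.8e9  # spending the four agencies managed strategically
four_agency_savings = 1.8e9   # savings the four agencies reported
top_50_spending = 283e9       # spending on the top 50 products and services
open_for_fssi = 129e9         # spending left after the FSSI suitability screens

pct = lambda part, whole: 100 * part / whole

print(f"Strategically managed share: {pct(four_agency_managed, total_procurement):.1f}%")  # ~4.8%, i.e., about 5 percent
print(f"Reported savings share:      {pct(four_agency_savings, total_procurement):.2f}%")  # ~0.34%, under one-half of one percent
print(f"Top 50 share of spending:    {pct(top_50_spending, total_procurement):.0f}%")      # ~53 percent
print(f"Open to governmentwide FSSI: {pct(open_for_fssi, total_procurement):.0f}%")        # ~24 percent, about one quarter
```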
GAO recommends a number of actions that OMB, DOD, and VA can take to achieve more savings, such as applying strategic sourcing practices to their highest-spending procurement categories and setting targets for use of strategic sourcing contracts. All three agencies concurred with GAO's recommendations.
Within DOD, the commitment to make significant investments in developing a new product typically takes place at a decision review known as Milestone B, which authorizes military service officials to enter the engineering and manufacturing development phase of the DOD acquisition process, select a development contractor, and sign a development contract. The process of identifying and understanding requirements typically begins when a sponsor, usually a military service, submits an Initial Capabilities Document that identifies the existence of a capability gap, the operational risks associated with the gap, and a recommended solution or preferred set of solutions for filling the gap. Potential solutions are then assessed in an Analysis of Alternatives prior to the start of the technology maturation and risk reduction phase of DOD's acquisition process. According to DOD guidance, an Analysis of Alternatives assesses the costs and benefits of potential materiel solutions that could fill the capability gaps documented in an Initial Capabilities Document and supports a decision on the most cost-effective solution. Operational requirements for that preferred solution are then defined in a draft Capability Development Document that goes through several stages of military service- and DOD-level review and validation. Our work on product-development best practices has found that clearly understood and stable program requirements are critical to establishing a sound, executable business case for any product development program. Figure 1 shows the phases of DOD's acquisition process. In a March 2016 report, we found that after completing a review of its airborne intelligence, surveillance, and reconnaissance (ISR) portfolio, OSD directed the Navy in January 2016 to focus on developing and fielding an unmanned Carrier Based Aerial Refueling System, which represented a significant shift in requirements. The program was subsequently designated the MQ-25. Previously, the Navy had been largely focused on developing and fielding a system that could provide ISR and air-to-ground strike capabilities, with the potential to add aerial refueling capability in the future. That system, referred to as the Unmanned Carrier Launched Airborne Surveillance and Strike (UCLASS) system, was to have the potential to operate in highly contested environments. Under the MQ-25 program, the Navy is now focused on developing and fielding an unmanned tanker capable of operating from the carrier, in a permissive environment, to refuel other naval aircraft and provide only limited ISR capability. The overall system is expected to extend the range of the carrier air wing's mission effectiveness and increase the number of F/A-18E/Fs available for strike fighter missions, among other things. The MQ-25 system will consist of three segments: an aircraft segment; a control system and connectivity segment (CS&C); and an aircraft carrier segment (see figure 2). The aircraft segment is to develop a carrier-suitable unmanned vehicle and associated support systems. The CS&C segment is to interface with existing command and control systems and the tasking, processing, exploitation, and dissemination system. The aircraft carrier segment is to make modifications to upgrade the existing carrier infrastructure to support unmanned aircraft systems. These three segments will be managed and integrated by the Navy's Unmanned Carrier Aviation program office, acting as a Lead Systems Integrator.
Between fiscal years 2017 and 2022, the Navy has budgeted almost $2.5 billion to continue development of the MQ-25 carrier and control segments and to begin development of the aircraft segment. Over that period, the annual funding requirements for the overall MQ-25 system will increase from $89.0 million in 2017 to $554.6 million in 2022 (see figure 3). In the first quarter of fiscal year 2018, the Navy plans to request MQ-25 aircraft proposals from four competing contractors. Then, in the summer of 2018, the Navy expects to hold a Milestone B review to assess whether the Navy is ready to enter the engineering and manufacturing development phase of the acquisition process for the aircraft segment and downselect to one of the four contractors. In July 2017, the Joint Requirements Oversight Council (JROC) validated system requirements for the MQ-25. The Navy has two primary requirements, known as key performance parameters: (1) carrier suitability and (2) air refueling. Carrier suitability is defined by the Navy as the ability of the aircraft to effectively operate on and from all current and planned aircraft carriers and to integrate into carrier air wing operations. Air refueling indicates the ability of the aircraft to be equipped as a sea-based tanker to refuel other carrier-based aircraft—a mission currently performed by the Navy's F/A-18E/F Super Hornets. The MQ-25 requirements have evolved intermittently over the past 16 years instead of following the more sequential processes described in DOD requirements and acquisition guidance. The MQ-25 requirements are not traced back to a single, standalone Initial Capabilities Document (ICD). Instead, they address capability gaps identified in two different such documents that were developed more than 4 years apart. Over time, the Navy conducted various analyses, each focused on different aspects of those capability gaps. Our assessment of the content of the Navy's underlying documentation and analyses, taken together, is that they provide a basis for the current set of MQ-25 requirements. Figure 4 illustrates the iterative evolution of the MQ-25 requirements. As noted in the figure, after receiving direction from OSD in January 2016 to pursue a carrier-based airborne tanking system, the Navy began the process of defining more specific MQ-25 aircraft requirements and reducing technology and design risks. In September and October 2016, the Navy awarded cost-plus-fixed-fee contracts to each of the four competing contractors to conduct risk reduction activities, including concept refinement and requirements trade analysis. The total combined value of the contracts, including options, is approximately $250 million. The Navy expects the contractors to provide concepts for an unmanned aircraft that could meet the tanking requirements of the F/A-18E/F in the mid-2020s, while also providing some ISR capabilities. Our comparison of the Navy's final requirements document—the Carrier Based Unmanned Aircraft System Capability Development Document—with earlier draft versions found that the Navy reduced the total number of key performance parameters from seven to two—carrier suitability and air refueling—and made adjustments to both. The Navy refined the carrier suitability requirement to focus more clearly on the MQ-25's basic ability to operate on and from the aircraft carrier. For air refueling, the Navy adjusted the mission focus and the required refueling capacity at a specific distance from the ship.
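As a rough illustration of how steep the funding ramp cited at the start of this section is, the sketch below derives two figures not stated in the report: the implied compound annual growth rate between the fiscal year 2017 and 2022 requests, and the average annual amount over the period. This is our own back-of-the-envelope arithmetic based only on the endpoint and total figures above, not program data:

```python
# Back-of-the-envelope look at the MQ-25 funding profile. Only the endpoints
# ($89.0M in FY2017, $554.6M in FY2022) and the ~$2.5B six-year total appear
# in the report; the growth rate and average below are derived, not reported.

fy2017 = 89.0e6
fy2022 = 554.6e6
total_fy2017_2022 = 2.5e9
steps = 5  # five year-over-year steps between FY2017 and FY2022

cagr = (fy2022 / fy2017) ** (1 / steps) - 1
print(f"Implied compound annual growth, FY2017-FY2022: {cagr:.0%}")  # ~44% per year
print(f"Average annual funding over the 6 fiscal years: ${total_fy2017_2022 / 6 / 1e6:.0f}M")  # ~$417M
```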
Our work in product-development best practices has found that as detailed requirements are identified, decision makers can make informed trades between the requirements and available resources, potentially achieving a match and establishing a sound basis for a program business case before entering the product development phase of the defense acquisition system. The Navy's MQ-25 acquisition strategy, approved by Navy leadership in April 2017, reflects key aspects of an evolutionary, knowledge-based acquisition approach. While the Navy is still developing, refining, and finalizing most of the acquisition documentation that will make up its program business case, our review of its acquisition strategy and other available documentation showed that they reflect key aspects of a knowledge-based approach and generally align with what we have found to be product-development best practices:
Using open systems standards and an evolutionary approach: The Navy is planning to use open systems standards and an evolutionary development approach to develop, fly, and deploy the MQ-25 over time. The Navy expects to provide primarily aerial refueling and ISR capabilities first, while using open systems standards to support incremental capability upgrades in the future, such as adding the capability to receive fuel, adding weapons, and improving radars. In July 2013, we concluded that the adoption of open systems standards in defense acquisitions can provide significant cost and schedule savings. In addition, we have previously reported that adopting a more evolutionary, incremental approach can enable the capture of design and manufacturing knowledge and increase the likelihood of success in providing timely and affordable capabilities.
Using knowledge-based criteria to assess progress and inform key decisions: The Navy has established knowledge-based criteria for seven key points during MQ-25 aircraft development. Those points include the development contract award, the system design review, the low-rate production contract award, and the start of initial operational testing. At each point, the Navy plans to assess program progress against the established criteria and provide briefings to key leadership stakeholders before moving into the next phase of development. If implemented, this knowledge-based approach would align with best practices that we identified in our body of work related to product development. Specifically, we have found that achieving positive program outcomes requires the use of a knowledge-based approach to product development that demonstrates high levels of knowledge attained at key junctures.
Constraining development schedule: According to the Navy's acquisition strategy, the MQ-25 aircraft is expected to take 6 to 8 years from the start of product development (i.e., Milestone B) to the fielding of an initial operational capability. Based on our work in product-development best practices, constraining the development phase of a program to 5 or 6 years is preferred because, among other things, it aligns with DOD's budget planning process and fosters the negotiation of trade-offs in requirements and technologies.
Limiting technology risk: The Navy expects to significantly reduce technology risk during development by mandating that technologies, or subsystems, for the MQ-25 aircraft must be demonstrated in a relevant environment to be included in the design.
If a technology is identified that does not meet these criteria, the Navy plans to defer that technology and include it only when it reaches the specified level of maturity. Federal statute and product-development best practices illustrate the critical importance of demonstrating high levels of technology maturity prior to entering the product development phase of the defense acquisition system. As we reported in March 2017, failure to fully mature technologies prior to developing the system design can lead to redesign and cost and schedule growth if later discoveries during development lead to revisions.
Limiting design risk: While the Navy does not plan to hold an MQ-25 system-level preliminary design review prior to the start of development, as best practices recommend, it is tailoring its previous UCLASS aircraft requirements, which may allow the contractors to leverage the preliminary design knowledge gained under that program. In addition, the Navy is leveraging knowledge gained under the four recent risk reduction contracts, as well as various levels of prototyping done by each of the contractors and the Navy. Our work in product-development best practices emphasizes the importance of gaining early design knowledge to reduce design risk before beginning a product development. In June 2017, we reported that prototyping helped programs better understand design requirements, the feasibility of proposed solutions, and cost—key elements of a program business case.
Developing an independent cost estimate: Cost analysts within the Cost Analysis and Program Evaluation office of the Office of the Secretary of Defense are in the process of developing an independent cost estimate for the MQ-25 aircraft. Federal statute, DOD acquisition guidance, and product-development best practices illustrate the importance of having an independent cost estimate to inform the business case for a new product development program. Cost Analysis and Program Evaluation officials explained that they had not yet completed their estimate, but they plan to have it done in time to support the Navy's MQ-25 Milestone B review in the summer of 2018.
Given the early focus on defining requirements and reducing risk prior to the start of product development, the Navy plans to award a fixed-price incentive (firm target) contract for MQ-25 aircraft development. This type of contract is designed to provide a profit incentive for the contractor to control costs. It specifies target cost, target profit, and ceiling price amounts, with the latter being the maximum amount that may be paid to the contractor. The Navy plans to issue a request for proposals to the four competing contractors in October 2017 and award the contract to one of those four contractors the following year. With the Milestone B review scheduled in the summer of 2018, the ultimate success of the MQ-25 program largely depends on the Navy's ability to present an executable business case and then effectively implement its planned approach. We are not making recommendations in this report. We provided DOD with a copy of this report, and it provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. GAO staff who made key contributions to this report are listed in the appendix. In addition to the contact named above, key contributors to this report were Travis Masters, Assistant Director; Marvin E. Bonner; Laura Greifner; Kristine Hassinger; and Roxanna Sun.
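A note on the contract mechanics discussed above: under a fixed-price incentive (firm target) arrangement (see FAR 16.403-1), the contractor's final profit adjusts by an agreed share of any cost underrun or overrun, and the government's payment is capped at the ceiling price. The sketch below illustrates those general mechanics with hypothetical numbers; the MQ-25 contract's actual target cost, profit, ceiling, and share ratio are not disclosed in this report:

```python
# Generic fixed-price incentive (firm target) price adjustment, illustrated
# with hypothetical numbers -- NOT the MQ-25 contract's actual terms.

def fpif_final_price(target_cost, target_profit, ceiling_price,
                     actual_cost, gov_share=0.7):
    """Final price under a fixed-price incentive (firm target) arrangement.

    Profit moves against cost growth: for every dollar of overrun, profit
    drops by the contractor's share (1 - gov_share); for every dollar of
    underrun, it rises by the same share. The government never pays more
    than the ceiling price.
    """
    contractor_share = 1 - gov_share
    final_profit = target_profit - contractor_share * (actual_cost - target_cost)
    return min(actual_cost + final_profit, ceiling_price)

# Hypothetical terms: $800M target cost, $64M target profit (8%), $950M ceiling.
on_target = fpif_final_price(800e6, 64e6, 950e6, actual_cost=800e6)
overrun = fpif_final_price(800e6, 64e6, 950e6, actual_cost=900e6)
print(f"On target:     ${on_target/1e6:.0f}M")  # $864M, the target price
print(f"$100M overrun: ${overrun/1e6:.0f}M")    # $934M: contractor absorbs $30M of the overrun
```

The point of the structure, as the report notes, is the cost-control incentive: because the contractor's profit shrinks with every dollar of overrun and the ceiling caps the government's exposure, overruns are shared rather than passed through.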
The Navy expects to invest almost $2.5 billion through fiscal year 2022 in the development of an unmanned aerial refueling system referred to as the MQ-25. The MQ-25 is the result of a restructuring of the former Unmanned Carrier-Launched Airborne Surveillance and Strike system. The program is expected to deliver an unmanned aircraft system that operates from aircraft carriers and provides aerial refueling to other Navy aircraft as well as intelligence, surveillance, and reconnaissance capabilities. The Navy plans to release a request for proposals for air system development by October 2017 and award a development contract one year later. A House Armed Services Committee report on a bill for the National Defense Authorization Act for Fiscal Year 2017 contained a provision for GAO to review the status of the MQ-25 program. This report assesses the extent to which the MQ-25's acquisition strategy is (1) rooted in validated requirements and (2) structured to follow a knowledge-based acquisition process. To do this work, GAO reviewed the Navy's requirements documentation, acquisition strategy, and other relevant documents and compared them with acquisition statutes, Department of Defense acquisition policy, and previous GAO reports and best practices. GAO also discussed the MQ-25 requirements and acquisition strategy with the Navy program office and other cognizant officials. The MQ-25 requirements have been validated by DOD's Joint Requirements Oversight Council. The Navy has identified two primary requirements: carrier suitability, which means the ability to operate on and from the Navy's aircraft carriers; and air refueling, which is the ability to provide fuel to other carrier-based assets while in flight. While the MQ-25 system is also expected to possess intelligence, surveillance, and reconnaissance capabilities, those capabilities are not considered primary requirements. According to the program's acquisition strategy, the MQ-25 system will consist of three segments: an air segment; a control and connectivity segment, which will interface with existing command and control systems; and an aircraft carrier segment, which will make modifications to upgrade existing carrier infrastructure. These three segments will be managed and integrated by the Navy's Unmanned Carrier Aviation program office, acting as a Lead Systems Integrator (see figure below). The Navy has established a knowledge-based approach for acquiring the MQ-25 aircraft. For example, the Navy plans to take an incremental approach to develop and evolve the MQ-25 over time. Further, the Navy expects to use knowledge-based criteria to assess progress at key decision points during development and to use only technologies with high levels of maturity. With the Milestone B review scheduled in the summer of 2018—signaling the beginning of development—the ultimate success of the MQ-25 program depends heavily on the Navy's ability to present an executable business case and then effectively implement its planned approach. GAO is not making recommendations. DOD's technical comments are incorporated in this report.
VA provides health care services to various veteran populations—including an aging veteran population and a growing number of younger veterans returning from the military operations in Afghanistan and Iraq. VA operates approximately 150 hospitals, 130 nursing homes, and 800 outpatient clinics, as well as other facilities, to provide care to veterans. In general, veterans must enroll in VA health care to receive VA's medical benefits package—a set of services that includes a full range of hospital and outpatient services, prescription drugs, and long-term care services provided in veterans' own homes and in other locations in the community. VA also provides some services that are not part of its medical benefits package, such as long-term care provided in nursing homes. Each year, VA develops a health care budget estimate of the resources needed to provide these services for 2 fiscal years. Typically, VA's Veterans Health Administration (VHA), which administers VA's health care program, starts to develop a health care budget estimate approximately 10 months before the President submits the budget request to Congress the following February. The budget estimate includes the total cost of providing health care services, including direct patient costs as well as costs associated with management, administration, and maintenance of facilities. VA develops most of its budget estimate for health care services using the Enrollee Health Care Projection Model (EHCPM). VA uses other methods to develop the remaining parts of its budget estimate, that is, the costs of long-term care and other health care programs. VA's annual budget estimate for a fiscal year includes estimates of anticipated funding from several sources. These sources include new appropriations, which refer to the appropriations to be provided during the current annual appropriations process for the upcoming fiscal year and, with respect to advance appropriations, the next fiscal year. For example, VA estimated it needed $52.7 billion in new appropriations for fiscal year 2013 and $54.5 billion for fiscal year 2014. In addition to new appropriations, sources of funding include resources expected to be available from unobligated balances and collections and reimbursements that VA anticipates it will receive in the fiscal year. VA's collections include third-party payments from veterans' private health care insurance for the treatment of nonservice-connected conditions and veterans' copayments for outpatient medications. VA's reimbursements include amounts VA receives for services provided under service agreements with the Department of Defense (DOD). VA's health care budget estimate informs the President's annual request for appropriations for VA health care services, which includes an advance appropriations request for these services. The budget estimate can change during each budget formulation cycle, due to the availability of updated data and the successively higher levels of review in VA and OMB before the President's budget request is submitted to Congress. The Secretary of VA considers the health care budget estimate developed by VHA when assessing resource requirements among competing interests within VA, and OMB considers overall resource needs and competing priorities of other agencies when deciding the level of funding requested for VA's health care services. VA prepares a budget justification that provides information supporting the policy and funding decisions in the President's budget request.
In its budget justification, VA includes estimates related to the following:
Ongoing health care services, which include acute care, rehabilitative care, mental health, long-term care, and other health care programs.
Initiatives, which are proposals by the Secretary of VA or by the President to provide, expand, or create new health care services. Some of the proposed initiatives can be implemented within VA's existing authority, while other initiatives would require a change in law.
Operational improvements, which are changes in the way VA manages its health care system to lower costs, such as changes to its purchasing and contracting strategies.
Collections and reimbursements, which are resources VA expects to collect from health insurers of veterans who receive VA care for nonservice-connected conditions and from other sources, such as veterans' copayments, and to receive as reimbursement for services provided to other government agencies or private or nonprofit entities.
The President's fiscal year 2013 budget request for VA health care services was $165 million more than the advance appropriations request for the same year. This increase came about as a result of changes in the estimates supporting the two requests. Specifically, the President's fiscal year 2013 request reflected an estimate of funding needed for initiatives that increased by $2 billion and an estimate for ongoing health care services that decreased by $2.1 billion, for a net decrease of $110 million. In addition, VA's estimate of anticipated resources from collections and reimbursements decreased by $275 million. This decline in anticipated resources was partially offset by the $110 million decrease in expected obligations, which resulted in the net increase in the President's request of $165 million. (See table 1.) Three factors accounted for most of the changes in the estimates that supported the President's fiscal year 2013 budget request when compared to the earlier, advance appropriations request; however, VA, in its budget justification, was not transparent about two of the factors. The three factors that accounted for the $2 billion increase in the initiatives estimate and the $2.1 billion decrease in the ongoing health care services estimate were (1) a new approach in reporting the estimate for initiatives, (2) updated assumptions and data used to estimate ongoing health care services, and (3) additional funding needed for initiatives. The first factor—VA's new reporting approach—accounted for $1.2 billion of the increase in VA's initiatives estimate and a corresponding decrease in VA's ongoing health care services estimate. The second factor accounted for a $900 million decrease in VA's ongoing health care services estimate. This decrease was largely offset by the third key factor—an almost $800 million increase in additional funding for initiatives. (See table 2.)
A new approach in reporting the estimate for initiatives. VA used a new reporting approach for initiatives that combined both funding for initiatives and funding for certain ongoing health care services in its initiatives estimate, which increased VA's initiatives estimate and decreased VA's ongoing services estimate. In prior budget justifications, VA's estimated funding for initiatives included only funding identified for initiatives during that year, while funding needs for all ongoing services were included in VA's estimate for ongoing health care services.
However, VA, in its budget justification, did not disclose that it had used a new reporting approach for initiatives. OMB staff and VA officials told us that the reason for this change in reporting was to be more transparent about the total amount of funding needed to support VA's initiatives. Nevertheless, by not stating in its budget justification that it made this change, VA has not made it transparent that the estimate for initiatives is greater and the estimate for ongoing services is less than they would have been under VA's past reporting approach.
Updated data and assumptions to estimate ongoing health care services. As reported in its budget justification, VA used updated assumptions and data, which reduced VA's estimate for ongoing health care services. Specifically, the amount of funding needed to support health care services estimated by the EHCPM decreased because VA updated some of the assumptions used in the EHCPM. For example, VA updated the EHCPM's assumption accounting for the pay freeze for civilian employees in fiscal years 2011 and 2012, which reduced the base salary of VA employees in future years. VA also used updated data to adjust the estimates produced by the EHCPM and the estimates for long-term care and other health care programs. Updated data for long-term care and other health care programs generally indicated that costs for these services would grow at a slower rate than indicated by the data used to support the President's fiscal year 2013 advance appropriations request.
Additional funding for initiatives. According to VA's budget justification, as a result of the reduced estimate for ongoing health care services, VA increased the estimate of funding needed for its initiatives. This estimate included funding for all initiatives for which funding was not requested in the fiscal year 2013 advance appropriations request, as well as increased funding for some initiatives for which funding had been identified in the earlier request. However, in its fiscal year 2013 budget justification, VA did not make it clear that part of the additional increase in its initiatives estimate occurred because VA's earlier estimate in support of the advance appropriations request did not include funding for all the initiatives the agency intended to continue. According to OMB staff, the purpose of the advance appropriations request is to provide assurance for the continuation of ongoing health care services and select initiatives that represent direct care to veterans. As a result, rather than including estimates of funding needed to support all initiatives in advance appropriations requests—including the fiscal year 2013 and fiscal year 2014 advance appropriations requests—the funding needs for all initiatives are taken into account during the following budget formulation cycle. At that time, once updated data are available to produce revised estimates for ongoing health care services, VA and OMB assess the amount of resources likely to be available to fund initiatives in the context of overall budget constraints. However, VA did not state that some initiatives for which estimates were included in the fiscal years 2013 and 2014 advance appropriations requests would require additional funding if the initiatives were to be continued. (Table 3 indicates the difference in the fiscal year 2013 initiative estimates attributable to VA's new reporting approach versus additional funding.)
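Because new appropriations must cover expected obligations not met by other resources, the $165 million change described above can be reconciled with simple arithmetic. The sketch below is our own illustration using the rounded figures reported in this discussion and in table 1; it is not drawn from VA's budget systems:

```python
# Reconciling the $165 million increase in the President's fiscal year 2013
# request over the advance appropriations request (amounts in millions of
# dollars, taken from this report; negative values are decreases).

obligations_change = -110  # initiatives up ~$2.0B, ongoing services down ~$2.1B
resources_change = -275    # anticipated collections and reimbursements fell

# The request rises when expected obligations rise or when the anticipated
# resources available to offset them fall.
request_change = obligations_change - resources_change
print(f"Change in the request: {request_change:+d} million")  # +165, matching table 1
```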
VA’s budget justification is used to provide Congress with relevant information for making decisions. The lack of transparency regarding the factors that changed VA’s estimates for ongoing health care services and initiatives results in unclear information for congressional deliberation. Our analysis of the estimates supporting the President’s fiscal year 2013 budget request found that VA’s supporting estimates (1) do not address historical discrepancies between estimated and actual NRM spending and (2) lack analytical support for expected savings from some operational improvements. Regarding NRM, VA’s fiscal year 2013 estimate does not appear to correct for the long-standing pattern of VA’s NRM spending exceeding its estimates and was based on a policy decision. In June 2011, we reported VA’s spending on NRM exceeded the estimates reported in VA’s budget justifications from fiscal years 2006 to 2010. More recently, we found that in fiscal year 2011 VA spent about $2 billion for NRM, which was $867 million more than estimated (see fig. 1). According to VA officials, NRM spending has exceeded estimates of needed funding in recent years because VA medical facilities have spent more funds on NRM projects that were originally expected to be spent on other activities— such as utilities, grounds maintenance, and janitorial services. This spending is consistent with VA’s authority to increase or decrease the amounts VA allocates from the Medical Facilities account for NRM and with congressional committee report language. When we asked VA officials if the fiscal year 2013 estimate addressed the historical discrepancies between amounts estimated and actual spending, VA officials said that all information was considered in developing the estimate. However, VA officials noted that the amount requested was a policy decision and did not specifically say whether these discrepancies were addressed. This explanation suggests that VA has not changed the way in which it determines the final NRM estimate; as we previously reported VA lowered its fiscal year 2012 estimate due to a policy decision to fund other initiatives. Because the fiscal year 2013 estimate of $710 million is significantly lower than past spending and lower than the estimate provided last year, it does not appear that medical facilities’ spending was addressed. Furthermore, VA estimates that the NRM backlog for health care facilities—which reflects the total amount needed to address facility deficiencies—will remain over $9 billion in fiscal year 2013. As such, the NRM information provided in VA’s budget justification may not be a reliable estimate of future spending for NRM. Regarding operational improvements, VA estimated savings for fiscal year 2013 using the same methodologies it used in the past, some of which we recently reported lacked analytical support or were flawed. The President’s budget request for fiscal year 2013 reflected VA’s estimate that it would save about $1.3 billion from the implementation of six operational improvements: Changing rates. Estimated savings from purchasing dialysis treatments and other care from civilian providers at Medicare rates instead of current community rates. Acquisitions. Estimated cost savings from changes to VA’s purchasing and contracting strategies. Fee Care. Estimated saving from purchasing care from non-VA providers at lower rates. Realigning clinical staff and resources. 
Realigning clinical staff and resources. Estimated savings by using less costly health care providers, such as licensed practical nurses instead of certain types of registered nurses.
Medical and administrative support. Estimated savings from employing resources more efficiently.
VA real property. Estimated savings from initiatives including repurposing vacant or underutilized buildings, decreasing energy costs, and changing procurement practices for building maintenance.
In a February 2012 report, we highlighted issues regarding VA's methodology for estimating savings from some operational improvements, including changes to VA real property, medical and administrative support activities, and the realignment of clinical staff and resources. We also recommended that VA develop a sound methodology for estimating savings from its operational improvements. In response, VA concurred with the recommendation except for two real property initiatives, for which VA maintained that the savings estimates were not flawed. However, since our February report was issued after the release of the President's budget request for fiscal year 2013, VA has not yet implemented our recommendation. VA officials told us during the course of our current review that the agency is taking steps to address deficiencies in the methodology used for estimating savings for some of its operational improvements. Without a sound methodology, VA runs the risk of falling short of its estimated savings, which may ultimately require VA to make difficult trade-offs to provide health care services with the available resources. We determined that the estimates for some of the operational improvements provided in VA's budget justification may not be reliable estimates of future savings and are therefore of limited use for decision makers. VA's budget justification is intended to provide Congress with estimates of resource needs and what the agency plans to achieve with requested appropriations. Our work shows that changes in the way that VA estimates and reports its required resources are responsible for the increase in the President's fiscal year 2013 budget request for VA health care when compared to last year's advance appropriations request for the same year. However, VA was not transparent in its budget justification about two of the factors that accounted for the change in VA's initiatives and ongoing health care services estimates. By neither disclosing that it used a new reporting approach for initiatives nor indicating that its advance appropriations requests did not include funding for continuing initiatives, VA did not provide Congress with information relevant to understanding these estimates. In addition, VA may not have provided Congress reliable information with which to make decisions regarding VA's appropriations with respect to NRM and some operational improvements. VA's most recent NRM estimates do not appear to correct for the long-standing pattern in which VA's NRM spending exceeds VA's NRM estimates. VA's estimates have not consistently accounted for additional spending by VA medical facilities. As a result, the NRM estimates may be unreliable, as they may continue to underestimate VA's future spending for NRM. Also, VA continued to use flawed methodologies we identified in our prior work to develop savings estimates for operational improvements. We continue to believe that VA should improve its methodology as we previously recommended. Until these issues are addressed, VA's estimates of NRM and operational improvements are of limited use for decision makers.
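The size of the NRM gap can be seen by backing the fiscal year 2011 estimate out of the figures cited above. The sketch below is our own illustration using the report's rounded amounts:

```python
# Backing the fiscal year 2011 NRM estimate out of the figures cited above
# (amounts in millions of dollars, rounded as reported).

fy2011_actual = 2000  # VA spent about $2 billion on NRM in fiscal year 2011
fy2011_overage = 867  # $867 million more than estimated
fy2011_estimate = fy2011_actual - fy2011_overage
print(f"Implied fiscal year 2011 NRM estimate: ${fy2011_estimate} million")  # ~$1,133 million

fy2013_estimate = 710  # VA's fiscal year 2013 NRM estimate
gap_vs_actual = fy2011_actual - fy2013_estimate
print(f"FY2013 estimate vs. FY2011 actual spending: ${gap_vs_actual} million lower")  # ~$1,290 million
```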
To improve the transparency and reliability of information presented in VA's congressional budget justifications that support the President's budget request for VA health care, we recommend that the Secretary of VA take the following three actions:
State in future budget justifications whether the estimates for initiatives include funding for ongoing health care services.
State in future budget justifications whether the estimates for initiatives in support of the advance appropriations request reflect all the funding that may be required if all initiatives are to be continued.
Reflect in future budget justifications estimates of annual resource needs for NRM that fully account for resources that VA medical facilities have consistently spent for this purpose.
We provided a draft of this report to the Secretary of VA and the Acting Director of OMB for comment. In its written comments, signed by the Chief of Staff and reprinted in appendix I, VA concurred with two of our three recommendations but did not concur with our recommendation related to the estimates for initiatives that support VA's advance appropriations requests. In addition, VA stated in its comments that one aspect of our report is not accurate, that it disagrees with a second, and that it had concerns about a third. OMB staff provided a technical comment, which we incorporated. VA concurred with our first recommendation regarding funding for ongoing health care services. VA noted that to implement this recommendation it will include in future budget justifications a narrative description stating whether the estimates for initiatives include funding for ongoing health care services. VA also concurred with our third recommendation regarding its estimates for NRM. VA noted that to implement this recommendation, it will reflect in future budget justifications annual estimates of resources needed for NRM that are consistent with policy decisions and account for past spending on NRM. VA did not concur with our second recommendation, related to the estimates for initiatives that support VA's advance appropriations requests. VA stated that it did not concur because the recommendation is not consistent with its multiyear approach to budgeting for advance appropriations, in which VA estimates what the agency calls essential initial funding for the advance appropriations year. VA then estimates full funding for initiatives in the next year based on updated information. However, we do not address VA's approach to budgeting in this report, and our recommendation is that VA state in future budget justifications whether the estimates for initiatives in support of the advance appropriations request reflect all the funding that may be required if all initiatives are continued. VA's comments indicate that not all funding for these initiatives is included in the advance appropriations estimates. VA could implement our recommendation by making a statement to this effect in its budget justification. In addition to its comments on our recommendations, VA had comments on three sections of our report. VA questioned the accuracy of our assertion that VA did not disclose the new reporting approach for its initiatives, which included estimates for certain ongoing health care services. VA stated that a table footnote in its budget justification explained that the estimates for initiatives for fiscal years 2012, 2013, and 2014 represented total funding.
We do not believe that a table footnote in a document of nearly 400 pages provides adequate transparency in explaining a change of more than $1 billion that resulted from a new reporting approach. Moreover, because the footnote does not explain that this approach is new or that the estimate for ongoing services was also affected, we continue to believe that the transparency of VA's reporting could be improved. We support VA's plans to include an expanded narrative regarding its approach to reporting estimates for initiatives in future budget justifications and believe this will enhance transparency. VA also indicated that it disagreed with what it characterized as our assertion that Congress cannot use VA's estimates of costs for ongoing health care services or initiatives due to a lack of transparency. As evidence, VA pointed to the detailed information presented in the budget justification and described how the estimates are determined, including a description of the actuarial model it uses. However, we did not state that Congress cannot use VA's estimates for the cost of ongoing health care services and initiatives. Instead, we identified that the lack of transparency regarding the factors that changed VA's estimates for ongoing health care services and initiatives resulted in unclear information for congressional deliberation. VA's concurrence with two of our recommendations, and its implementation of them, would address the concerns we raised and improve the transparency of the information that VA provides to Congress in its annual budget justifications. VA also expressed concern that our conclusions cast doubt on its strong commitment to stewardship of resources. VA noted that the agency and its resources need to be flexible and responsive to changes in veterans' medical care needs, which may occur after its budget estimates are formulated. VA has the authority to respond to such changes. We have pointed out, for example, that VA's NRM spending is consistent with its authority to increase and decrease the amounts VA allocates from the Medical Facilities account for NRM. However, with regard to NRM, the long-standing pattern in which NRM spending has significantly exceeded VA's estimates needs to be better accounted for in VA's budget estimates. Doing so will not decrease VA's flexibility to be responsive to veterans' needs. Moreover, we believe that VA's plans to address our recommendation will provide Congress with more reliable estimates with which to make decisions about VA appropriations. We are sending copies of this report to the Secretary of Veterans Affairs, the Acting Director of the Office of Management and Budget, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Randall B. Williamson at (202) 512-7114 or [email protected], or Melissa Emrey-Arras at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contacts named above, James C. Musselwhite and Melissa Wolf, Assistant Directors; Kye Briesath, Deirdre Brown, Krister Friday, Lauren Grossman, Aaron Holling, Wati Kadzai, and Lisa Motley made key contributions to this report.
VA Health Care: Estimates of Available Budget Resources Compared with Actual Amounts. GAO-12-383R. Washington, D.C.: March 30, 2012.
VA Health Care: Methodology for Estimating and Process for Tracking Savings Need Improvement. GAO-12-305. Washington, D.C.: February 27, 2012.
Department of Veterans Affairs: Issues Related to Real Property Realignment and Future Health Care Costs. GAO-11-877T. Washington, D.C.: July 27, 2011.
Veterans' Health Care Budget Estimate: Changes Were Made in Developing the President's Budget Request for Fiscal Years 2012 and 2013. GAO-11-622. Washington, D.C.: June 14, 2011.
Veterans' Health Care: VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Inform the President's Budget Request. GAO-11-205. Washington, D.C.: January 31, 2011.
VA Health Care: Spending for and Provision of Prosthetic Items. GAO-10-935. Washington, D.C.: September 30, 2010.
VA Health Care: Reporting of Spending and Workload for Mental Health Services Could Be Improved. GAO-10-570. Washington, D.C.: May 28, 2010.
Continuing Resolutions: Uncertainty Limited Management Options and Increased Workload in Selected Agencies. GAO-09-879. Washington, D.C.: September 24, 2009.
VA Health Care: Challenges in Budget Formulation and Issues Surrounding the Proposal for Advance Appropriations. GAO-09-664T. Washington, D.C.: April 29, 2009.
VA Health Care: Challenges in Budget Formulation and Execution. GAO-09-459T. Washington, D.C.: March 12, 2009.
VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement. GAO-09-145. Washington, D.C.: January 23, 2009.
Federal Real Property: Progress Made in Reducing Unneeded Property, but VA Needs Better Information to Make Further Reductions. GAO-08-939. Washington, D.C.: September 10, 2008.
VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement. GAO-06-958. Washington, D.C.: September 20, 2006.
The Veterans Health Care Budget Reform and Transparency Act of 2009 requires GAO to report on the President's annual budget request to Congress for VA health care services. GAO's previous work found that VA's NRM spending exceeded its estimates in recent years and that some of VA's estimates of savings from operational improvements lacked analytical support or were flawed. Building on GAO's past work and the President's most recent request for VA health care, this report examines (1) key changes to the fiscal year 2013 budget request compared to the 2013 advance appropriations request, and certain aspects of the fiscal year 2014 advance appropriations request and supporting estimates; and (2) whether the issues GAO identified regarding NRM and operational improvements continue in the estimates for the most recent request. GAO reviewed the President's budget request, VA's budget justification, and VA data. GAO interviewed VA officials and staff from the Office of Management and Budget. The President's fiscal year 2013 budget request for the Department of Veterans Affairs' (VA) health care services was $165 million more than the earlier advance appropriations request for the same year. This request reflected a $2 billion increase for initiatives and a $2.1 billion decrease for ongoing health care services, for a net decrease of $110 million in expected obligations. This decrease partially offset a $275 million decline in anticipated resources available to VA, resulting in the net increase of $165 million in the President's request (a worked reconciliation of these figures follows this summary). Two of the three factors that accounted for most of these changes were not transparent. First, VA used a new reporting approach for initiatives that combined funding for initiatives and funding for certain ongoing health care services in its initiatives estimate. Previously, VA had reported only funding it identified for initiatives during that year. This new reporting approach resulted in an increase in VA's initiatives estimate and a commensurate decrease in VA's ongoing services estimate. VA officials told GAO that this change was made to be more transparent about the total funding needed for initiatives. However, because VA did not disclose this change in its budget justification, VA has not made it transparent that its initiatives estimate is greater and its ongoing health care services estimate is lower than they would have been using VA's past approach. Second, VA included additional funding in its initiatives estimate, in part, to fund initiatives that were not identified in the fiscal year 2013 advance appropriations request. VA also did not make transparent in its budget justifications that some initiatives identified in its fiscal years 2013 and 2014 advance appropriations requests may require additional funding if the initiatives are continued. The lack of transparency regarding VA's estimates for initiatives and ongoing health care services results in unclear information for congressional deliberation. The issues GAO previously identified related to NRM (non-recurring maintenance), such as renovations and other improvements of VA medical facilities, and operational improvements remain. VA's fiscal year 2013 estimate for NRM—$710 million—does not appear to correct for the long-standing pattern in which VA's NRM spending exceeds VA's NRM estimates. For example, in fiscal year 2011 VA spent about $2 billion for NRM, which was $867 million more than estimated.
According to VA officials, this pattern has occurred because VA medical facilities have spent funds on NRM projects that were originally expected to be spent on other activities—such as utilities, grounds maintenance, and janitorial services—which is consistent with VA's authority to allocate its appropriations. When GAO asked whether the fiscal year 2013 estimate addressed the historical discrepancies between estimated and actual NRM spending, VA officials said that all information was considered in developing the estimate. However, they noted that the final estimate was a policy decision and did not say specifically whether these discrepancies were addressed. Regarding operational improvements, VA estimated savings for fiscal year 2013 using the same methodologies it used in the past, some of which GAO previously found lacked analytical support or were flawed. GAO previously recommended that VA develop a sound methodology for estimating savings from its operational improvements, which, according to officials, VA is addressing for future estimates. Until these issues are addressed, VA's estimates of NRM and operational improvements may not be reliable and are of limited use for decision makers. GAO recommends that VA state in its budget justification whether the estimates for initiatives include funding for ongoing services and whether its advance appropriations request reflects funding that may be required if initiatives are continued. GAO also recommends that VA's NRM estimates fully account for the long-standing pattern of medical facilities spending more on NRM than originally expected. VA concurred with all but the recommendation on advance appropriations, which GAO believes is needed to improve transparency.
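The arithmetic behind the $165 million net increase can be laid out explicitly. The following is a minimal reconciliation using the rounded figures reported in this summary; note that the $2 billion and $2.1 billion amounts are rounded, and their unrounded values net to the reported $110 million decrease:

```latex
% Reconciliation of the change in the fiscal year 2013 request
% (figures as reported above; the billion-dollar amounts are rounded)
\begin{align*}
\Delta\,\text{obligations} &= \underbrace{+\$2.0\,\text{B}}_{\text{initiatives}}
  \;\underbrace{-\;\$2.1\,\text{B}}_{\text{ongoing services}}
  \approx -\$110\,\text{M},\\
\Delta\,\text{request} &= \Delta\,\text{obligations} - \Delta\,\text{resources}
  = (-\$110\,\text{M}) - (-\$275\,\text{M}) = +\$165\,\text{M}.
\end{align*}
```

In words: expected obligations fell by $110 million, but the resources expected to be available fell by $275 million more than that, so the appropriation requested to cover the difference rose by $165 million.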
Congress passed the Overseas Differentials and Allowances Act of 1960 (hereinafter referred to as the Act) to (1) provide a means for more effectively compensating government employees for the extra costs and hardships associated with overseas assignments; (2) provide for the uniform treatment of government employees stationed overseas; (3) establish the basis for more efficient and equitable administration of the laws compensating government employees who are assigned overseas; and (4) facilitate government recruitment and retention of the best qualified employees for civilian employment overseas. The Act authorized the granting of LQA whenever government-owned or government-leased housing is not provided free of cost to an employee assigned overseas. LQA is intended to reimburse employees for the costs incurred for rent, heat, light, fuel, gas, electricity, and water. LQA is generally intended as a recruitment incentive to encourage individuals who are recruited by federal agencies in the United States—hereinafter referred to as "U.S. hires"—to live and work overseas for a limited period of time. However, under certain circumstances the DSSR also permits federal agencies to provide LQA to employees recruited and hired overseas. Specifically, the DSSR allows LQA to be granted to employees hired overseas provided that the following eligibility requirements are met:

- the employee's actual place of residence overseas where LQA is to be granted can be fairly attributable to employment by the federal agency that is hiring him or her;
- prior to appointment by the federal agency, the employee was recruited in the United States or a U.S. territory by the U.S. government, including by the military; by a U.S. firm, organization, or interest; by an international organization in which the U.S. government participates; or by a foreign government;
- the employee must have been in "substantially continuous employment by such employer"; and
- the employee must have been authorized by such employer to receive paid transportation back to the United States or U.S. territory after the conclusion of his or her overseas employment.

According to DSSR § 013, the head of a federal agency may issue further implementing regulations within the scope of the DSSR. According to OPM compensation claim decisions (OPM File Number 11-0037 (July 11, 2012); OPM File Number 12-0019 (Oct. 9, 2012)), implementing regulations such as DOD's LQA Instruction may impose additional requirements to further restrict LQA eligibility, but those requirements may not exceed the scope of the DSSR and may not be applied unless the employee has first met the basic DSSR eligibility criteria. State, OPM, and DOD have varying roles and responsibilities related to LQA for civilian employees assigned overseas, as shown in figure 1. DOD's LQA Instruction was last revised in February 2012. According to DCPAS officials, the primary reason for the revision was to extend eligibility for overseas allowances and differentials to same-sex domestic partners of civilian employees and their children, to comply with a 2010 Presidential Memorandum. The secondary reason for the revision, according to DCPAS officials, was to add a requirement that the heads of DOD components conduct ongoing quality assurance reviews to verify that foreign allowance and differential payments are consistent with applicable statutory and regulatory provisions.
The addition of this requirement was partly in response to a DOD Inspector General's report in August 2010 that found that the Office of the Deputy Assistant Secretary of Defense for Civilian Personnel Policy did not provide uniform guidance to the human resource offices of DOD components to ensure that they authorized overseas allowances and differentials accurately and consistently. The 2012 revision to the LQA Instruction also assigned responsibility to the Deputy Assistant Secretary of Defense for Civilian Personnel Policy for developing, revising, and monitoring the implementation of overseas allowance and differential policies and procedures. According to DOD's LQA Instruction, overseas allowances and differentials, including LQA, are neither automatic salary supplements nor entitlements that are automatically granted to all employees who meet eligibility requirements. The LQA Instruction states that allowances and differentials are specifically intended to be recruitment incentives for U.S. citizens who are civilian employees living in the United States to accept federal employment overseas, and that ordinarily, if a person is already living overseas, that inducement is unnecessary. DOD's LQA Instruction defines a "U.S. hire" as a person who physically resided permanently in the United States from the time he or she applied for employment until and including the date he or she accepted a formal offer of employment. DOD's LQA Instruction also permits LQA and other allowances in certain circumstances to be granted to employees hired overseas when those employees meet eligibility requirements. DOD components are responsible for making LQA eligibility determinations for individual job applicants or employees, and for ensuring that employees are paid LQA properly, in accordance with DOD's LQA Instruction and the DSSR. For example, each military service has multiple human resource offices within the geographic combatant commands' areas of responsibility that hire and work with employees regarding personnel issues. The level at which LQA eligibility is determined varies by military service and its human resource office structure. For example, officials from the Army's local human resource offices propose initial LQA eligibility determinations to the Army's Civilian Human Resources Agency's regional office, which makes the final eligibility determinations. In contrast, Navy officials stated that Navy local human resource offices in the EUCOM area of responsibility have full responsibility to make LQA eligibility determinations, although the local offices can contact the Navy Installation Command's regional human resource office in Europe if there are questions associated with a particular LQA eligibility determination. The defense agencies and field activities have a much smaller overseas presence than the military services and generally centralize LQA eligibility determinations at the headquarters level. Table 1 shows the number of DOD employees who received LQA during fiscal years 2011 through 2014, as well as the total amount of LQA payments. Prior to the 2008 and 2011 OPM compensation claim decisions that discuss the single employer interpretation, many DOD components interpreted the LQA Instruction and the DSSR as authorizing LQA for employees hired overseas in situations of continuous employment with multiple employers, rather than a single employer. In May 2011, U.S.
Army in Europe, in response to the 2011 OPM decision, concluded that its LQA eligibility determinations had been inconsistent with OPM's single employer interpretation of the DSSR. In May 2012, EUCOM requested authorization from the Under Secretary of Defense for Personnel and Readiness to continue LQA for not longer than 12 months for employees working for DOD components in the EUCOM area of responsibility who were currently receiving the allowance, but who did not meet OPM's single employer interpretation. EUCOM's request prompted the Acting Principal Deputy Under Secretary of Defense for Personnel and Readiness to issue a memorandum on January 3, 2013, directing all DOD components to conduct an audit of all "locally hired overseas employees" (that is, employees hired overseas) currently receiving LQA. The audit results showed that 680 DOD civilian employees were considered to have been "erroneously paid LQA after having been hired overseas," including 444 who were identified in the audit as being ineligible for LQA because of inconsistency with the single employer interpretation. Appendix II provides additional information on the results of DOD's 2013 audit. DOD, through its components and DCPAS, has taken some steps to clarify LQA eligibility requirements, particularly as they relate to the single employer interpretation and the definition of a U.S. hire. For example, DCPAS is drafting an update to DOD's LQA Instruction that will address the single employer interpretation. Also, the Office of the Under Secretary of Defense for Personnel and Readiness has issued a memorandum, and DCPAS has issued a point paper and fielded questions from DOD components about individual employees, to clarify LQA eligibility requirements. In addition, in September 2013, the Deputy Assistant Secretary of Defense for Civilian Personnel Policy issued a policy advisory to clarify the definition of a U.S. hire. However, the policy advisory's definition of a U.S. hire appears to conflict with OPM's interpretation of the DSSR. DCPAS is currently consulting with DOD components to decide whether to continue using the policy advisory's definition, which could prompt it to discuss possible revisions to the DSSR with State, or to invalidate that definition. Single Employer Interpretation. The January 2013 memorandum that initiated the 2013 LQA audit stated that employees hired overseas after working for more than one employer are not eligible to receive LQA. DCPAS also disseminated a point paper to DOD components in April 2013 that acknowledged the components had been incorrectly interpreting the DSSR and further reinforced the single employer interpretation as the correct interpretation. In the point paper, DCPAS also provided examples of employee categories that did not meet the single employer interpretation and thus were not eligible for LQA. For example, the point paper clarified the status of military members who separated from service in a location outside the United States, were employed in a federal civilian position and properly provided LQA, left for employment with a contractor(s), and subsequently returned to a federal civilian position with DOD. The point paper stated that it does not matter that such employees properly received LQA during their initial civilian employment. Based upon the clarified definition of "substantially continuous employment by such employer," these employees had intervening employment and were not eligible for LQA upon appointment to the subsequent period of federal civilian employment with DOD.
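To make the single employer interpretation concrete, the decision logic described in the point paper can be sketched in code. The following is a minimal, purely illustrative sketch under our own assumptions: the data structure, field names, and function are hypothetical and do not appear in DOD's LQA Instruction, the DSSR, or any DOD system.

```python
# Illustrative sketch of OPM's single employer interpretation as described
# above. All names and structures here are hypothetical, not drawn from
# DOD's LQA Instruction or the DSSR.
from dataclasses import dataclass

@dataclass
class EmploymentPeriod:
    employer: str
    recruited_in_us: bool          # recruited in the United States or a U.S. territory
    return_transport_agreed: bool  # employer agreed to paid return transportation

def meets_single_employer_interpretation(history: list[EmploymentPeriod]) -> bool:
    """history lists the applicant's overseas employment from the employer
    that recruited him or her through the period immediately preceding the
    DOD civilian appointment, oldest first."""
    if not history:
        return False
    first = history[0]
    # The recruiting employer must have recruited the applicant in the
    # United States and agreed to paid return transportation.
    if not (first.recruited_in_us and first.return_transport_agreed):
        return False
    # "Substantially continuous employment by such employer": the applicant
    # must have stayed with that single employer, with no intervening
    # employment, until the DOD civilian appointment.
    return all(p.employer == first.employer for p in history)

# The point paper's example: a separated military member who held a DOD
# civilian position, left for contractor work, and then sought a second DOD
# civilian position had intervening employment and fails the check, even
# though LQA was properly paid during the first civilian position.
history = [
    EmploymentPeriod("U.S. military", True, True),
    EmploymentPeriod("DOD civilian", False, True),
    EmploymentPeriod("defense contractor", False, False),
]
assert not meets_single_employer_interpretation(history)
```

The key design point the sketch captures is that eligibility is evaluated against the one employer that recruited the applicant from the United States; any switch of employer before the DOD appointment, however brief, breaks the chain.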
In addition, DCPAS officials have periodically assisted officials from DOD components’ headquarters if they have questions regarding LQA policies. In particular, officials from two military departments told us that DCPAS officials will answer questions about LQA eligibility determinations for individual employees and will respond to inquiries about interpretations of DOD’s LQA Instruction. DCPAS is drafting an update to DOD’s LQA Instruction that will address the single employer interpretation. The LQA Instruction was last revised in February 2012, prior to the 2013 LQA audit, and therefore does not reflect, among other things, the single employer interpretation. Specifically, DOD’s LQA Instruction currently does not state that, to be eligible for LQA, an employee must have remained with the same employer that recruited him or her from the United States, and must have had no intervening employment, prior to his or her DOD civilian position. Officials from several DOD components told us that it would be helpful to have an updated LQA Instruction that reflects the single employer interpretation to minimize potential misinterpretations. According to DCPAS officials and a February 2015 draft LQA Instruction we reviewed, DOD’s updated LQA Instruction will address the single employer interpretation and other LQA eligibility requirements, such as former military members’ eligibility to receive LQA immediately after separating from the military. However, the draft LQA Instruction we reviewed also includes a proposal to eliminate LQA for DOD civilian employees hired overseas unless a Service Secretary or equivalent grants an exception. If DOD adopts this proposal, the number of instances where DOD would apply the single employer interpretation to determine LQA eligibility would likely be significantly reduced. DCPAS officials told us that, at the direction of the Deputy Assistant Secretary of Defense for Civilian Personnel Policy, they began a review of DOD’s LQA Instruction in fall 2013. As part of this review, DCPAS officials solicited informal comments and recommendations from DOD components on the draft LQA Instruction. DCPAS officials stated that they sent the draft LQA Instruction to DOD components for initial informal comments in early 2014. These officials explained that they then transitioned their focus to a larger effort reviewing all DOD guidance related to overseas civilian employees, and the update to the LQA Instruction was incorporated into this effort. According to the officials, DOD’s draft LQA Instruction was sent in February 2015 to DOD components for additional informal comments. DCPAS officials stated that they expect DOD’s updated LQA Instruction to be finalized and released in late 2015. Officials from military service component commands in the EUCOM area of responsibility told us that they are waiting for DCPAS to release the updated LQA Instruction before they issue their own updated LQA guidance. In addition, DOD components have taken steps to clarify LQA eligibility requirements, including issuing memorandums and adopting new procedures for making LQA eligibility determinations. Prior to the 2013 LQA audit, U.S. Army in Europe and U.S. Air Forces in Europe modified their interpretation of LQA eligibility requirements to be consistent with the single employer interpretation. For example, U.S. 
Army in Europe issued a memorandum in January 2012 stating that six months earlier it had begun applying the single employer interpretation when making LQA eligibility determinations, and that the single employer interpretation was consistent with the DSSR. Also, U.S. Air Forces in Europe revised its guidance in October 2012 to emphasize that "substantially continuous employment by such employer" is restricted to the single employer that initially recruited the employee from the United States. After the 2013 LQA audit, some DOD components adopted new procedures for making LQA eligibility determinations with the intent of ensuring that civilian employees hired overseas met the single employer interpretation. For example, prior to the 2013 LQA audit, LQA eligibility determinations for Army overseas civilian employees were made at the local overseas human resource office level. After the audit, the Army's Civilian Human Resources Agency established the following multitier review for making LQA eligibility determinations in each of its overseas regions:

1. A human resource specialist at the local human resource office reviews an LQA questionnaire filled out by the job applicant to ensure that the applicant meets the LQA eligibility criteria found in both the DSSR and DOD's current LQA Instruction.
2. The human resource specialist's supervisor conducts another review. If both of the local officials agree with regard to the job applicant's eligibility for LQA, the applicant's LQA questionnaire is forwarded to the overseas regional office.
3. An LQA subject matter expert at the regional office conducts an additional review.
4. A senior-level LQA subject matter expert at the regional office conducts a final review before sending the final determination of the job applicant's eligibility to receive LQA back to the local human resource office.

Army officials explained that if at any point there is disagreement among the officials regarding the eligibility assessment, the officials will discuss the job applicant's LQA questionnaire to reach consensus. Similarly, U.S. Air Forces in Europe developed a flow chart to help local human resource specialists determine whether overseas job applicants are eligible for LQA. For example, the flow chart seeks to determine whether an applicant is a contractor or a separated military member, and whether he or she was recruited in the United States. Definition of a U.S. Hire. In an August 2013 memorandum, the Army Civilian Human Resources Agency requested that Department of the Army headquarters provide clarification on the definition of a U.S. hire found in DOD's LQA Instruction. In the memorandum, the Army Civilian Human Resources Agency explained that the Department of the Army had been interpreting DOD's LQA Instruction to mean that a physical presence overseas during the time of recruitment (that is, from the time of application to a job offer), for any reason, disqualified a job applicant from meeting the definition of a U.S. hire. The Army Civilian Human Resources Agency memorandum stated that this interpretation of a U.S. hire did not appear to be logical, and cited a recent case as an example. After receiving the Army Civilian Human Resources Agency's request for clarification, in September 2013 the Deputy Assistant Secretary of Defense for Civilian Personnel Policy sent a policy advisory to the Department of the Army with guidance on how to define a U.S. hire.
The policy advisory stated that an individual may still be considered a U.S. hire even though he or she may have left the United States for a short period of time, and it provided examples of such scenarios. Specifically, it clarified that a job applicant should be considered to physically reside in the United States, and considered a U.S. hire, if he or she takes a vacation outside the United States, travels outside the United States on a temporary duty assignment, or is deployed overseas as a reservist or National Guard member during the time of recruitment. In addition, the policy advisory stated that reservists and National Guard members deployed overseas benefit from the provisions afforded by the Uniformed Services Employment and Reemployment Rights Act when determinations are made as to whether they are recruited from the United States, and they should be allowed employment benefits that would accrue as if a deployment had not occurred. Regarding the issue of U.S. hire, the policy advisory stated that it was DCPAS's intent for personnel physically residing in the United States before being deployed overseas to be considered for LQA eligibility as if they were not deployed. The definition of a U.S. hire in DOD's September 2013 policy advisory appears to conflict with OPM's interpretation of the DSSR in compensation claim decisions since at least 2012 (OPM File Number 11-0037 (July 11, 2012)). In a May 2014 OPM compensation claim decision, OPM further clarified that an employee must be physically residing in the United States during recruitment to be considered a U.S. hire. Specifically, OPM stated that the DSSR does not exempt particular categories of employees, such as military reservists mobilized overseas, from the DSSR's requirements for a U.S. hire. Thus, federal agencies cannot exempt categories of employees in their implementing regulations, since that would exceed the scope of the DSSR. Officials from one military service component command in Europe told us they had seen and implemented the policy advisory, but later advised their local human resource offices to disregard it when they discovered OPM compensation claim decisions that they felt conflicted with the policy advisory's definition of a U.S. hire. Because the September 2013 policy advisory was not disseminated department-wide and was not consistently interpreted, DOD components may have differed in how they determined whether those applying for a civilian position overseas were "U.S. hires" and thus eligible for LQA. When asked about the apparent conflict between OPM's interpretation of a U.S. hire and DOD's September 2013 policy advisory, DCPAS officials told us that they did not initially believe the policy advisory conflicted with the definition of a U.S. hire in the DSSR or with OPM's interpretation. However, officials told us that they have since recognized such a conflict may exist. As a result, DCPAS is currently soliciting DOD components' views on the definition of a U.S. hire. DCPAS officials stated that if DOD components support the definition in DOD's policy advisory, then DCPAS will determine with senior DOD officials whether to discuss the matter with State and request a revision to the DSSR that reflects the definition in DOD's policy advisory. (This issue is discussed in greater detail later in the report.)
They explained that if the components or senior officials do not support the definition in DOD's policy advisory, then the definition in the updated LQA Instruction will invalidate the policy advisory's interpretation and no further action would be necessary. As of April 2015, DCPAS had not yet decided which definition will be included in DOD's updated LQA Instruction. According to DCPAS officials, they discussed the definition of a U.S. hire with OPM in April 2015, at which time OPM officials agreed that DCPAS should discuss with State revising the DSSR if DOD continues to use its September 2013 policy advisory. OPM officials explained to us that they informed DCPAS officials at that meeting that OPM would continue to apply its interpretation of a U.S. hire until or unless State revised the DSSR. DOD has not monitored DOD components' LQA eligibility determinations for civilian employees overseas to help ensure the consistent application of DOD's LQA Instruction across the department. According to DOD Instruction 1400.25, Volume 100, DOD Civilian Personnel Management System: General Provisions, the Deputy Under Secretary of Defense for Civilian Personnel Policy is responsible for monitoring the implementation and effectiveness of DOD's civilian personnel management, including DOD's LQA Instruction. This requirement was the basis for a 2010 recommendation from the DOD Office of the Inspector General that the Deputy Under Secretary of Defense for Civilian Personnel Policy conduct periodic quality assurance reviews. In implementing the Office of the Inspector General's recommendation, DCPAS charged the heads of DOD components with conducting periodic quality assurance reviews. However, DCPAS officials told us that they have not monitored the LQA eligibility determinations of DOD components, indicating that it is the responsibility of the components to do so. DOD's LQA Instruction includes a requirement that the heads of DOD components conduct ongoing quality assurance reviews to verify that overseas allowance and differential payments are proper and consistent with applicable statutory and regulatory provisions. However, according to DCPAS officials, the heads of DOD components have not consistently conducted these reviews because of the 2013 LQA audit. DCPAS officials explained that they became aware of issues with LQA eligibility determinations in the EUCOM area of responsibility in approximately May 2012, shortly after the February 2012 LQA Instruction with the new requirement for ongoing quality assurance reviews was issued. DCPAS officials chose not to have DOD components begin conducting the periodic quality assurance reviews because they anticipated that the components would soon be involved in what became the 2013 LQA audit. DCPAS officials told us that they added this requirement in response to one of the recommendations in the DOD Inspector General's report (DOD Inspector General, Report No. D-2010-075, Foreign Allowances and Differentials Paid to DOD Civilian Employees Supporting Overseas Contingency Operations (Aug. 17, 2010)), although it differs from the actual recommendation, which was for the Deputy Under Secretary of Defense for Civilian Personnel Policy to conduct the review of DOD components. The draft LQA Instruction would require the heads of DOD components to conduct annual audits of employees who receive overseas allowances and differentials, and then send a report with the audit results to the Deputy Assistant Secretary of Defense for Civilian Personnel Policy by March of each year.
However, there is no specific requirement in the draft LQA Instruction to monitor DOD components' reviews to ensure that the components are accurately and consistently authorizing LQA as well as other overseas allowances and differentials. Standards for Internal Control in the Federal Government state that internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. Specifically, managers are to (1) promptly evaluate findings from audits and other reviews, (2) determine proper actions in response to findings and recommendations from audits and reviews, and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management's attention. Without monitoring DOD components' reviews, DCPAS cannot ensure that DOD components are making LQA eligibility determinations and payments in accordance with applicable statutory and regulatory provisions. DOD, through DCPAS, has not discussed its concerns related to the DSSR with State's Office of Allowances to determine whether LQA eligibility requirements should be revised, notwithstanding DOD components' concerns that some of those requirements are ambiguous or outdated. Of particular concern are the DSSR requirements related to "substantially continuous employment by such employer" and the definition of a U.S. hire, as discussed earlier. Officials from one military service component command, as well as regional and local human resource offices in the EUCOM and PACOM areas of responsibility, told us that the DSSR requirement for "substantially continuous employment by such employer" remains unclear, even after the 2013 LQA audit. In particular, the DSSR has not been modified to explicitly reflect OPM's single employer interpretation. Officials we spoke with at the service component command and local human resource office levels stated that even if DOD's LQA Instruction were updated to include the single employer interpretation, as previously discussed, it would still be helpful to revise the DSSR to clarify the phrase "substantially continuous employment by such employer," since local human resource specialists routinely use both the LQA Instruction and the DSSR to make LQA eligibility determinations. Officials from military department headquarters, military service component commands, and regional and local human resource offices in the EUCOM and PACOM areas of responsibility informed us that the DSSR's U.S. hire definition should be updated to reflect modern travel and Internet access realities, temporary duty assignments, and the overseas deployment of reservists and National Guard members. DCPAS's definition of a U.S. hire in its September 2013 policy advisory to the Army attempts to update the definition within DOD, as previously discussed, but it may also expand the DSSR's LQA eligibility requirements, thereby exceeding DOD's authority according to OPM's interpretation of the DSSR. State officials told us that, although there is no requirement for State to proactively review the DSSR and assess the need for revisions, there have been occasions when State collaborated with other federal agencies—including DOD—on eligibility issues for overseas allowances. According to State officials, they have done so when other federal agencies have initiated the collaboration.
For example, in 2014, DCPAS officials requested that State's Office of Allowances consider revising a provision within the DSSR relating to the separate maintenance allowance. Previously, if the overseas employee was a former military member whose family had access to military commissary and exchange facilities, the separate maintenance allowance provided was reduced by 10 percent. DCPAS officials communicated to State that former military members and their families appeared to be unfairly penalized by this requirement. In response to DCPAS's request, State reviewed the proposed revision and sent it to other federal agencies for comment. Neither the other federal agencies nor State's legal counsel had any substantive objections to the proposed amendment, so the DSSR was updated in January 2015. DCPAS officials told us they requested that State revise this provision of the DSSR because it was negatively affecting a specific class of DOD employees. State officials also told us that they initiated an internal review of the DSSR in early 2015. The review encompasses offices within State—such as the Office of Overseas Schools—and is intended to provide State's Office of Allowances with suggestions for revisions or updates to sections of the DSSR that those offices routinely use. Once the Office of Allowances receives all internal suggestions, officials told us that they intend to compile a list of proposed changes to the DSSR and share them with other agencies, including DOD, for comment. The officials told us that when they send this list to other agencies, they plan to ask whether the agencies have any other suggestions for revising the DSSR. State officials indicated that they expect this review to be completed by the end of 2015. Notwithstanding the example cited above, and the concerns about LQA eligibility raised by the DOD component officials with whom we spoke, DCPAS officials stated that they have not yet discussed with State their concerns related to the DSSR, particularly with regard to OPM's single employer interpretation and the definition of a U.S. hire. According to DCPAS officials, they did not feel the need to collaborate with State to discuss modifications to the requirement for "substantially continuous employment by such employer" in the DSSR because they believe that updating DOD's LQA Instruction would be sufficient to resolve any ambiguity. However, DOD component officials we interviewed who determine LQA eligibility told us that both updating DOD's LQA Instruction and revising the DSSR are needed to help ensure consistent eligibility determinations, because both documents are used in making such determinations. DCPAS officials also stated that they have not discussed with State their concerns with the DSSR's definition of a U.S. hire or requested a potential revision of the DSSR definition because, as previously discussed, they have not yet made a decision about whether the definition in the September 2013 policy advisory will be included in DOD's updated LQA Instruction. While federal agencies are not required to collaborate with State about questions related to LQA eligibility requirements, the Standards for Internal Control in the Federal Government state that, in addition to internal communications, management should ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders who may have a significant effect on the federal agency achieving its goals.
Additionally, in prior work, we have reported on leading practices for interagency collaboration. One of these practices is to establish compatible policies, procedures, and other means to operate across federal agency boundaries. Frequent communication among collaborating agencies is another means to facilitate working across agency boundaries. In the absence of DOD initiating a discussion with State about concerns related to DSSR LQA eligibility requirements for U.S. and overseas hires and whether they should be revised, State may not have the information it needs to determine whether the DSSR should be revised with regard to the "substantially continuous employment by such employer" and definition of a U.S. hire provisions. Until recently, OPM compensation claim decisions were not widely available to federal agencies, including DOD, or to the public. OPM officials told us that compensation claim decisions usually have been provided only to the claimant involved in the specific OPM compensation claim and to the office within a federal agency that issued the final agency-level decision. OPM officials added that, under some circumstances, they send compensation claim decisions to other offices higher in the employee's chain of command. To make compensation claim decisions more widely available, OPM maintains a public website on which it can post compensation claim decisions. This website is the primary means by which federal agencies can learn of compensation claim decisions involving another federal agency that could have implications for LQA eligibility determinations. However, until recently OPM had not posted its compensation claim decisions for the years 2003 through 2012 on its website. This is because, according to OPM officials, they lacked the funds needed during that 10-year period to comply with the statutory requirement that federal agencies make their electronic and information technology accessible to individuals with disabilities; OPM compensation claim decisions could not be posted unless they were compliant. These officials stated that they did not have the resources available to make the postings accessible to individuals with disabilities because funding for other OPM programs was prioritized ahead of funding for the updates necessary to comply with this requirement during that time. OPM officials stated that those compensation claim decisions are now compliant with the statutory requirement and are being posted. According to the officials, in June 2013 and August 2014, respectively, OPM posted all its compensation claim decisions from 2003 through 2012 and some decisions for 2013 and 2014. OPM officials told us that they are implementing a new web application for posting compensation claim decisions to the OPM website in a more timely manner. This web application—estimated to be fully functional by June 2015—will facilitate making compensation claim decisions accessible to individuals with disabilities when posted on OPM's website. However, OPM officials acknowledge that they still have a backlog of decisions that have not been posted online because the web application was not operational when those decisions were made. We found that, as of early May 2015, no new compensation claim decisions had appeared on OPM's website since June 2014, during which time OPM officials told us they had adjudicated 20 claims, over half of which were related to LQA.
In addition, OPM officials told us they have not yet developed timeframes for reviewing and posting individual compensation claim decisions online after these decisions have been made. For example, these officials have not determined how long it should reasonably take between the time OPM sends a decision to the claimant and to the office that issued the final agency-level decision, and the time the decision is posted on OPM's public website. Although some lag time is expected, since OPM must first notify the parties involved in the compensation claim before the decision can be made publicly available, OPM has not determined how to ensure that the time it takes to post decisions is of a reasonable length. Standards for Internal Control in the Federal Government state that agencies must identify, capture, and distribute pertinent information in a form and timeframe that allow their employees to perform their duties efficiently. Managers should also ensure an adequate means of communicating with and obtaining information from external stakeholders who may have a significant effect on achieving their federal agency's goals. Until OPM develops and implements timeframes for posting individual LQA-related compensation claim decisions online in a timely manner, it cannot ensure that agencies, including DOD, have access to the most up-to-date information to provide accurate guidance on issues relating to LQA eligibility determinations to their employees. While DCPAS has not generally disseminated OPM compensation claim decisions that may affect LQA eligibility determinations and guidance for how to apply those decisions, DCPAS officials told us they recently assigned an official responsibility for doing so. As previously discussed, OPM has made its compensation claim decisions from 2003 through 2012 and some decisions from 2013 and 2014 available on its website. OPM officials stated that they expect agencies—including DOD—to distribute OPM compensation claim decisions internally if they wish to do so. DCPAS officials agreed that their office should remain informed of OPM compensation claim decisions. According to OPM officials, while compensation claim decisions are binding at the individual case level, it is up to agencies to determine the extent to which the decisions necessitate broader policy changes. OPM officials also told us that it makes sense for agencies to reevaluate their policies if necessary in order to prevent future compensation claims and legal liabilities. We found that military service component commands and local and regional human resource offices in the EUCOM and PACOM areas of responsibility varied in how they viewed and applied OPM compensation claim decisions when making LQA eligibility determinations, even when they were in possession of the decisions. For example, some officials in the Navy stated that they do not consider it mandatory to incorporate OPM compensation claim decisions into their broader personnel policies, while other Navy officials stated their understanding was that DOD components were obliged to comply with all OPM compensation claim decisions relating to LQA eligibility. Similarly, some Air Force officials stated that they considered OPM decisions related to LQA eligibility to be binding, but recognized that other Air Force officials considered them discretionary.
DCPAS officials told us the recently assigned official will review OPM compensation claim decisions and determine the implications for DOD's implementation of LQA eligibility determinations. This official will then disseminate those decisions that affect LQA eligibility determinations to DOD components, providing views on each decision's implications and guidance for how components should use the decisions when making LQA eligibility determinations. This may help to reduce the confusion that exists among DOD components about how OPM compensation claim decisions should be applied when making LQA eligibility determinations. Since DOD's 2013 audit determined that 680 of its civilian employees assigned overseas had erroneously received LQA because of misinterpretations of eligibility requirements, DCPAS and DOD components have taken steps to clarify the eligibility requirements outlined in DOD's LQA Instruction. The steps include DCPAS drafting an update to the instruction that reflects OPM's single employer interpretation and coordinating with the components about their views on the definition of a U.S. hire. However, the Deputy Assistant Secretary of Defense for Civilian Personnel Policy, or DCPAS as delegated, has not carried out the responsibility for monitoring the implementation and effectiveness of DOD's LQA Instruction, including monitoring reviews of LQA eligibility determinations conducted by DOD components. Without fulfilling this responsibility, DOD cannot ensure the consistent application of LQA eligibility requirements throughout the department, and is at risk for future erroneous payments of this allowance to its civilian employees overseas. Additionally, agencies have missed opportunities to ensure consistency in LQA eligibility determinations. First, in light of State's willingness to discuss revisions to the DSSR if requested and State's ongoing DSSR review, DOD has an opportunity to work with State to ensure that State has the information it needs to determine whether the DSSR needs to be revised. Doing so would help reduce the risk of future misinterpretation of terms related to LQA eligibility requirements, including "substantially continuous employment by such employer" and "U.S. hire," thereby avoiding situations similar to that which led to the 2013 LQA audit. Second, OPM has made some progress that has resulted in the posting of compensation claim decisions from the past 10 years on its website, but more recent decisions have not yet been posted. Further, while OPM is about to launch a new web application to post compensation claim decisions, it has not yet established timeframes for posting its backlog of decisions and any future decisions to ensure that the website remains up to date. Unless OPM establishes timeframes for posting compensation claim decisions to its website, the number of unposted decisions could grow, leading to continued delays in agencies' ability to access the most recent decisions that may affect their LQA eligibility determinations.
To ensure that DCPAS and DOD components are determining LQA eligibility consistently with DOD's LQA Instruction, the DSSR, and OPM compensation claim decisions, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following two actions:

- require the Deputy Assistant Secretary of Defense for Civilian Personnel Policy or DCPAS, as delegated, to monitor reviews of LQA eligibility determinations conducted by DOD components; and
- discuss with State its concerns related to the DSSR to determine whether LQA eligibility requirements should be revised and then, as appropriate based on those discussions, request that State make any revisions deemed necessary, particularly with regard to the requirement for "substantially continuous employment by such employer" and the definition of a U.S. hire.

To ensure that agencies have access to recent OPM compensation claim decisions online, including those related to LQA, we recommend that the Director of OPM develop timeframes for posting its compensation claim decisions on OPM's public website. We provided a draft of this report to DOD, OPM, and State for review and comment. In written comments, which are summarized below and reprinted in appendix III, DOD concurred with the two recommendations directed to it. In its written comments, which are summarized below and reprinted in appendix IV, OPM concurred with the recommendation directed to it. State did not provide comments on the draft. In its written comments, DOD noted that it is in the process of revising DOD Instruction 1400.25, Volume 1250, which provides guidance for overseas allowances and benefits for civilian employees. Further, DOD indicated that it welcomes State's review of the DSSR and the opportunity to work with State on any proposed changes, including any changes DOD initiates. In its written comments, OPM noted that, in May 2015, it used its new web application to successfully post 13 LQA-related claim decisions. OPM expects to complete testing of the new web application in June 2015 and post the current backlog within two months after testing is complete. Thereafter, OPM expects to post completed cases monthly. OPM also provided technical comments, which we have incorporated into the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of State, the Director of OPM, and the Under Secretary of Defense for Personnel and Readiness. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-5741 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To conduct this review, we used the Defense Finance and Accounting Service's payroll data for fiscal years 2011 through 2014 to assess the number of employees who received a living quarters allowance (LQA), the total dollar amount spent for LQA, and the debt incurred for the employees determined by the Department of Defense's (DOD) 2013 LQA audit to have been erroneously paid LQA.
To assess the reliability of these data, we interviewed knowledgeable Defense Finance and Accounting Service officials about these data and performed electronic testing to identify obvious problems with completeness or accuracy. We found these data to be sufficiently reliable for the purposes of our report. See the Background section and appendix II of this report for additional information. To evaluate the extent to which DOD has clarified its LQA eligibility requirements and is monitoring its components' LQA eligibility determinations, we reviewed DOD Instruction 1400.25, Volume 1250, DOD Civilian Personnel Management System: Overseas Allowances and Differentials and a related September 2013 policy advisory. We also reviewed the Department of State (State) Standardized Regulations (DSSR) and selected Office of Personnel Management (OPM) compensation claim decisions to determine whether DOD had incorporated into its current LQA Instruction OPM's interpretation of the DSSR's requirement that federal overseas civilian employees be in "substantially continuous employment by such employer" prior to their current job. We interviewed officials from State's Office of Allowances and OPM's Merit System Accountability and Compliance division on OPM's interpretation of key LQA eligibility requirements. In addition, we assessed a DOD policy advisory on the definition of a U.S. hire to determine whether it was consistent with DOD's current LQA Instruction and the DSSR. We also interviewed Defense Civilian Personnel Advisory Service (DCPAS) officials to assess their oversight of DOD components' LQA eligibility determinations, including plans to conduct periodic audits. We also determined the status of DCPAS officials' efforts to update DOD's current LQA Instruction. We interviewed officials from the Army, Navy, Air Force, and Marine Corps who are involved in creating implementing guidance on LQA for the military services and providing support to the overseas officials who make LQA eligibility determinations, to identify potential challenges with implementing DOD's current LQA Instruction and recommendations for improving the instruction and the DSSR. We also interviewed similar officials from a DOD office, a DOD field activity, and a DOD agency that were identified in the 2013 LQA audit: the Office of the Under Secretary of Defense for Intelligence, the Department of Defense Education Activity, and the Defense Logistics Agency. We interviewed officials at U.S. European Command (EUCOM) and U.S. Pacific Command (PACOM) to identify potential effects that the 2013 LQA audit had on those commands' missions and readiness. We also interviewed officials involved in determining LQA eligibility from the Army, Navy, and Air Force service component commands in the EUCOM and PACOM areas of responsibility, as well as 15 of the 33 local human resource offices that report to those service component commands, to identify potential challenges in applying LQA eligibility requirements at the operational level. To select the local human resource offices we conducted interviews with, we selected a nongeneralizable sample of 15 of the 33 offices from service component commands in both the EUCOM and PACOM areas of responsibility, as well as offices from each of the military departments' service component commands. In developing our selection criteria, we chose local human resource offices with the most employees determined by the 2013 LQA audit to have been erroneously paid LQA.
We also chose a set of local human resource offices with relatively few such employees, which allowed us to identify any variation in how eligibility determinations were made between them and the other local human resource offices with the most employees determined to have been erroneously paid. Specifically, we interviewed officials from nine human resource offices in the EUCOM area of responsibility and six human resource offices in the PACOM area of responsibility, which included seven Army human resource offices, five Air Force offices, and three Navy offices. While the results of these interviews are not representative of all offices, they provide valuable insights. To evaluate the extent to which DOD, State, and OPM have helped ensure consistency in the interpretation of LQA eligibility requirements, we interviewed officials from DCPAS, DOD components, and the local human resource offices to determine the extent to which DOD has communicated with State and received and disseminated OPM compensation claim decisions with implementation instructions during and since the 2013 LQA audit. We evaluated DCPAS’s process for collaborating with State regarding the DSSR’s definition of a U.S. hire and “substantially continuous employment by such employer.” We also interviewed officials from State’s Office of Allowances on the extent to which they communicated with DCPAS on issues related to LQA eligibility requirements and the process for updating the DSSR. We assessed DOD’s collaboration with State against Standards for Internal Control in the Federal Government. To evaluate DOD’s process for receiving and disseminating OPM compensation claim decisions related to LQA eligibility requirements, we evaluated the extent to which DCPAS, DOD components, and the local human resource offices we interviewed receive, share, and incorporate OPM compensation claim decisions into LQA eligibility determinations. We also interviewed officials from OPM’s Merit System Accountability and Compliance division to assess the process for adjudicating OPM compensation claim decisions and procedures for disseminating those decisions to agencies and publicly on OPM’s website. We conducted this performance audit from July 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. In response to the January 3, 2013, memorandum from the Acting Principal Deputy Under Secretary of Defense for Personnel and Readiness, all Department of Defense (DOD) components conducted an audit of all “locally hired overseas employees” (that is, employees hired overseas) currently receiving a living quarters allowance (LQA). On May 15, 2013, the Acting Under Secretary of Defense for Personnel and Readiness issued a memorandum announcing the LQA audit conclusion and results. The audit results showed that 680 DOD civilian employees were considered to have been “erroneously paid LQA after having been hired overseas,” 444 of whom were identified in the audit as being ineligible for LQA because of inconsistency with the single employer interpretation. Table 2 shows the DOD components to which the 680 employees were assigned, as identified in DOD’s 2013 audit as erroneously receiving LQA. 
Figure 2 shows the command locations of the overseas assignments of the 680 employees identified in DOD's 2013 audit of LQA. Employees identified by the 2013 LQA audit as erroneously receiving LQA were determined to owe a debt to the United States for the full amount of LQA payments they had been erroneously granted. DOD is required to initiate collection on all debts due the United States promptly and in accordance with applicable laws and regulations. Because of the unique circumstances involved with these debts, however, the Office of the Under Secretary of Defense for Personnel and Readiness decided that it was in the best interest of the department to support requests for debt waivers, so long as each employee making a request was unaware that he or she had not been entitled to LQA and there was no evidence of misrepresentation, fraud, or deception in initially acquiring LQA. The Defense Finance and Accounting Service received 592 debt waiver applications and sent them to the Defense Office of Hearings and Appeals for review; all of these applications were approved. As shown in table 3, the total amount of debt incurred was about $104.5 million. Table 4 shows the status, as of January 2015, of the employees who were identified in DOD's 2013 audit of LQA.

DOD took additional measures to assist those employees who were determined by the 2013 LQA audit to have been erroneously paid LQA. For example, a team from the Defense Finance and Accounting Service traveled to local human resource offices in the U.S. European Command and U.S. Pacific Command areas of responsibility to directly assist employees who were determined to have been erroneously paid LQA with preparing requests for waivers of debt. In addition, DOD authorized a temporary limited exception to a standard priority placement program for employees determined to have been erroneously paid LQA that allowed them to be placed in U.S. job vacancies that were otherwise subject to a hiring freeze. DOD also provided counseling services to those employees who were determined to have been erroneously paid LQA.

In addition to the contact named above, Tina Won Sherman, Assistant Director; Tracy Barnes; Nick Benne; Tom Costa; Alissa Czyz; Lorraine Ettaro; Susannah Hawthorne; Amie Lesser; Biza Repko; Steven Rocker; Wayne Turowski; Sarah Veale; and Cheryl Weissman made key contributions to this report.
DOD provides LQA as an incentive to recruit eligible individuals for civilian employee assignments overseas. In 2014 DOD spent almost $504 million on LQA for about 16,500 civilian employees to help defray overseas living expenses, such as rent and utilities. GAO was asked to review DOD's implementation of LQA policies for overseas employees. This report evaluates the extent to which (1) DOD has clarified its LQA eligibility requirements and is monitoring its components' LQA eligibility determinations; and (2) DOD, State, and OPM have helped ensure consistency in the interpretation of LQA eligibility requirements. GAO reviewed the DSSR, DOD's LQA Instruction, and OPM compensation claim decisions. GAO interviewed DOD, State, and OPM officials responsible for overseeing, implementing, or interpreting LQA eligibility requirements, including a nongeneralizable sample of 15 DOD local human resource offices in the U.S. European Command and U.S. Pacific Command areas of responsibility selected based on the number of employees determined to have been erroneously paid LQA in DOD's 2013 LQA audit.

The Department of Defense (DOD) and its components have taken steps to clarify living quarters allowance (LQA) eligibility requirements for civilian employees overseas, but DOD has not monitored its components' LQA eligibility determinations. DOD and its components are to make LQA eligibility determinations in accordance with Department of State (State) Standardized Regulations (DSSR) as well as department-wide and component-level guidance. However, after conducting an audit in 2013, DOD determined that 680 of its civilian employees had erroneously received LQA. Most erroneous LQA payments were attributed to misinterpretations of eligibility requirements. This determination was based in part on a 2011 interpretation of a DSSR eligibility requirement for LQA by the Office of Personnel Management (OPM), which settles federal employee compensation claims. After the audit, DOD issued a memorandum and point paper to implement OPM's interpretation and clarify LQA eligibility requirements. DOD is also updating its LQA Instruction, DOD Instruction 1400.25, Volume 1250, to incorporate OPM's 2011 interpretation. Some DOD components also issued clarifying guidance and adopted new procedures for making LQA eligibility determinations. For example, U.S. Air Forces in Europe developed a flow chart to help human resource specialists determine whether overseas job applicants are eligible for LQA.

DOD's LQA Instruction directs DOD components to conduct periodic quality assurance reviews of LQA eligibility and payments, but according to DOD and component officials, they have not consistently done so. Further, the Deputy Assistant Secretary of Defense for Civilian Personnel Policy is responsible for monitoring the implementation and effectiveness of DOD's LQA Instruction and administers this responsibility through the Defense Civilian Personnel Advisory Service. However, this office has not monitored its components' reviews of LQA eligibility determinations. Without such monitoring, DOD cannot ensure that LQA eligibility determinations are being made in accordance with applicable regulations and policies.

Agencies have missed opportunities to ensure consistent interpretation of LQA eligibility requirements. DOD components have raised concerns that some DSSR LQA eligibility requirements are ambiguous or outdated, but DOD has not discussed these concerns with State to determine whether the DSSR should be revised.
State officials told GAO that they have collaborated with DOD and other agencies on eligibility issues for other allowances in the past and would be open to future discussions. Without communicating its concerns to State, DOD cannot ensure that State has the information it needs to make any adjustments to the DSSR, if appropriate. Until recently, OPM had not made its compensation claim decisions widely available to federal agencies, including DOD, and the public because of limited funding. OPM is implementing a new web application for posting compensation claim decisions to its website but has not established time frames for routinely posting individual decisions. Without such time frames, OPM cannot ensure that agencies will have timely access to the most up-to-date information on LQA eligibility issues.

GAO recommends that DOD monitor components' reviews of LQA eligibility determinations and discuss concerns about DSSR LQA eligibility requirements with State. GAO also recommends that OPM develop time frames for the timely web posting of its decisions. DOD and OPM concurred with GAO's recommendations.
In 1980, the Congress passed the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), which established the Superfund program to clean up highly contaminated hazardous waste sites. EPA administers the program, oversees cleanups performed by the parties responsible for contaminating the sites, and performs cleanups itself. State governments also have a role in the Superfund process. States may enter into contracts or cooperative agreements with EPA to carry out certain Superfund actions, including evaluating sites, cleaning them up, and overseeing the cleanups. In addition, most states have established their own hazardous waste programs that can clean up sites independently of the federal Superfund program. State cleanup programs include efforts to enforce state cleanup laws on responsible parties and to encourage them to "voluntarily" clean up contaminated sites.

CERCLA requires EPA to develop and maintain a list of hazardous sites, known as the National Priorities List, that the agency considers to present the most serious threats to human health and the environment. These sites represent EPA's highest priorities for cleanup nationwide. Although EPA may undertake cleanup actions at contaminated sites not on the National Priorities List, the agency's regulations stipulate that only sites placed on the list are eligible for long-term cleanup ("remedial action") financed by the agency under the trust fund established by CERCLA. Additional details on EPA's process for placing sites on the National Priorities List are included in appendix I.

The 3,036 sites that were awaiting a National Priorities List decision as of October 1997 represent only a portion of the sites that EPA has evaluated and classified over the history of the Superfund program. According to EPA, as of November 1998, the Superfund program had investigated over 40,000 potential hazardous waste sites and made final decisions about whether or not to include almost 35,000 sites on the National Priorities List. EPA also reported that it has removed waste or taken other interim cleanup actions at over 5,500 sites—most of which are not on the National Priorities List—to address the most urgent risks and stabilize conditions to prevent further releases of contamination. Of the more than 1,400 sites EPA has placed on the list, it has completed cleanup studies at most and cleanup construction at 585. States have reported cleaning up thousands of sites under their own programs and authorities.

To obtain information on the 3,036 sites that EPA identified as awaiting a National Priorities List decision, we developed and mailed two surveys for each nonfederal site and three surveys for each federal facility. We sent surveys to site assessment officials in EPA's 10 regional offices, and since state officials might have more knowledge of some of the sites, we also sent surveys to the 50 states, the District of Columbia, Guam, Midway Island, the Northern Mariana Islands, Puerto Rico, and the Navajo Nation (collectively referred to as states in this report). In addition, if a federal agency is responsible for cleaning up sites, we also sent surveys to that agency: We surveyed 14 federal agencies for 157 of the 3,036 sites that are federally owned and/or operated. Because we did not receive responses from some states and received incomplete responses from others, we sent follow-up surveys to state officials.
In total, we received one or more survey responses for 3,023 (99.6 percent) of the 3,036 sites identified by EPA as awaiting a National Priorities List decision. We discuss our methodology in greater detail in appendix II, and appendix III includes reproductions of our surveys.

The responses to our surveys of officials of EPA, other federal agencies, and states indicate that 1,789 of the 3,036 sites classified by EPA's database as awaiting a National Priorities List decision are potentially eligible for the list. Another 1,234 sites are unlikely to become eligible for the Superfund program for various reasons. First, EPA's database of potentially contaminated sites, known as the Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS), inaccurately lists some sites as awaiting a National Priorities List decision although they are not eligible for listing. EPA regions reported that about 19 percent of the 3,036 sites should not be considered eligible sites because (1) they received preliminary hazard ranking scores below the qualifying level or (2) EPA has already proposed them for the list or decided not to propose them for the list. According to an EPA Superfund program official, the incorrect data entries may have resulted from regional program managers' misinterpretation of EPA's guidance on CERCLIS coding. We consider another 22 percent of the sites unlikely to become eligible for the National Priorities List because, according to responding officials, they either do not require any cleanup action (183 sites), have already been cleaned up (182 sites), or are currently undergoing final cleanup (304 sites) under state programs. No information is available on the status of the remaining 13 sites because of missing survey responses (see fig. 1).

We performed most of our analysis of site conditions, cleanup activities, and plans for future cleanups for the 1,789 sites remaining after we excluded the categories of sites that are shaded in the figure. We refer to the remaining sites as potentially eligible sites. They include 1,739 nonfederal sites and 50 federal facilities.

Responses to our surveys indicate that many of the 1,789 sites that are potentially eligible for the National Priorities List pose risks to human health or the environment. Most of them threaten drinking water sources or groundwater; they are generally located in populated areas; and although many of the sites are fenced to prevent entry, workers, visitors, and trespassers may have direct contact with contaminants at more than half of the sites. The sites are contaminated most often with metals, but other contaminants are also present. Officials of EPA, other federal agencies, and states who responded to our survey characterized the risks presented by about two-thirds of the potentially eligible sites. They said that about 17 percent of the sites currently pose high human health and environmental risks; another 10 percent of the sites potentially pose high future risks. In addition, officials were unsure about the severity of site conditions for a large proportion of potentially eligible sites. A large portion of the potentially eligible sites have contaminated nearby groundwater, drinking water sources, or both.
As figures 2 and 3 indicate, about 73 percent of the potentially eligible sites have already contaminated groundwater, and approximately another 22 percent could contaminate groundwater in the future. In addition, about 32 percent of the potentially eligible sites have already contaminated drinking water sources, and about 56 percent more could contaminate drinking water sources in the future.

The contamination at many of the potentially eligible sites is also resulting in a number of other adverse conditions. Table 1 shows the percentage of potentially eligible sites that have experienced or contributed to specific conditions. As the table also shows, respondents to our surveys were uncertain whether the conditions were present at a relatively large percentage of the potentially eligible sites.

As figure 4 shows, the sites that are potentially eligible for the National Priorities List are contaminated by a variety of pollutants. Metals—primarily heavy metals such as lead, mercury, or cadmium—are the principal contaminants at these sites. These metals can cause brain and kidney damage and birth defects. The second most prominent contaminants at these sites are volatile organic compounds (VOC). VOCs are carbon-based compounds, such as benzene, that easily become vapors or gases and can cause cancer, as well as damage to the blood, immune, and reproductive systems. A large portion of the potentially eligible sites are also contaminated by semivolatile organic compounds (SVOC), which are similar to VOCs and can result in human respiratory illnesses. Additional major contaminants at the sites are pesticides, the most toxic of which can cause acute nervous system effects and skin irritations and may cause reproductive system effects and cancer; polychlorinated biphenyls (PCB), which can cause skin irritations and other related conditions and may contribute to causing cancers, liver damage, and reproductive and developmental effects; dioxins, which are suspected human carcinogens; and other unspecified contaminants.

The potentially eligible sites are generally located in populated areas: Ninety-six percent are within a half mile of residences or places of regular employment. We asked officials of EPA, other federal agencies, and states to rank the relative risks of potentially eligible sites. The officials responding to our surveys said that they could assess the current risks of 67 percent of the sites and the potential risks of 68 percent of the sites. According to these officials, about 17 percent of the potentially eligible sites currently pose high risks (see fig. 5), and another 10 percent of the sites (for a total of 27 percent) could pose high risks in the future (see fig. 6) if they are not cleaned up.

The 1,789 sites that are potentially eligible for the National Priorities List include (1) 686 sites where some cleanup activities have reportedly taken place or are currently being conducted but the final cleanup remedies are not yet under way and (2) 1,103 sites where officials reported that no substantive cleanup activities beyond initial site assessments or investigations have occurred or no information on cleanup progress is available.
Data on the year in which each potentially eligible site was entered into EPA's records—the "discovery date"—indicate that a significant portion of these sites have been in EPA's and states' inventories of known hazardous waste sites for more than a decade. Furthermore, 45 percent of the sites reported to have high current risks and 47 percent of the sites with high potential risks have not had any cleanup activities, or no information on their cleanup progress is available.

EPA, other federal agencies, and the states reported conducting some cleanup actions at 38 percent of the potentially eligible sites. Figure 7 shows the number and percentage of potentially eligible sites at which federal and state agencies have undertaken some cleanup activities or conducted other actions such as providing alternative water supplies. (App. IV presents data on the distribution of the sites with and without reported cleanup actions among states and responsible federal agencies.) EPA, other federal agencies, and the states have completed removal actions or interim, partial response actions (not characterized by survey respondents as final cleanup solutions), including changing the water supplies of affected residents, at 576 of the 686 sites with cleanup actions. At the other 110 sites, responding officials told us that some cleanup is under way, but they are not sure whether it will be a final response. EPA, other federal agencies, and the states reported conducting no cleanup activities beyond site assessments at the remaining 1,103 potentially eligible sites, or no information on cleanup progress at these sites is available. One hundred seventy (55 percent) of the 307 sites that are estimated to currently pose high risks have undergone some cleanup activities, while 137 (45 percent) of these sites reportedly have seen no cleanup activities, or no information on cleanup progress is available (see fig. 8). Similarly, 254 (53 percent) of the 476 sites said to potentially pose high risks have undergone some cleanup actions, and 222 (47 percent) have reportedly undergone none, or information is lacking (see fig. 9). See appendix V for additional discussion of the sites at which cleanup actions have been taken.

Most of the hazardous waste sites that are potentially eligible for the National Priorities List were "discovered," that is, entered into EPA's inventory of sites needing examination, more than a decade ago. As table 2 indicates, 10 percent of the potentially eligible sites were discovered in 1979 or earlier, and 42 percent were discovered before 1985. As shown in figure 10, one-third of the sites that have been known for 10 to 14 years and another third of the sites that have been in the inventory for 15 years or more have undergone some cleanup activities. Conversely, the majority of the sites that have been known for 10 years or more have reportedly made no cleanup progress, or no information on cleanup progress is available.

According to the CERCLIS database, many of the potentially eligible sites have not only been in the inventory for a long time but have also been awaiting a National Priorities List decision for several years. The CERCLIS database records the date of the "last action" taken at the inventory sites, including, among other actions, the completion of site inspections or expanded site inspections. These dates generally can be used as an indication of when the sites became potentially eligible for placement on the National Priorities List.
The last action recorded for 87 percent of the potentially eligible sites is the completion of a site inspection. Another 12 percent of the sites have completed or are undergoing expanded site inspections. The data show that the last action at half of the potentially eligible sites occurred in 1994 or earlier. The last action date for 24 percent of the sites is 1995, and for 27 percent, 1996 or later. For 4 percent of the sites, the last recorded action took place before 1990.

It is uncertain whether most potentially eligible sites will be cleaned up; who will do the cleanup; under what programs these activities will occur; what the extent of responsible parties' participation will be; and when cleanup actions, if any, are likely to begin. Responding officials did not indicate the final outcome for 53 percent of the 1,789 potentially eligible sites (see fig. 11). They estimated that 536 (30 percent) of the sites will be cleaned up under state programs but usually could not give a date for the start of cleanup or say whether responsible parties would participate. Collectively, they believed that 232 (13 percent) of the potentially eligible sites may be listed on the National Priorities List and cleaned up under the Superfund program, but there are few sites that both federal and state officials agreed would be listed (see fig. 12).

Respondents thought that the largest portion of the potentially eligible sites for which they could predict a cleanup outcome—536 sites, or 30 percent of the 1,789 sites—are likely to be cleaned up under state enforcement or voluntary cleanup programs. However, state officials were able to estimate a likely start date for cleanup at only 121 (23 percent) of the 536 sites. They expected to begin cleanup activities at 84 of these sites before the end of 1998 and at 35 sites by the year 2000. State officials also said that parties responsible for the waste at the sites that are expected to be cleaned up under state programs are likely to clean up only 172 (32 percent) of the 536 sites. Such parties are unlikely to participate in cleanups at another 29 (5 percent) of these sites. For the remaining two-thirds of the sites that states reported are likely to be cleaned up under state programs, the extent of responsible parties' participation is uncertain. Our survey data also show that states are more likely to have cleanup plans for the near future (within 5 years) if responsible parties are available to pay for cleanups. If responsible parties are expected to clean up a site, states are more than twice as likely to have plans to begin work on the cleanup within the next 5 years (10 percent) as for a site at which cleanup by responsible parties is unlikely (4 percent). Furthermore, states are most likely to have plans to complete the cleanup within 5 years if responsible parties are likely to clean up all or almost all of the site. Twenty-one percent of the sites with such parties are expected to be completed by 2003.

State officials also provided information about their state's capabilities for compelling responsible parties to clean up potentially eligible sites or to fund cleanup activities, if necessary. Officials of 33 (75 percent) of the 44 states participating in our telephone survey said that their state's enforcement capacity (including resources and legal authority) to compel responsible parties to clean up potentially eligible sites is excellent or good.
Officials of 5 (11 percent) of the participating states believed that their state's enforcement capacity is fair, and another 5 (11 percent) said that their state's enforcement capacity is poor or very poor. The remaining state official was uncertain about the state's enforcement capability. Furthermore, officials of 11 states (25 percent) told us that their state's financial capability to clean up potentially eligible sites, if necessary, is excellent or good. Officials of 7 (16 percent) of the states said that their state's ability to fund cleanups is fair, and 23 (52 percent) said that their state's ability to fund these cleanups is poor or very poor. The remaining three officials were uncertain about their state's funding capability. (App. VI presents, by state, officials' assessments of their state's ability to fund cleanup activities at potentially eligible sites.) EPA officials told us that 43 potentially eligible sites are likely to be cleaned up under other programs such as the Resource Conservation and Recovery Act program.

EPA or state officials said that, in their opinion, as many as 232 (13 percent) of the potentially eligible sites may be listed on the National Priorities List in the future. As shown in figure 12, EPA and the states agreed on the possible listing of only a few sites. In general, EPA and state officials believed that those sites with responsible parties who are likely to clean them up are less likely candidates for placement on the National Priorities List. Of the 232 sites cited as possible National Priorities List candidates, 154 (66 percent) have no identified responsible party or no responsible party who officials felt certain is able and willing to conduct cleanup activities. Survey respondents considered such parties likely to clean up all or almost all of only 22 (9 percent) of the 232 sites. No information was provided on the likely extent of responsible parties' participation in cleaning up the remaining 24 percent of these sites. High-risk sites are more likely to be cited as National Priorities List candidates than others. One hundred twenty-nine (56 percent) of the sites that may be listed on the National Priorities List currently pose high risks, according to survey respondents. Another 45 (19 percent) of the sites pose average risks, and 12 sites (5 percent) pose low risks. Responding officials were unable to estimate the risks of the remaining 46 (20 percent) of these sites.

In our telephone surveys, we asked state officials about the types of sites that the states prefer to be placed on the National Priorities List. Officials of 26 (60 percent) of the 44 states that participated in the surveys told us that they are more likely to support listing sites whose cleanup costs are very high compared with those of other types of sites.

Although respondents from EPA, other federal agencies, and states jointly believed that as many as 232 of the potentially eligible sites may eventually be placed on the list, none of these sites has yet been proposed for listing. EPA respondents cited several major reasons that the agency has not yet decided whether to propose these sites for the National Priorities List or remove them from further consideration for listing. The most common reasons were that EPA considers the state program to have the lead for cleanup or that more data on the current risks of the sites are needed. Other major factors are shown in figure 13.
EPA has already decided whether or not to place on the National Priorities List most of the sites that have come into its hazardous waste site inventory. However, decisions to list a large number of sites potentially eligible to enter the Superfund program or to exclude them from further consideration for listing have been deferred, in many cases for over a decade. Our surveys of officials of EPA, other federal agencies, and states indicate that there is a need to decide how to address these potentially eligible sites. First, about a quarter of the sites may pose high risks to human health and the environment, in the opinion of officials responding to our surveys. Responding officials said that they cannot rank the risks of another third of the sites. Second, some cleanup activities were reported to have occurred at only about half of the sites whose risks were rated high by survey respondents. Third, although all 1,789 potentially eligible sites included in our surveys may require cleanup, officials of EPA, other federal agencies, and states are uncertain about what cleanup actions will be taken at more than half of them and whether EPA or the states should take these actions. Furthermore, some states have concerns about their enforcement and resource capabilities for cleaning up sites.

In view of the risks associated with many of the potentially eligible sites and the length of time that EPA or the states have known of them, timely action by EPA and the states is needed to obtain the information required to assess the sites' risks, set priorities for cleanups, assign responsibility to EPA or the states for arranging the cleanups, and inform the public as to which party is responsible for each site's cleanup. Also, as part of the process, inaccurate or out-of-date information on sites that are classified in the CERCLIS database as awaiting a National Priorities List decision needs to be corrected.

Because of the need for current and accurate information on the risks posed by the 1,789 sites that are potentially eligible for the National Priorities List in order to set cleanup priorities and delineate cleanup responsibilities, we recommend that the Administrator, EPA,

in consultation with each applicable state, (1) develop a timetable for EPA or the state to characterize and rank the risks associated with the potentially eligible sites and (2) establish interim cleanup measures that may be appropriate for EPA and the state to take at potentially eligible sites that pose the highest risks while these sites await either placement on the National Priorities List or state action to fully clean them up;

in consultation with each applicable state, (1) develop a timetable for determining whether EPA or the state will be responsible for cleaning up individual sites, taking into consideration, among other factors, some states' limited resources and enforcement authority, and (2) once a determination is made, notify the public as to which party is responsible for cleaning up each site; and

correct the errors in the CERCLIS database that incorrectly classify sites as awaiting a National Priorities List decision and prevent the recurrence of such errors so that the database accurately reflects whether sites are awaiting a listing decision.

We provided copies of a draft of this report to EPA for its review and comment. EPA provided written comments, which are reproduced in appendix VII.
Overall, EPA agreed with the basic findings and recommendations of the report and stated that it believes that the report will be useful to the Congress, the agency, states, and others interested in the future of the Superfund program. EPA also said that it has made National Priorities List decisions for many of the sites in its hazardous waste site inventory and made significant progress toward cleaning up listed sites. We have added this information to the report. EPA also provided technical and clarifying comments that we have incorporated in the report as appropriate.

We attempted to obtain information on all 3,036 sites that EPA has identified as awaiting a National Priorities List decision, including 157 federal sites and 2,879 nonfederal sites. To obtain this information, we developed surveys that we sent to officials in EPA's 10 regional offices, the 50 states, the District of Columbia, Guam, Midway Island, the Northern Mariana Islands, Puerto Rico, the Navajo Nation, and 14 other federal agencies with responsibility for sites that are potentially eligible for the National Priorities List and awaiting EPA's decision on their disposition. These agencies include the departments of Agriculture, the Air Force, the Army, Defense, Energy, the Interior, the Navy, and Transportation; the Bureau of Land Management; the General Services Administration; the National Aeronautics and Space Administration; the U.S. Army Corps of Engineers; the U.S. Coast Guard; and the U.S. Forest Service. We also conducted a telephone survey with officials in 44 states to determine general information on their hazardous waste management programs and sites within their jurisdiction. (App. II discusses our scope and methodology in greater detail.) We conducted our review between May 1997 and November 1998 in accordance with generally accepted government auditing standards.

As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees; the Administrator, EPA; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix VIII.

The Environmental Protection Agency's (EPA) regulations outline a formal process for assessing hazardous waste sites and placing them on the National Priorities List (NPL). The process begins when EPA receives a report of a potentially hazardous waste site from a state government, a private citizen, or a responsible federal agency. EPA enters a potentially contaminated site into a database known as the Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS). EPA or the state in which the potentially contaminated site is located then conducts a preliminary assessment to decide whether the site poses a potential threat to human health and the environment. (According to EPA, about half of the assessments are conducted by states under funding from EPA.) If the preliminary assessment shows that contamination may exist, EPA or a state under an agreement with the agency may conduct a site inspection, a more detailed examination of possible contamination, and in some cases a follow-on examination called an expanded site inspection.
Using information from the preliminary assessment and site inspection, EPA applies its Hazard Ranking System to evaluate the site's potential threat to the public health and the environment. The system assigns each site a score ranging from 0 to 100 for use as a screening tool to determine whether the site should be considered for further action under Superfund. A site with a score of 28.5 or higher is considered for placement on the NPL. Once EPA determines that an eligible site warrants listing, the agency first proposes that the site be placed on the NPL and then, after receiving public comments, either lists it or removes it from further consideration. EPA may choose not to list a site if a state prefers to deal with it under its own cleanup program. Generally, EPA's policy is not to list sites on the NPL unless the governor of the state in which the site is located concurs with its listing.

Our objectives in this review were to (1) determine the number of sites awaiting an NPL decision that remain potentially eligible for the list; (2) describe the characteristics of these sites, including their health and environmental risks; (3) determine the status of any actions to clean up these sites; and (4) collect the opinions of EPA and other federal and state officials on the likely final disposition of these sites, including the number of sites that are likely to be added to the Superfund program.

EPA's CERCLIS database indicates that as of October 8, 1997, 3,036 sites were potentially eligible for the NPL on the basis of a combination of criteria. These criteria include a preliminary Hazard Ranking System score of 28.5 or above, the completion of a site inspection or the initiation of an expanded site inspection, and a status that neither eliminates the site from consideration for the NPL nor includes a proposal to list it. Because our objectives require data for each site, we did not sample the sites but included all 3,036 in our survey.

To obtain information on all 3,036 sites that EPA identified as awaiting an NPL decision, we developed three mail surveys. These surveys appear in appendix III. We sent the first of the surveys to officials in EPA's 10 regional offices responsible for evaluating the sites and making decisions about listing. Because state officials may have closer contact with some of the sites, we sent the second survey to officials in the 50 states, the District of Columbia, Puerto Rico, Guam, the Northern Mariana Islands, Midway Island, and the Navajo Nation (collectively referred to as states in this report). In addition, we sent a third survey to federal agencies that are responsible for cleaning up the 157 federally owned and/or operated sites that were classified as awaiting an NPL decision. We sent surveys on the 157 sites to 14 federal agencies, including the departments of Agriculture, the Air Force, the Army, Defense, Energy, the Interior, the Navy, and Transportation; the Bureau of Land Management; the General Services Administration; the National Aeronautics and Space Administration; the U.S. Army Corps of Engineers; the U.S. Coast Guard; and the U.S. Forest Service. The three surveys asked respondents for detailed information on the conditions at each site, including the site's current and potential risks, and their opinions on the involvement of potentially responsible parties and the likely outcome for the site's cleanup, including any potential for NPL listing.
We mailed our three surveys in November and December 1997 and received the final survey responses in September 1998. We received one or more survey responses for 3,023 (99.6 percent) of the 3,036 sites identified by EPA as awaiting an NPL decision. On the basis of these responses, we identified 1,234 sites that are no longer eligible for the NPL or no longer awaiting an NPL decision. Because we received no survey responses for 13 sites, we could not determine whether they are still eligible for the NPL; therefore, we excluded these sites from our analyses. The remaining 1,789 sites are analyzed in this report as potentially eligible sites. Of these sites, 1,739 were nonfederal sites, and 50 were federally owned and/or operated sites.

Through our surveys, we obtained information from both EPA and the states on 1,319 (76 percent) of the 1,739 potentially eligible nonfederal sites. This information includes 1,326 state responses (76 percent) and 1,732 responses from EPA (99.6 percent). Similarly, we obtained information from at least two of the three possible respondents—EPA, other federal agencies, and states—for 45 (90 percent) of the 50 potentially eligible federal sites. Responsible federal agencies provided information for 39 (78 percent) of the 50 potentially eligible federal sites, states provided responses for 26 (52 percent) of the federal sites, and EPA regions provided responses for 49 (98 percent) of the federal sites.

Because 19 states—including California, Massachusetts, and New York, which account for 19 percent of the 3,036 sites—did not fully respond to our initial survey mailing, in July 1998 we sent a second survey to these states. In order to minimize the effort required for states to complete this follow-up survey, we eliminated sites that EPA and other federal agencies had identified as no longer eligible for the NPL. In addition, the follow-up survey included as a starting point the information on each site that EPA regions had provided in their responses. We asked state officials to confirm or correct the information provided to us by EPA regions. In the follow-up survey, we also repeated the original questions asked of the states but not of EPA regions. The original state survey was included as a reference source. This follow-up effort resulted in our receiving an additional 85 completed surveys from some states.

However, despite numerous contacts, we received no survey responses from California, Massachusetts, Nebraska, and the District of Columbia. Rather than responding to our survey, California officials suggested that we obtain their responses to a brief 1-page survey on NPL-eligible sites conducted by the Association of State and Territorial Solid Waste Management Officials. Similarly, Massachusetts officials provided us copies of their responses to the Association's survey. However, because of differences in the format, specificity of answers, comparability of answers, and topics covered, we could not incorporate the results of that survey into our analyses. In addition, New York State officials agreed to respond to only three survey questions for the sites in the state that EPA classified as awaiting an NPL decision. The three questions asked for information about whether sites would be listed on the NPL and what state cleanup activities had occurred at the sites. The responses to these questions were incorporated into our analyses. While our overall survey response rate was high, our data for some states are incomplete.
We did not receive fully completed state surveys for 491 of the 1,789 potentially eligible sites. Nearly two-thirds of these sites are located in California (125 sites) and Massachusetts (190 sites). In addition, we received only partial information from New York for 54 of its 56 potentially eligible sites. Table II.1 shows the 16 states that either did not respond to our survey or responded only in part, and the number and percentage of potentially eligible sites in each state for which we did not receive fully completed surveys.

EPA regions I and V notified us that because of time and resource constraints, they had taken a generic approach to answering certain survey questions: that is, they answered certain questions in a standardized manner for all sites in the region rather than on a site-specific basis. Questions addressed in this manner included, among others, those relating to the likely placement of sites on the NPL and the risks posed by the sites. For example, for most sites, Region I answered our questions about the degree of human health or environmental risks posed by each site by responding that it is "too early to tell/more information is needed to answer" because, according to Region I officials, "risk assessments are not conducted for most CERCLIS sites, and thus the current risks posed by these sites are difficult to determine." EPA Region II responded to key survey questions in a similar manner. Consequently, because EPA regions I, II, and V and three states in those regions—Massachusetts (190 sites), New Jersey (66 sites), and New York (54 sites)—did not provide complete survey information, we could not characterize the conditions at these sites with the same degree of accuracy as for other sites. For example, these three states account for 54 percent of the sites for which we could not obtain an official's estimate of the risks to human health and the environment.

We conducted pretests of our surveys with officials in six states, at two federal agencies, and in five EPA regional offices. Each pretest consisted of a visit by GAO staff with an official. We attempted to vary the types of sites for which we conducted pretests and the familiarity of the respondents with the sites. In some cases, the respondent used only site records to answer our survey. In other cases, the respondent knew most of the answers without consulting records. The pretest attempted to simulate the actual survey experience by asking the official to fill out the survey while GAO staff observed and took notes. Then the official was interviewed about the survey items to ensure that (1) the questions were readable and clear, (2) terms were precise, (3) the survey was not a burden that would result in a lack of cooperation, and (4) the survey appeared independent and unbiased. We made appropriate changes to the final survey on the basis of our pretesting. In addition to our pretesting, we obtained views on our surveys from managers in EPA's Office of Emergency and Remedial Response in Washington, D.C., which oversees the Superfund program. We incorporated comments from these reviews as appropriate.
In analyzing survey responses, we reviewed comments written by respondents on the surveys, including marginal comments, comments at the end of the survey, and explanations the respondents provided after checking "other." If a respondent's comment explaining the selection of "other" could reasonably be interpreted as another of the answer choices provided for the question, we revised the response as appropriate. In some cases, respondents' comments indicated a misunderstanding of our questions or answer choices. In these cases, where possible, we revised the response to reflect the appropriate answer. In other cases, respondents checked more than one answer; we then selected, where possible, what we considered to be the appropriate answer, on the basis of other responses in the survey or our own judgment. The procedures used in this editing process were documented in an internal 17-page document provided to all of the GAO reviewers of the survey responses. At least two reviewers analyzed each survey response and coordinated their efforts to ensure that the established procedures were followed consistently. Both the original answers and the answers revised by reviewers were recorded.

In our surveys of officials of EPA regions, states, and federal agencies, some of the questions we asked about particular sites were identical. We combined the responses to these questions where possible in this report. If opinions differed, we used a set of criteria to combine answers. Namely, we chose the answer that seemed to reflect the most knowledge of the site. For site conditions, we assumed that any affirmative answer was the more knowledgeable response. For example, if one respondent said that a site has groundwater contamination and the other respondent was unable to comment on that site's contamination, we recorded the site as having groundwater contamination. We also sought to avoid understating the risks posed by a site. Therefore, if respondents disagreed on the level of a site's risks, we selected the response indicating the more severe threat. For example, sites scored by any respondent as high-risk were recorded as high-risk sites. Furthermore, if a respondent indicated in any survey response that a site might be included on the NPL, we recorded the site as a possible candidate for the NPL. Finally, when opinions about the most likely outcome for a site were in conflict—for example, if the state thought that EPA would clean up a site but EPA thought the state would conduct the cleanup—we recorded the most likely outcome as unknown.

In addition to our mail surveys, we conducted a telephone survey with officials in 44 states to determine general information on their hazardous waste management programs and sites within their states. State officials in Idaho, New York, Missouri, Utah, Virginia, and Wyoming declined to participate in our telephone survey. We conducted our review between May 1997 and November 1998 in accordance with generally accepted government auditing standards.

The 1,789 sites that are potentially eligible for the NPL include 1,739 nonfederal sites and 50 federal facilities. Among the 1,789 sites, there are (1) 686 sites at which some cleanup activities have taken place or are currently being conducted, but the final cleanup remedy is not yet under way, and (2) 1,103 sites for which no substantive cleanup activities have been reported or no information on cleanup progress is available.
The 1,789 sites that are potentially eligible for placement on the NPL are located in 48 states, the District of Columbia, Puerto Rico, and the Northern Mariana Islands and under the jurisdiction of the Navajo Nation (hereinafter referred to as states). Table IV.1 shows, for each state, the number of (1) sites classified in EPA's inventory as awaiting an NPL decision as of October 8, 1997, (2) sites that our surveys indicate are unlikely to become eligible for the NPL, (3) potentially eligible sites at which some cleanup activities have been conducted, (4) potentially eligible sites at which there has been no reported cleanup progress or for which no information on cleanup progress is available, and (5) sites for which we received no surveys. California, the District of Columbia, Massachusetts, and Nebraska did not respond to our surveys. For these states, the data in table IV.1 are based on EPA's survey responses alone and, for that reason, may be less reliable than for states having responses from both EPA and states. New York provided responses to only a few questions in our survey.

Under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), federal agencies are responsible, under EPA's supervision, for evaluating and cleaning up properties under their jurisdiction. As required by CERCLA, EPA has established a Federal Agency Hazardous Waste Compliance Docket that lists federal facilities awaiting evaluation for possible cleanup. Once a federal facility is listed on the docket, the responsible agency then conducts a preliminary assessment to gather data on the facility and performs a site inspection, which may involve taking and analyzing samples, to learn more about potential contamination there. Ten federal agencies other than EPA have primary responsibility for managing the 50 federal facilities that are potentially eligible for the NPL. Table IV.2 presents for each agency the number of (1) sites classified in EPA's inventory as awaiting an NPL decision as of October 8, 1997, (2) sites that our surveys indicate are unlikely to become eligible for the NPL, (3) potentially eligible sites at which some cleanup activities have been conducted, and (4) potentially eligible sites at which there has been no reported cleanup progress or for which no information on cleanup progress is available.

We asked officials of EPA, other federal agencies, and states about the cleanup actions that have been conducted at the potentially eligible sites. These activities include interim measures to mitigate the contamination, such as removing waste or taking action to protect people against contaminated drinking water sources. These actions were not considered by the officials to be final cleanup remedies. As figure V.1 shows, of the total 1,789 potentially eligible sites, 13 percent exhibit one or more of the conditions associated with contaminated drinking water sources. The majority of these sites have undergone some cleanup activities. Survey data indicate that some cleanup activities have occurred at 77 percent of the sites for which nearby residents are advised not to use wells and at 72 percent of the sites for which residents are advised to use bottled water. Figure V.1 includes, among other factors, the five most prevalent adverse conditions identified by officials responding to our surveys.
As this figure indicates, the majority of the sites with these conditions reportedly have made no cleanup progress, or no information on cleanup progress is available. No known cleanup actions have been taken at (1) 56 percent of the sites at which workers or visitors may come into direct contact with contaminants; (2) 57 percent of the sites at which trespassers may come into direct contact with contaminants; (3) 52 percent of the sites with fences, barriers, and/or signs to prevent entry into contaminated areas; (4) 61 percent of the sites associated with fish that may be unsafe to eat; and (5) 48 percent of the sites about which nearby residents have expressed some health concerns.

During our telephone survey of officials in 44 states to obtain general information on their hazardous waste management programs, officials gave their opinions about their state's capability to fund cleanup activities if responsible parties were not willing or able to pay for these actions. Officials of about a quarter of the responding states told us that their state's financial capability to clean up potentially eligible sites, if necessary, is excellent or good, and more than half said that their state's ability to fund these cleanups is poor or very poor. Table VI.1 presents, by state, the responding officials' assessments of each state's ability to fund cleanup activities at potentially eligible sites.

James F. Donaghy, Assistant Director; Vincent P. Price, Senior Evaluator; Rosemary Torres Lerma, Staff Evaluator; Fran Featherston, Senior Social Science Analyst; Alice Feldesman, Assistant Director.
Pursuant to a congressional request, GAO surveyed Environmental Protection Agency (EPA) regions, other federal agencies, and states to: (1) determine the number of sites classified as awaiting a National Priorities List (NPL) decision that remain potentially eligible for the list; (2) describe the characteristics of these sites, including their health and environmental risks; (3) determine the status of any actions to clean up these sites; and (4) collect the opinions of EPA and other federal and state officials on the likely final disposition of these sites, including the number of sites that are expected to be added to the NPL.

GAO noted that: (1) on the basis of surveys of EPA regions, other federal agencies, and states, GAO has determined that 1,789 of the 3,036 sites that EPA's database classified as awaiting an NPL decision in October 1997 are still potentially eligible for placement on the list; (2) GAO considered the 1,234 other sites unlikely to become eligible for various reasons; (3) the other sites do not require cleanup in the view of the responding officials, have already been cleaned up, or have final cleanup activities under way; (4) officials of EPA, other federal agencies, and states said that many of the potentially eligible sites present risks to human health and the environment; (5) the potentially eligible sites are generally located in populated areas; (6) officials of EPA, other federal agencies, and states said that about 17 percent of the potentially eligible sites currently pose high human health and environmental risks and that another 10 percent could also pose high risks in the future if they are not cleaned up; (7) however, these officials were unsure about the severity of risks for a large proportion of the sites; (8) responding officials said that some cleanup actions have taken place at 686 of the potentially eligible sites; (9) no cleanup activities beyond initial site assessments or investigations have been conducted, or no information is available on any such actions, at the other 1,103 potentially eligible sites; (10) many of the potentially eligible sites have been in states' and EPA's inventories of hazardous sites for extended periods; (11) 73 percent have been in EPA's inventory for more than a decade; (12) no cleanup progress was reported at the majority of the sites that have been known for 10 years or more; (13) responding officials did not indicate whether or how more than half of the potentially eligible sites would be cleaned up; (14) collectively, EPA and state officials believed that 232 of the potentially eligible sites might be placed on the NPL in the future; (15) however, EPA and the states agreed on the listing prospects of only a small number of specific sites; (16) officials estimated that almost one-third of the potentially eligible sites are likely to be cleaned up under state programs but usually could not give a date for the start of cleanup activities; (17) officials of about 20 percent of the states said that their state's enforcement capacity to compel responsible parties to clean up potentially eligible sites is fair to very poor; and (18) officials of about half of the states told GAO that their state's financial capability to clean up potentially eligible sites is poor or very poor.
As amended, the PSDA requires Medicare and Medicaid funded hospitals, nursing homes, hospices, HHAs, and managed care plans—including MA and Medicaid health maintenance organizations—to maintain written policies and procedures related to advance directives. Among other things, the policies and procedures maintained by covered providers are required to specify that the provider will: (1) provide written information to all adult individuals receiving medical care by or through the provider on their rights under state law to make decisions concerning medical care, including the right to execute an advance directive; and (2) document in the medical record whether the individual has an advance directive. The PSDA defines an advance directive as a written instruction, such as a living will or durable power of attorney for health care, recognized under state law (whether statutory or as recognized by the courts of the state) and relating to the provision of such care when the individual is incapacitated. For example, an advance directive may be used to record an individual's wish to receive all available medical treatment, to withdraw or withhold certain life-sustaining procedures, or to identify an agent to make medical decisions on the individual's behalf if necessary. The most common forms of advance directives are living wills and health care powers of attorney. During the last 6 months of life, many individuals receive care from one or more covered providers, including hospitals and nursing homes; an advance directive that communicates an individual's treatment preferences to these providers may therefore be useful in preparing for the difficult medical decisions that can arise at the end of life. According to the IOM, advance directives are most effective when used as part of advance care planning, which may involve multiple, in-depth discussions with family members, legal and financial counsel, and health care providers, and may also include the formulation of medical orders. The IOM also reported that multiple discussions at various stages of life are needed, with greater specificity as an individual's health deteriorates, because an individual's medical conditions and treatment preferences may change over time. Therefore, a comprehensive approach to end-of-life care, rather than any one document, such as an advance directive, helps to ensure that medical treatment given at the end of life is consistent with an individual's preferences. The six provider types covered by the PSDA provide or arrange for Medicare or Medicaid health care services in multiple settings for individuals of varying demographics. For example, nursing homes generally provide care in an institutional setting to older individuals who have chronic conditions, such as congestive heart failure, while hospices generally deliver palliative care in an institutional, home, or home-like setting to critically ill individuals of various ages who are close to the end of life. The characteristics and distribution of individuals enrolled in MA and Medicaid managed care also have similarities and differences. For example, individuals enrolled in MA may be disabled or elderly (over the age of 65), and individuals enrolled in Medicaid managed care may also include adults who are disabled or elderly. However, in 2013, CMS reported that half of the Medicaid population was made up of children, whereas 83 percent of Medicare beneficiaries were over age 65.
In addition, while most individuals enrolled in Medicare are not enrolled in a managed care plan, over 70 percent of individuals enrolled in Medicaid are enrolled in some form of Medicaid managed care, according to CMS. However, the distribution of individuals enrolled in Medicaid across various demographic groups—disabled, elderly, adults, and children— varies widely by state. In order to participate in the Medicare or Medicaid programs, covered providers must comply with applicable federal standards, including PSDA requirements. CMS is responsible for oversight of providers’ compliance with PSDA requirements and does so through both state survey agencies and accrediting organizations. CMS enters into agreements with state survey agencies to conduct oversight activities of covered providers. Specifically, four of the six covered provider types—hospitals, nursing homes, HHAs, and hospices—must demonstrate their compliance with federal standards to a state survey agency. These agencies conduct surveys of covered providers—observations, interviews, and document/record reviews—that assess compliance with applicable requirements for Medicare and/or Medicaid participation. The survey process covers multiple standards. For example, there are about 200 quality and safety standards for nursing homes that range from determining the prevalence of pressure sores and use of restraints to documenting the posting of an individual’s bill of rights. However, in some cases, particularly for hospitals, accrediting organizations provide primary oversight. Specifically, hospitals, HHAs, and hospices that choose to undergo accreditation by an accrediting organization, rather than certification from a state agency, must demonstrate to the accrediting organization their ability to meet the standards of accreditation, including PSDA standards. The accreditation organization subsequently recommends to CMS certification of providers meeting such standards. The processes that accrediting organizations use to certify providers for Medicare participation are subject to CMS review and approval. In addition, under agreements with CMS, state survey agencies annually survey a sample of accredited providers to verify the results of surveys conducted by the accrediting organizations, and assess the organizations’ ability to monitor providers’ compliance with federal standards. The two remaining covered provider types—MA and Medicaid managed care plans—must contract with CMS or individual states to participate in the Medicare or Medicaid programs. Specifically, under MA, CMS contracts with private health plans to provide covered services to individuals who enroll in an MA plan, while under Medicaid managed care, individual states contract with private health plans to cover medical services; however, both MA and Medicaid managed care plans are prospectively paid a per person, or capitated, payment. CMS develops and disseminates guidance through operations manuals, memoranda, or model documents to five of the six covered provider types—hospitals, nursing homes, HHAs, hospices, and MA plans—to inform these providers of the requirement to maintain written policies and procedures about advance directives and to describe how the agency will monitor providers’ implementation. 
For example, CMS issues operations manuals specific to five provider types that describe the advance directive requirement and how each of these providers is required, in accordance with federal regulations, to maintain and provide each patient with written notice of the provider's policies related to advance directives. These operations manuals also describe how the providers are required to maintain policies related to documentation of an individual's advance directive in the individual's medical record. CMS shares oversight of Medicaid managed care plans, the sixth covered provider type, with individual states. CMS is responsible for approving managed care contracts to ensure that they conform to advance directive requirements in federal regulation, and states are responsible for administering these contracts, including providing guidance to plans and ensuring that plans comply with contractual requirements, according to CMS officials. As a result, CMS does not issue guidance to Medicaid managed care plans. CMS also provides an operations manual to four covered provider types—hospitals, nursing homes, HHAs, and hospices—to help these providers understand the standards state survey agencies will use during surveys to monitor the providers' implementation of the advance directive requirement. Covered provider types may use this information to ensure that survey standards are met. In addition, CMS's guidance informs the standards that accrediting organizations use during surveys to monitor accredited providers' implementation of the advance directive requirement, because these standards must be approved by CMS as meeting or exceeding the Medicare standards. Guidance in the operations manual describes to state survey agencies and covered providers the activities and documents that may be observed and reviewed during surveys. For example, the operations manual indicates that state survey agencies may review the provider's policies; examine an individual's medical records for documentation that required information was provided to the individual and of whether or not the individual has an advance directive; or conduct interviews with individual patients and provider staff to understand how the provider's policies are implemented. One stakeholder we spoke with that represented HHAs reported that the survey process described in the operations manual demonstrates the importance that CMS places on advance directives. Additionally, CMS issues memoranda available to state survey agencies and four of the six covered provider types—hospitals, nursing homes, HHAs, and hospices—that contain clarifications and new or revised guidance related to the advance directive requirement. For example, in September 2012, a CMS memorandum notified state survey agencies that CMS had updated its guidance regarding how survey agencies should assess nursing home compliance with the advance directive requirement and encouraged survey agencies to share the information with providers. Further, in October 2013, a CMS memorandum to state survey agencies clarified nursing homes' cardiopulmonary resuscitation (CPR) policies in the context of an individual's advance directive. According to the memorandum, nursing homes must provide CPR to all individuals in their care unless an individual's advance directive specifies otherwise, and may not establish or implement facility-wide "no CPR" policies.
The memorandum instructs state survey agencies to examine nursing home policies and individuals' medical records to ensure that no such policy has been established or implemented. A stakeholder that represented nursing home providers reported that the memoranda and updated survey guidance for nursing home providers clarified and reinforced CMS's expectations for nursing homes' policies related to advance directives, and demonstrated CMS's focus on providing oversight in this area. In addition to the guidance that CMS provides to MA plans—the fifth provider type—through the Medicare Managed Care Manual (chapter 4, "Benefits and Beneficiary Protections"), CMS provides MA plans with model documents, used to inform enrollees about advance directives, that demonstrate how plans are to implement the guidance in the manual. For example, the model document contains the exact wording that the plans must use to inform individuals enrolled in MA plans about their right to formulate an advance directive. MA plans are required to provide this document, called an Evidence of Coverage, to each individual at initial enrollment and each year thereafter. According to CMS officials, MA plans are not permitted to modify the language in the model document unless otherwise instructed by CMS. Officials also reported that CMS annually reviews the policies and procedures in the model document and, when necessary, updates them to ensure that they reflect current laws and CMS policies. CMS relies on states to provide guidance to Medicaid managed care plans—the sixth provider type—because states are responsible for administering contracts with these plans. CMS's activities to monitor covered providers' implementation of the advance directive requirement vary across the six covered provider types and include periodic surveys, contract reviews, and the collection of certain related data. CMS enters into agreements with state survey agencies to conduct most surveys. Specifically, under agreements with CMS, state survey agencies periodically survey four of the six provider types—hospitals, nursing homes, HHAs, and hospices. Survey frequencies for each provider type are determined by statute or CMS policy. For example, the frequency of standard surveys for nursing homes, HHAs, and, beginning in 2015, hospices is statutorily determined: such surveys must occur, on average, every year for nursing homes and every 3 years for HHAs and hospices. The frequency of hospital standard surveys is determined by CMS; these surveys should occur, on average, every 3 years. According to CMS officials, state survey agencies follow up with providers to correct deficiencies found during surveys, and may work with CMS to impose enforcement actions, such as civil monetary penalties and termination, on providers that do not correct deficiencies in a timely manner. Through state survey agencies, CMS retains data regarding deficiencies related to advance directives identified during surveys of hospitals, nursing homes, HHAs, and hospices. The data—which, according to CMS officials, the agency uses for enforcement actions—indicate that the rate of noncompliance with the advance directive requirement among these four covered provider types in 2012 and 2013 was less than 3 percent for the providers surveyed in each given year. For example, about 2 percent of the 14,161 nursing homes that were surveyed in 2013 had a deficiency related to the advance directive requirement.
Deficiencies related to the advance directive requirement that were identified during surveys of the four provider types included providers' failure to inform individuals about advance directives, including failure to provide individual patients with written information about the providers' policies regarding advance directives. Providers also failed to accurately document an individual's advance directive in the medical record. Surveyors based their findings on observations, medical record reviews, and interviews with provider staff, and noted that a provider's failure to ensure that individuals have an opportunity to formulate complete and accurate directives has the potential to cause harm to individuals who may receive treatment or have treatment withheld when their exact treatment preferences are not known. In addition, accrediting organizations, whose periodic surveys of providers include findings related to compliance with the advance directive requirement, may recommend to CMS whether accredited providers should maintain their certification. In addition to the survey process for hospitals, nursing homes, HHAs, and hospices, CMS reviews contracts from the two remaining covered provider types—MA plans and Medicaid managed care plans. Specifically, CMS reviews MA and Medicaid managed care plan contract provisions addressing compliance with applicable requirements, including the advance directive requirement. According to CMS officials, each MA plan must annually renew its contract with CMS indicating that it will comply with Medicare laws and regulations, which includes the advance directive requirement. Although the MA plan's contract application indicates that CMS may conduct monitoring activities, such as on-site visits to the plan's facilities to verify the plan's compliance with Medicare requirements, CMS does not currently conduct such activities related to the advance directive requirement. CMS officials told us that current audits of MA plans are focused on outcome-based measures, such as plans' coverage determinations, which would not indicate noncompliance with the advance directive requirement. For Medicaid managed care plans, CMS officials reported that CMS staff review contracts between the plan and individual states prior to implementation of a new plan contract or when revisions are made to an existing approved contract to ensure that the contract addresses provisions related to advance directives. CMS staff use a contract review checklist that includes the regulatory language related to the advance directive requirement when conducting their review. For example, staff are to indicate on the review checklist whether the contract under review requires that the plan maintain written policies and procedures on advance directives for all adult individuals receiving medical care by or through the plan. However, CMS does not currently have data on the extent to which Medicaid managed care plans' contracts address the advance directive requirement. Although CMS recently began electronically collecting contract review data, these data will indicate the extent to which plans' contracts address advance directive requirements, but not the extent to which plans' implementation complies with the contractual provisions. CMS does not conduct audits of Medicaid managed care plans to monitor implementation or identify noncompliance with contractual provisions.
CMS officials reported that this is because individual states are legally responsible for monitoring the Medicaid managed care plans with which they contract. Providers use various approaches to inform individuals about their right to have an advance directive, either as part of the admission process or the enrollment process, depending on the type of covered provider, according to CMS and stakeholder officials and the limited amount of information found in the literature about certain types of providers. For example, four of the provider types—hospitals, nursing homes, HHAs, and hospices—provide individuals with information about their right to formulate an advance directive during the admission process, according to interviews with stakeholder officials representing these provider types and 10 studies. In contrast, MA plans and Medicaid managed care plans provide written information to individuals on their right to formulate an advance directive during the enrollment process, according to stakeholders we interviewed. MA plans provide individuals enrolling in the plan with a model document developed by CMS—the Evidence of Coverage—to inform individuals about their right to have an advance directive, according to CMS officials. Medicaid managed care plans also provide written information at the time of enrollment, according to a stakeholder official that represented a Medicaid managed care plan. Providers' approaches also vary in the extent to which they each discuss information about advance directives with individuals, according to the literature and stakeholder officials. Specifically, hospital staff do not generally discuss advance directives with individuals during the admission process, as five stakeholders, including officials representing three provider types and individuals close to the end of life, and six studies noted. By contrast, for three other covered provider types (nursing homes, HHAs, and hospices), staff such as nurses, social workers, or case managers generally discuss advance directives during the admission process as part of one or more advance care planning discussions with individual patients, according to the findings from four studies and five stakeholders. However, information on the extent to which MA and Medicaid managed care plan providers discuss advance directives with individuals enrolled in the plans is more limited. Specifically, we did not identify any peer reviewed studies in our literature review that addressed MA and Medicaid managed care plans. In addition, a stakeholder representing both an MA plan and a Medicaid managed care plan told us that while the stakeholder's plans require their providers to discuss advance directives with individual enrollees during an initial health assessment and during annual physician visits, not all plans take a similar approach. Covered providers generally document whether an individual has an advance directive either in paper medical records or in an electronic health record, according to the limited information in the literature and the stakeholder officials we interviewed. Specifically, each stakeholder official we spoke with and six studies we reviewed found that all six covered provider types document individuals' advance directives using either paper medical records or electronic health records, and some of these sources indicated that providers may also keep a copy of individuals' directives in these records if such documents are available.
Providers face similar challenges in informing individuals about advance directives and documenting them, according to the literature and stakeholder officials. The challenges to informing individuals about advance directives include discomfort talking about end-of-life issues, confusion about which staff should have the discussions with individuals, and lack of staff time to have the discussions. Specifically, 18 studies found, and five stakeholder officials representing providers and seniors confirmed, that providers, individual patients, or both are often uncomfortable talking about end-of-life issues, in some cases even when an individual is close to the end of life. For example, 9 studies found that physicians often do not communicate poor prognoses to individuals, in part due to their discomfort in doing so, which can deprive individuals of the opportunity to understand that they are nearing the end of life and to discuss their advance care preferences in that context. Three studies also found confusion about which staff—nurses, social workers, or physicians—should have discussions to inform individuals about advance directives, although most individuals prefer to have these discussions with their physicians, according to 3 other studies. In addition, 10 studies found that physicians may either not have the time or do not spend the time discussing end-of-life issues with individual patients. Two stakeholders that represented managed care plans and seniors also noted the time constraints physicians face when discussing advance directives with individual patients. Other challenges that providers face in informing individuals about advance directives are associated with individuals' lack of understanding about advance directives and challenges informing certain demographic groups about them, according to the literature. Specifically, 13 studies found that many individuals lack an understanding about advance directives, assume incorrectly that these documents are expensive or require attorneys, or have difficulty understanding complex medical information included in some advance directive forms. Thirteen studies also identified challenges specific to Latinos or African Americans, such as language barriers, lack of trust in health care providers, or fear that advance directives may prevent them from getting the care they want to receive. In addition to the challenges with informing individuals about advance directives, providers face similar challenges documenting this information, such as errors in individuals' medical records and challenges related to access or updates to advance directives, according to the literature and stakeholder officials. Specifically, nine studies found errors in individuals' records related to advance directives, such as lack of documentation about advance directives that should have been included in the records—in one case despite the fact that individuals had recently discussed directives with providers. In addition, five studies reported challenges related to access to this information, such as challenges identifying where a copy of individuals' directives may be located or concerns that information about directives may not be transferred with individuals if they are moved from one provider to another; for example, from a nursing home to a hospital. Similar concerns about access to documents were reiterated by four stakeholders we interviewed that represented providers and seniors.
Two stakeholders representing nursing homes and managed care plans reported that providers face difficulties ascertaining from the documentation whether individuals had recently reviewed or updated their directives, or whether the directives in individuals' records were current. Providers may better address the challenges of informing individuals about advance directives and documenting them by using leading practices, according to the literature and stakeholders. Some of the leading practices for informing individuals about advance directives include patient education, materials tailored for specific groups, or an iterative advance care planning process. For example, one study using the Respecting Choices program—an advance care planning model developed by Gundersen Health System that includes multiple stages of care planning involving patients, providers, and communities—demonstrated that a provider using patient education and staff training efforts can increase the extent to which individuals understand and complete advance directives. In addition, 12 studies suggested that providers use materials designed for specific groups, such as videos for those with low literacy or information developed for those with specific medical conditions, that can help individuals better understand and communicate their preferences to providers. Eleven studies and four stakeholders that represented providers and individuals nearing the end of life also suggested using an iterative advance care planning process, such as a process that would start with community education about advance directives, continue with increasingly specific discussions with providers as an individual's health deteriorates, and culminate with the completion of increasingly specific documents, such as advance directives or medical orders, as an individual nears the end of life. Fifteen studies noted the importance of such planning when individuals are diagnosed with a major illness or impending loss of decision-making capacity so that the individuals can communicate their preferences to providers before they are unable to do so. Providers may also use leading practices to better address challenges to documenting information about advance directives in order to help ensure the accuracy and the accessibility of this information in an individual's medical record, according to the literature and stakeholder officials. For example, five studies found that using specific documentation methods, such as spreadsheets or electronic health record systems, can improve the accuracy—including quantity and quality—of the information about advance directives maintained in individuals' medical records. Six studies also suggested that providers adopt electronic health record systems that, in addition to indicating whether individuals have an advance directive, could contain copies of the directives to ensure that individuals' preferences are more easily accessible to providers and families, especially for individuals who may transfer from one provider to another. Many adults in the United States have advance directives. In 2013, about 47 percent of adults over the age of 40 had an advance directive, according to IOM's report Dying in America. In addition, an earlier nationally representative survey that included younger adults age 18 and older found that an estimated 26 percent of this population had an advance directive during the 2009 and 2010 time period.
The prevalence of individuals with advance directives varies by the type of provider serving them—hospitals, nursing homes, HHAs, and hospices—according to the literature. For three of these four provider types (hospices, nursing homes, and HHAs), a 2011 National Center for Health Statistics report found that 88 percent of discharged hospice patients in 2007 had advance directives, compared to 65 percent of nursing home patients in 2004, and 28 percent of HHA patients in 2007. (See fig. 1.) Among the four covered provider types for which information was available, our analysis of 12 studies found that hospital patients were least likely to have advance directives and that hospice patients, who are by definition close to the end of life, were the most likely to have advance directives as compared with nursing home and HHA patients. We did not find peer reviewed studies in our literature review on the prevalence of those with advance directives among individuals enrolled in MA and Medicaid managed care plans, although these individuals may also be served by the other four covered provider types. In addition to variations by provider type, the prevalence of advance directives also varies among individuals based on certain characteristics, such as medical conditions, including chronic and life-threatening diseases, according to the literature. For example, in 2010, individuals 18 years of age and older with chronic diseases were more likely than those without such diseases to have advance directives, with an estimated 33 percent and 22 percent prevalence, respectively. (See fig. 2.) A total of 20 studies found that individuals with certain medical conditions were more likely to have advance directives than healthier individuals. In general, certain medical conditions—such as diabetes, malignancies, renal dysfunction, dementia, or declining health—increased the likelihood that individuals had advance directives, according to the literature. The prevalence of advance directives also varies by age, race, income, education, and gender, according to the literature. Older individuals were more likely to have advance directives than younger individuals. For example, a 2009 and 2010 nationally representative survey found that an estimated 51 percent of individuals 65 years of age and older had advance directives, while among individuals 18 to 34 years old, an estimated 12 percent had advance directives. The survey also indicated that an estimated 31 percent of whites compared to an estimated 17 percent of African Americans or Latinos had advance directives. In addition, this study found that an estimated 32 percent of individuals with incomes of $75,000 or more had advance directives in comparison to an estimated 21 percent of those with incomes under $25,000. Similarly, prevalence among individuals with post-graduate educations compared to those who had not completed high school was an estimated 38 percent and an estimated 14 percent, respectively, according to the study. Women were also more likely to have advance directives than men, an estimated 28 percent versus 25 percent, according to the study. In addition to this study, 36 studies found variations in the prevalence of advance directives by age, race, income, education, or gender. The prevalence of advance directives has been increasing over time, according to the literature and CMS data.
For example, prevalence within the older population has increased over time, according to a study examining the prevalence of advance directives among those 60 years of age or older who died between 2000 and 2010. This study found that the proportion of individuals 60 years of age and older who died with an advance directive increased from an estimated 47 percent in 2000 to 72 percent in 2010. Available information on nursing home residents also shows an increase over time. Our analysis of CMS data found that the proportion of nursing home patients with advance directives increased between 2004 and 2014: based on data from nursing homes surveyed during that time, the average percentage of nursing home patients who had an advance directive rose from 46 percent in 2004 to 55 percent in 2014. (See fig. 3.) However, the percentage of nursing home residents with advance directives fluctuated over this period. Eight additional studies found increases in the prevalence of those with advance directives over time for a specific group, such as those from one state or type of provider. Factors that may have contributed to the increasing prevalence of individuals with advance directives over time, according to the literature, include community education efforts and provider staff training. We requested comments on a draft of this product from HHS. HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the contact named above, James C. Musselwhite Jr., Assistant Director; George Bogart; Kye Briesath; Leia Dickerson; Julianne Flowers; Jennel Lockley; Drew Long; and Vikki Porter made key contributions to this report.
Advance directives, such as living wills or health care powers of attorney, specify—consistent with applicable state law—how individuals want medical decisions to be made for them should they become unable to communicate their wishes. Many individuals receive medical care from Medicare and Medicaid funded providers during the last 6 months of life, and may benefit from having advance directives that specify treatment preferences. According to IOM, advance directives are most effective when part of a comprehensive approach to end-of-life care called advance care planning. GAO was asked to review information related to advance directives. This report examines (1) how CMS oversees providers' implementation of the PSDA requirement; (2) what is known about the approaches providers use and challenges they face to inform individuals about advance directives; and (3) what is known about the prevalence of advance directives and how it varies across provider types and individuals' demographic characteristics. To do this work, GAO reviewed CMS documents and survey data reported by state survey agencies into CMS's Certification and Survey Provider Enhanced Reporting system about covered providers' implementation of the PSDA requirement. GAO also conducted a literature review of peer reviewed articles and federal government reports. In addition, GAO interviewed CMS officials and stakeholders representing providers and individuals likely to benefit from advance directives. The Centers for Medicare & Medicaid Services (CMS) oversees providers' implementation of the Patient Self Determination Act (PSDA) requirement to maintain written policies and procedures to inform individuals about advance directives and to document information about individuals' advance directives in the medical record; CMS does so by providing guidance to and monitoring covered providers. Covered providers include hospitals, nursing homes, home health agencies (HHAs), hospices, and Medicare Advantage (MA) plans that receive Medicare and Medicaid payments. CMS, an agency within the Department of Health and Human Services (HHS), provides operations manuals, memoranda, and model documents to these providers to inform them about the advance directive requirement and describe how the agency will monitor providers' implementation. Because individual states are responsible for administering contracts with and providing guidance to Medicaid managed care plans, also specified in the PSDA, CMS ensures that the contracts include the advance directive requirement, but does not issue guidance to these plans. To monitor providers' implementation of the advance directive requirement, CMS primarily relies on other entities. CMS enters into agreements with state survey agencies to periodically survey and report data, which CMS collects, on deficiencies related to advance directives for hospitals, nursing homes, HHAs, and hospices. CMS also relies on accrediting organizations to survey providers that participate in the Medicare program through accreditation and subsequently make recommendations to CMS regarding providers' participation in Medicare. In addition, CMS reported reviewing MA and Medicaid managed care plans' contracts to determine that they include the advance directive requirement. Approaches used to inform individuals about advance directives vary by type of provider, but providers face similar challenges, according to stakeholders interviewed and literature GAO reviewed.
For example, hospitals, nursing homes, HHAs, and hospices inform individuals about advance directives during the admission process, while MA plans and Medicaid managed care plans inform individuals during enrollment. Challenges in informing individuals about advance directives include discomfort talking about end-of-life issues and lack of staff time for such discussions. Providers may address these challenges by using leading practices, such as patient education or population-specific materials. Many adults have advance directives, but estimated prevalence varies by provider type and an individual's demographic characteristics. In 2013, 47 percent of adults over the age of 40 had an advance directive, according to the Institute of Medicine (IOM) report, Dying in America. However, the prevalence of individuals with advance directives varies by type of provider and demographic characteristic. For example, a National Center for Health Statistics report found that 88 percent of discharged hospice patients had advance directives in 2007 compared to 65 percent of nursing home patients in 2004. Studies GAO reviewed found that individuals who were older, white, more highly educated, higher income, or female were more likely than others to have advance directives. HHS provided technical comments on a draft of this report, which were incorporated as appropriate.
Several circumstances make it difficult to make the best possible determination of FDLP's financial performance at this time. First, because FDLP is a relatively new program, it has a short history of repayment activity and little historical data are available. Second, because Education lacks historical FDLP data, Education relies heavily on data from the guaranteed loan program to develop estimates for most key cash flow assumptions in its FDLP cash flow model, which is used to estimate the subsidy cost of the program. While this is appropriate in the interim, guaranteed loans may perform differently from FDLP loans and, therefore, Education ultimately will need to use FDLP data. Education plans to phase out the use of guaranteed loan data as FDLP data become available. Our ability to answer some of your specific questions was limited because the needed data were not readily available. For example, Education's cash flow model and financial systems do not readily provide comparable information on estimated and actual defaults. Also, Education did not have readily available performance data by "cohort," which refers to all the loans of a particular loan type for which a subsidy appropriation is provided for a given fiscal year. For this reason, Education was not able to give us a comparison of estimated to actual cash flows at the cohort level during the time frames of this review. Comparisons of estimates and actuals at the cohort level are key to identifying the causes of disparities, which, in turn, is key to improving future subsidy cost estimates. Furthermore, there is little information on the effects of loan consolidations on FDLP subsidy costs. This is significant because Consolidation loan volume has been increasing rapidly. Education is taking or plans to take steps to address these limitations in the future. Because Education has not documented its previous sensitivity analyses, we asked Education to perform a limited sensitivity analysis of FDLP subsidy costs and found that the subsidy calculation is most sensitive to changes in interest rates. Specifically, the interest rates involved were the discount rate—generally the rate at which Education borrows money from the Department of the Treasury to finance its loans—and the borrower rate. The difference, or spread, between the borrower rate and the discount rate determines the magnitude of the effect that a change in interest rates has on the FDLP subsidy cost. Because these rates cannot be readily predicted from year to year, estimating the subsidy cost of FDLP is very difficult. Therefore, wide fluctuations in subsidy costs can be expected depending on the extent of interest rate changes. Because FDLP is a direct loan program that allows its borrowers to defer payment until after the borrower leaves school, several years typically pass between the time a borrower receives a loan and the time repayments begin. This deferment of principal and interest payments by borrowers has contributed to FDLP's negative cash flow, which totaled about $2 billion as of September 30, 1999. Although more cash will be received by Education when more borrowers enter repayment, Education is unable to determine when FDLP will have a positive cash flow primarily because of uncertainty related to the key cash flow assumptions. Further, because Education lacks key data on loan consolidations and default data are not readily available, Education's ability to predict future cash flows is limited.
This further impedes Education's ability to estimate when and how much of this negative cash flow will be recovered. We are making several recommendations to address the limitations identified during our review. Education is the primary agency overseeing federal investments in support of educational programs for U.S. citizens and eligible noncitizens. In fiscal year 1999, more than 8.1 million students received over $53 billion in federal student financial aid, including loans and grants, through programs administered by Education. FDLP offers four different loan types. The Federal Direct Stafford Subsidized/Ford Loan Program (Stafford Subsidized), available only to students with a demonstrated financial need, provides loans to undergraduate, graduate, and professional students. Interest is subsidized by the federal government while the student is in school and during grace or deferment periods. A loan origination fee is charged to obtain these loans. The borrower rate is variable and based on the 91-day Treasury bill rate plus an add-on amount that has ranged from 1.7 percent to 3.1 percent, with a maximum borrower rate of 8.25 percent. Education reported that the outstanding balance of this loan type was $19.7 billion as of September 30, 1999. The Federal Direct Stafford Unsubsidized/Ford Loan Program (Stafford Unsubsidized) provides loans to undergraduate, graduate, and professional students regardless of financial need. The borrower is responsible for interest that accrues during any period. Interest that accrues while the student is in school or during grace or deferment periods is added to the loan balance. A loan origination fee is charged to obtain these loans. The borrower rates on these loans are the same as the borrower rates on Stafford Subsidized loans. Education reported that the outstanding balance of this loan type was $11.9 billion as of September 30, 1999. The Federal Direct PLUS Program provides loans to parents of dependent students. The borrower is responsible for interest that accrues during any period. A loan origination fee is charged to obtain these loans. The borrower rate is variable and currently based on the 91-day Treasury bill rate plus an add-on amount of 3.1 percent, with a maximum borrower rate of 9 percent. Education reported that the outstanding balance of this loan type was $2.8 billion as of September 30, 1999. The Federal Direct Consolidation Loan Program (Consolidation loans) allows borrowers to combine their loans from different federal student loan programs into a single loan with one monthly payment. After the promissory note has been signed for the new Consolidation loan, the underlying loan(s) are paid off. The Higher Education Act Amendments of 1998 (P.L. 105-244) provided that for all Direct Consolidation Loan applications received from February 1, 1999, through June 30, 2003, the borrower rate is a fixed rate for the life of the loan. The rate is the lesser of the weighted average of the interest rates on the loans being consolidated or 8.25 percent, the current maximum allowable rate. Borrower rates on previously disbursed Consolidation loans are variable rates, similar to the other FDLP loan types. Education reported that the outstanding balance of this loan type was $12.1 billion as of September 30, 1999. Borrowers most commonly repay their FDLP loans using one of four repayment plans: standard, extended, graduated, or income contingent. These four options differ by the amount of time allowed to repay loans and the flexibility of the repayment schedule.
With standard repayment, borrowers make fixed payments of at least $50 a month for up to 10 years. With extended repayment, they make fixed payments of at least $50 a month over a period generally ranging from 12 to 30 years, depending on the total amount borrowed. With graduated repayment, borrowers' payments start out low and then increase, usually every 2 years; the repayment period generally ranges from 12 to 30 years, depending on the total amount borrowed. The income contingent repayment plan is the most flexible, allowing borrowers to make monthly payments that are based on adjusted gross income, family size, and the total amount of their outstanding loans. The Federal Credit Reform Act of 1990 (FCRA) was enacted to require agencies to more accurately measure the government's cost of federal loan programs and to permit better cost comparisons both among credit programs and between credit and noncredit programs. Prior to the implementation of FCRA, credit programs were reported in the budget on a cash basis. Thus, loan guarantees appeared to be free in the budget year, while direct loans appeared to be as expensive as grants. As a result, costs were distorted and credit programs could not be compared meaningfully with other programs and with each other. FCRA and the related accounting standards and budgetary guidance are together known as credit reform. As part of implementing credit reform, agencies are required to estimate the net cost of extending credit over the life of a loan, generally referred to as the subsidy cost, based on the present value of estimated net cash flows, excluding administrative costs. Budgeting guidance requires agencies to maintain supporting documentation for subsidy cost estimates. Further, auditing standards related to estimates indicate that agency management is responsible for accumulating sufficient relevant and reliable data on which to base the estimated cash flows. SFFAS No. 2 states that each credit program should use a systematic methodology to project expected cash flows into the future. To accomplish this task, agencies develop cash flow models. A cash flow model is a computer program that generally uses historical information and various assumptions, including defaults, prepayments, recoveries, and the timing of these events, to estimate future loan performance. Those assumptions that have the greatest impact on the estimated subsidy cost are often referred to as the key assumptions. These cash flow models, which should be based on sound economic, financial, and statistical theory, identify key factors that affect loan performance. Agencies use this information to make more informed predictions of future credit performance. Generally, the data used for these estimates are updated or reestimated after the fiscal year end to reflect any changes in actual loan performance since the estimates were prepared, as well as any expected changes in assumptions related to future loan performance. Appendix I provides a detailed discussion of estimating credit program costs under credit reform. The glossary at the end of this report provides a list of commonly used terms related to credit program budgeting and accounting. How much financing has been provided to Education for the direct loan program through borrowing from Treasury and appropriations received?
Amounts borrowed from Treasury are amounts that Education expects to be repaid by borrowers in the future. Amounts appropriated are amounts that Education has estimated it will lose as a cost of extending credit through FDLP. For fiscal years 1995 through 1999, Education's FDLP has borrowed $59.4 billion from Treasury to finance the program, repaying $7.8 billion of that amount. Table 1 provides an annual accounting of this information. Over the same period, considering reestimates, Education has received $688 million in appropriations (see table 2). Education finances FDLP through a combination of appropriations and borrowing from Treasury as required by FCRA. For loan programs subject to the act, agencies are required to estimate the cost of extending or guaranteeing credit, called the subsidy cost. The subsidy cost is the present value of disbursements from the government (loan disbursements and other payments) minus estimated payments to the government (repayments of principal, interest receipts, fees, and other recoveries or payments) over the life of the loan. The subsidy cost is generally the amount that Education estimates will not be repaid by borrowers. This estimate is financed with appropriated funds and is generally "reestimated" or updated annually. The portion of Education's direct loans that Education predicts will ultimately be repaid by borrowers is financed by borrowing from Treasury and is not considered a cost to the government because it is expected to be returned to the government in future years. If the present value of the estimated cash outflows from the government exceeds the present value of the estimated cash inflows, there is a positive subsidy, or cost to the government. However, if the present value of the estimated cash inflows to the government exceeds the present value of the estimated cash outflows, there is a negative subsidy. When there is a negative subsidy, a higher level of borrowing from Treasury occurs than when there is a positive subsidy because Education must borrow an amount greater than the dollar amount of loans disbursed. This additional borrowing occurs because Education does not receive any appropriated funds and therefore experiences a temporary shortfall: in addition to disbursing the full loan amount, Education pays the negative subsidy to its program account. This additional amount of borrowing, as well as the amount of loans disbursed, is expected to be repaid by the borrower, primarily through principal and interest payments, over the life of the loan. For example, if a hypothetical FDLP loan of $100 had a negative subsidy of $5, the amount of borrowing required would be $5 more than the face value of the loan. Accordingly, Education would borrow a total of $105 from Treasury. If, however, FDLP had a positive subsidy, required borrowing from Treasury would be less. For example, if a hypothetical FDLP loan of $100 had a positive subsidy cost of $5, the subsidy cost of $5 would be financed with appropriated funds, and the remaining $95 would be financed by Treasury borrowings (the amount Education expects to be repaid). Additionally, Education is required to periodically update or "reestimate" loan program costs for differences between (1) estimated loan performance and related costs and (2) actual program costs recorded in the accounting records, as well as for expected changes in future economic performance.
When program costs are reestimated for loans disbursed in prior years, the revised estimate can either increase or decrease the original subsidy estimate. These reestimates can also affect the level of borrowing and appropriations. Generally, downward reestimates are considered offsetting receipts, which are netted against the subsequent year's appropriations, and upward reestimates require additional appropriations. Table 2 shows FDLP's original subsidy estimates and reestimates for the 1995 through 1999 cohorts. For example, the 1997 cohort column in table 2 shows that this group of loans was originally estimated to have a positive subsidy of $336 million. Since then, Education has reestimated the cost of the 1997 cohort twice, increasing its cost by $80 million in fiscal year 1998 and decreasing its cost by $69 million in fiscal year 1999. Therefore, as of fiscal year 1999, the estimated net cost of the 1997 cohort was a positive subsidy of $347 million. In contrast, the fiscal year 1999 column shows that the 1999 cohort was the first cohort originally estimated to have a negative subsidy. For fiscal years 1995 through 1999, Education's FDLP estimates and reestimates for all cohorts show a total positive subsidy of $688 million, and therefore Education has received net appropriations totaling this amount. Because FDLP is a relatively new program, there is limited historical data to predict future borrower behavior. Additionally, the future estimated cost of this program, as explained in questions 3 and 7, is especially sensitive to changes in interest rates. Therefore, fluctuations such as those shown in table 2 are not unexpected and are likely to continue in the future. Have cash inflows (excluding borrowings from Treasury and borrower principal repayments) exceeded cash outflows (excluding repayments to Treasury and loan disbursements)? Loan origination fees and interest receipts from borrowers are the primary sources of cash inflows for FDLP. Net interest payments to Treasury on borrowed funds to finance the loans disbursed are the primary source of cash outflows. As shown in table 3, for fiscal years 1995 through 1999, total cash outflows exceeded total cash inflows by about $2 billion because the interest receipts from borrowers and origination fees were less than the amount of interest Education had to pay to Treasury. Inflows exceeded outflows only in fiscal years 1995 and 1996. The $2 billion negative cash flow for FDLP is at least partially due to a timing difference in the cash flows. Education is required to make interest payments to Treasury, even if the borrower is not currently making interest payments to Education. As of September 30, 1999, 46 percent of the loan portfolio was in a grace or deferment status. As a result, Education subsidizes or generally accrues this interest. However, Education must pay interest on borrowings from Treasury even though it does not expect to receive interest payments from borrowers until sometime in the future. This accrued interest can be substantial—$2.3 billion as of September 30, 1999. Education is unable to determine when FDLP will have a positive cash flow primarily because of uncertainty related to the key cash flow assumptions. As discussed in question 3, the estimated cost of FDLP is sensitive to changes in interest rates and other factors that will affect the program's cash flows.
In addition, reductions in origination fees, such as the reduction that occurred in fiscal year 1999 (discussed in question 5), will also affect whether FDLP has an overall negative or positive cash flow in the future. Further, cash flows for FDLP can be affected by changes in macroeconomic conditions, such as unemployment rates and inflation. In Education's calculation of its subsidy cost estimates for the Federal Direct Loan Program, what are the key cash flow assumptions, how sensitive are Education's subsidy costs to changes in these assumptions, and what data are used to support these assumptions? An effective approach to identifying key cash flow assumptions is to perform a detailed analysis of all cash flow assumptions—called a sensitivity analysis—in order to determine which assumptions have the greatest impact on the estimated cost of FDLP. Education told us that it performs informal analyses of the cash flow assumptions that result in about 90 percent of the change in subsidy costs each year. However, Education did not provide any supporting documentation for this analysis. Further, Education told us that it has not performed a sensitivity analysis of all cash flow assumptions in its model. As this type of analysis would be extremely time-consuming, we requested that Education perform and document a limited sensitivity analysis as a basis for answering this question. Based on this limited sensitivity analysis, there were seven key cash flow assumptions that, when adjusted, had a significant impact on the estimated cost of the loan program. These assumptions were discount rates, borrower rates, loan maturity, collections on previously defaulted loans, defaults, origination fees, and when repayments begin. The analysis showed that FDLP's subsidy cost was most sensitive to changes in the discount rate and borrower rate. While some of the data supporting these key assumptions are provided by other agencies or specified by law, Education supported other assumptions by using a combination of guaranteed loan program and economic data, a reasonable approach since the direct loan program is relatively new and limited historical data are available. To ensure that all key assumptions have been identified, and to determine how sensitive Education's subsidy cost estimates are to changes in key assumptions, Education would have to conduct a thorough sensitivity analysis. According to Technical Release 3, one approach to performing such an analysis is to individually adjust each assumption by a fixed proportion (e.g., increasing and decreasing it by 10 percent) and run the revised cash flows through the OMB Credit Subsidy Calculator to determine each assumption's effect on the estimated subsidy cost. Timing assumptions for when defaults and collections occur and when repayments begin should also be adjusted in a systematic manner. Those assumptions that, when adjusted, cause the largest change in the subsidy cost are determined to be the key cash flow assumptions. Education budget staff told us that they perform analyses of the cash flow assumptions that result in about 90 percent of the change in subsidy cost each year when they prepare budget estimates and reestimates. However, they do not maintain documentation of these analyses. Education has also done sensitivity analysis on the larger guaranteed loan program, which Education uses to help identify the key assumptions for the FDLP.
However, because Education’s cash flow model has a large number of assumptions, there is no assurance that all key assumptions have been identified through the informal analyses that Education performed for FDLP. Because a formal analysis of all cash flow assumptions would take a significant amount of time, we asked Education to perform and document a limited sensitivity analysis of the assumptions it believed to be key, and we added two other assumptions, related to the largest loan types, risk groups, and repayment options, that we believed might also be key. Based on the results of the limited sensitivity analysis, we determined that seven of the nine cash flow assumptions tested were key. These assumptions follow.
Discount rate − the rate used to calculate the present value of the expected future cash flows of the loan program and the interest portion of the subsidy cost. This rate is generally the same rate at which agencies borrow funds from Treasury.
Borrower rate − the interest rate borrowers pay Education for their loans. This rate is based on the 91-day Treasury bill rate plus various add-on amounts that range from 1.7 percent to 3.1 percent, with a maximum borrower rate of 8.25 percent or 9.0 percent depending on loan type.
Loan maturity − the time it takes for a loan to be paid in full. Loan maturity varies depending on the loan amount and the repayment option selected by the borrower. Generally, borrowers have from 10 to 30 years to repay their loans.
Collection rate − the percentage of defaulted loan amounts subsequently recovered through Education’s collection process.
Default rate − the percentage of principal that will not be paid because of borrower defaults.
Origination fee − the fee borrowers pay to Education to obtain a loan.
Beginning repayment − the percentage of loans beginning to make principal and interest repayments each quarter.
As a result of the limited sensitivity analysis, the two additional assumptions that we requested be included in the analysis were identified as key assumptions—loan maturity and origination fees. Loan maturity is important because it sets the amount of time borrowers are expected to take to repay their loans and, accordingly, the number of years Education estimates that it will receive interest payments from borrowers. The origination fee assumption is important because it determines the amount of fee receipts Education will receive. There could also be other key assumptions that will not be identified until Education completes a thorough sensitivity analysis. Identification of key assumptions is important to ensure proper monitoring of those assumptions and to adjust future subsidy estimates for changes in assumptions. Tables 4 and 5 summarize the results of the sensitivity analysis for seven of the nine cash flow assumptions tested, which entailed adjusting each assumption by a set amount to determine the impact on the subsidy cost. For the borrower and discount rates, loan maturity, loan origination fee, and default and collection rates, this adjustment involved increasing and decreasing the values currently in the cash flow model by 10 percent. For the assumption related to timing—the beginning repayment assumption—the adjustment was an annual acceleration of 5 percent in the amount of loans beginning repayment during the first 5 years of the loan term. While the tables show the impact of decreasing the assumptions, similar results were obtained by increasing the assumptions.
Because changes in two of the nine cash flow assumptions tested had very little impact on the overall subsidy cost, they were not determined to be key and were excluded from the table. Table 4 presents the results of the analysis in terms of the percentage change in the subsidy cost of each loan profile, which encompasses the type of loan, the type of school the student attends, and in some cases the year of schooling for the student and the repayment option selected. Generally, the higher the percentage, regardless of whether it was positive or negative, the more sensitive the subsidy cost was to changes in this assumption. The loan profiles are as follows.
Loan Profile 1 − Represents loans to freshman and sophomore students attending 4-year schools who have obtained Stafford Subsidized loans and chose the standard repayment option.
Loan Profile 2 − Represents loans to junior and senior students attending 4-year schools who have obtained Stafford Unsubsidized loans and chose the standard repayment option.
Loan Profile 3 − Represents loans to junior and senior students attending 4-year schools who have obtained Stafford Unsubsidized loans and chose the graduated repayment option.
Loan Profile 4 − Represents PLUS loans to parents of freshman and sophomore students attending 4-year schools who chose the standard repayment option.
Loan Profile 5 − Represents Consolidation loans to borrowers who chose the extended repayment option.
Loan Profile 6 − Represents Consolidation loans to borrowers who chose the income contingent repayment option.
Table 5 presents the estimated dollar impact on the subsidy cost of each loan profile for the fiscal years 1995 through 1999 cohorts based on the results of the sensitivity analysis. These loan profiles represent $16.7 billion of FDLP loans disbursed during that time. Based on the results of the analysis in tables 4 and 5, the estimated cost of FDLP was clearly most sensitive to changes in the discount rate and the borrower rate. Loan maturity also showed a relatively high level of sensitivity for all six loan profile costs. Tables 4 and 5 further demonstrate that the impact of changing these assumptions differs among loan profiles. For example, the subsidy costs of all six loan profiles showed a large degree of sensitivity to changes in the discount rate and the borrower rate, indicating that changes in these assumptions would significantly affect the estimated cost of FDLP, with the largest effect on a percentage basis for loan profile 5−Consolidation loans with the extended repayment option. This would likely be the case because these loans begin repayment in the first year and generally have longer repayment periods, thus magnifying the impact of interest rate changes. It is especially important to monitor assumptions displaying this high level of sensitivity because even a small change in them can have a significant impact on the estimated cost of the loan program. Table 6 summarizes the sources of data Education used to support the seven key cash flow assumptions identified in the sensitivity analysis. As shown in table 6, for two of the seven key cash flow assumptions, data sources are provided by other agencies. Specifically, the borrower rate and discount rate are generally provided by OMB and updated based on actual Treasury interest rates, or set by the 91-day Treasury bill rate from the last auction in May conducted by Treasury. These rates, the most significant of the key assumptions, are determined externally and are outside of Education’s control.
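The greater rate sensitivity of the longer-maturity profiles in tables 4 and 5 follows directly from present-value arithmetic. As a stylized illustration (not drawn from Education’s model), a single payment F due in year t has present value

\[
PV = \frac{F}{(1+d)^{t}}, \qquad
\frac{\partial PV}{\partial d} = -\,\frac{t\,F}{(1+d)^{t+1}},
\]

so the effect of a given change in the discount rate d grows roughly in proportion to t. Profiles whose cash flows extend over longer repayment periods, such as Consolidation loans under the extended repayment option, are therefore the most sensitive to rate changes.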
For most of the key cash flow assumptions in our analysis, Education used FFELP data because they were the best available data. SFFAS No. 2 states that agencies should use the historical experience of the loan program when estimating future loan performance. However, since FDLP has only existed since 1994, and Education estimates that average loan maturities range from 9 to 27 years, Education lacks adequate historical data to estimate the future performance of the loan program. According to Technical Release 3, agencies may use the experience of other federal or private sector loan programs when estimating the cost of new loan programs that lack adequate historical data. These data, often referred to as proxy data, should be an interim step to gathering the appropriate historical data upon which to base future estimates of loan performance. Education officials told us that Education is currently accumulating the actual cash flow data for the direct loan program and plans to continue phasing out the use of proxy data in the future. Without performing a more thorough sensitivity analysis, Education may not identify all key assumptions in its FDLP cash flow model. Knowledge of these key assumptions would provide management with the ability to more efficiently monitor the economic trends and cash flow assumptions that most affect the loan program’s financial performance and, accordingly, to prepare reasonable estimates of the program’s cost. While some of the changes in assumptions−particularly those related to interest rates−occur outside Education’s control, understanding the impact that changes in assumptions have on program costs would also provide management with a tool to help predict the impact of certain policy changes on the cost of the program. How closely do Education’s subsidy cost estimates and their underlying assumptions compare to actual loan performance, and to what extent does Education track differences between its subsidy cost estimates and actual loan performance for each loan cohort? Prior to this request, Education had not done a formal, documented analysis comparing estimated subsidy costs to actual loan performance for FDLP. Typically, such an analysis would entail comparing estimated cash flows included in the cash flow model to actual cash flows recorded in the agency’s financial systems. However, as discussed below, actual cash flow data from Education’s financial systems were not totally comparable to the data used in the cash flow model. While we were able to determine differences between estimated and actual cash flows for certain of the key assumptions, sufficient detailed information was not available to assess the reasons for most of the differences. Based on our analysis, some significant differences between the estimated and actual cash flows were noted. Although Education could not identify the specific reasons for some of these fluctuations, Education updates its assumptions for actual interest rates and loan performance when calculating reestimates. Education’s analysis of estimated and actual loan performance for FDLP, prepared at our request, compared estimated to actual cash flows related to five of the seven key cash flow assumptions identified in question 3—the borrower rate, loan maturity, the beginning repayment assumption, origination fees, and collections on defaulted loans. The comparison did not include any analysis of defaults because Education was unable to readily provide comparable data on either estimated or actual defaults.
Due to the nature of direct loan programs, Education’s FDLP cash flow model estimates the principal and interest payments that will be missed in a given fiscal year as a result of a default, while Education’s financial systems do not specifically recognize this “absence of cash flow.” Rather, the financial systems report defaults as entire loan amounts that are written off in a given fiscal year. Further, the overall analysis was limited by the fact that readily available data in the financial systems were not totally comparable to the data available in the cash flow model. Specifically, Education’s financial systems lacked readily available data at the cohort and loan profile level. Education therefore used fiscal year totals from the financial systems in its analysis. Appropriately performing an analysis of estimated to actual cash flows would require having readily available actual data as captured in the cash flow model−by cohort, key cash flow assumption, and loan profile. Although agencies are not required to compare estimated cash flows to actual cash flows on a cohort basis, such an approach would provide a more meaningful analysis than comparing fiscal year totals. According to Education, its approach is consistent with standard credit reform practice, in which costs for all loan cohorts are reestimated each year using the latest cash flow model and assumptions. However, Education’s budget officials have acknowledged that their analysis has certain limitations. For example, the difference between estimated and actual loan performance could be understated because of offsetting differences among different cohorts. Further, because Education’s analysis compared loan performance in total, variances in loan performance within individual cohorts may be obscured. These variances may indicate anomalies or trends that were not expected when the credit subsidy estimate was originally calculated. Because we were unable to analyze specific cohorts included in Education’s analysis, we were unable to determine whether, over time, estimated cash flows became more predictive of actual cash flows. Education officials told us that they are currently working to obtain a subsidiary ledger that will provide readily available data that are comparable to data in the cash flow model to allow for a comparison of estimated cash flows to actual cash flows on a cohort level. Even though cohort-level data were not available, we were able to analyze estimated and actual cash flows on an overall basis for certain key cash flow assumptions. As shown in table 7 and figures 1 through 4, some of Education’s estimated cash flows varied significantly from actual cash flows in total and by fiscal year. For three of the four key cash flows included in this comparison—interest receipts, origination fees, and default collections—actual cash flows were less than the amounts Education estimated. As shown in table 7, from fiscal years 1995 through 1999, the largest variance occurred between Education’s estimated and actual interest receipts. In total, Education received about $1.6 billion less than expected during this 5-year period. In contrast, Education received about $392 million more in principal receipts during the same period. Of the four key cash flows included in table 7, Education’s estimated origination fees had the smallest percentage variance. From fiscal years 1995 through 1999, actual origination fees were $87 million less than estimated, a variance of nearly 6 percent.
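A comparison of this kind reduces to straightforward variance arithmetic; the sketch below is illustrative only. The estimated and actual dollar levels are hypothetical (in millions), chosen so that the differences track the totals reported in table 7 for fiscal years 1995 through 1999.

```python
# Sketch of an estimated-to-actual variance comparison (hypothetical
# dollar levels, in millions; only the differences mirror table 7).

flows = {
    # name: (estimated, actual)
    "interest receipts":   (4_100.0, 2_500.0),  # ~$1.6 billion shortfall
    "principal receipts":  (1_800.0, 2_192.0),  # ~$392 million excess
    "origination fees":    (1_500.0, 1_413.0),  # ~$87 million, ~6 percent
    "default collections": (  300.0,   180.0),  # purely hypothetical
}

for name, (estimated, actual) in flows.items():
    variance = actual - estimated
    pct = 100.0 * variance / estimated
    print(f"{name}: variance ${variance:,.0f} million ({pct:+.1f}%)")
```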
In addition to the significant variances in total, some of Education’s estimated cash flows varied significantly from actual cash flows within individual fiscal years. For example, as shown in figures 1 and 4, significant variances occurred between the estimated and actual amounts of both interest receipts and collections on defaulted loans in fiscal years 1998 and 1999. In contrast, as shown in figure 3, with the exception of those in fiscal year 1995, Education’s estimates of origination fees were relatively close to the actual amounts received in all fiscal years. During fiscal years 1996 through 1999, differences between estimated and actual origination fees varied from about 1 percent to about 6 percent. According to Education, the main reason for the significant difference between estimated and actual interest receipts is the way its cash flow model handles loan consolidations. Typically, the original loans that are consolidated into a new loan would be treated as prepayments, and the estimated future cash flows from these underlying loans would be eliminated in the cash flow model. However, Education’s cash flow model does not adjust for prepayments. Education currently compensates for this by shortening the loan maturity assumption in an attempt to reflect the consolidation or prepayment of the original underlying loans. However, this approach may misstate the timing and characterization of cash flows reported annually. For example, when borrowers consolidate their loans, accrued interest on the original loans is added to the principal balance; for some loan types, this interest accrues while the borrower is in school and during other deferment periods. When borrowers repay their loans, some of the payment for this accrued interest is shown in the accounting records as payments of principal. According to Education, this helps explain the differences depicted in figure 2, where more principal was received than estimated for 4 of the 5 years included in our review. However, because Education was unable to provide the supporting data for this explanation, we were unable to verify whether the way consolidations are modeled is (1) truly the primary cause of the significant difference between the estimated and actual interest receipts from borrowers and (2) a key factor in the differences in principal receipts. Education’s budget staff told us that they are analyzing the method used to allocate borrower repayments between principal and interest, and they acknowledged that they are not totally comfortable with the current split. In addition, as discussed in question 6, Education has been working to improve its modeling of consolidations and plans to develop a different cash flow model that will allow Education to model and track cash flows at the individual loan level. Education’s budget staff told us that they believe this new cash flow model will address most of the problems they face in modeling consolidations. The largest difference between estimated and actual origination fees occurred in fiscal year 1995. According to an Education budget official, this difference was due to a reporting anomaly that caused Education to underreport the amount of actual origination fee data. Because Education was unable to provide any supporting documentation for this explanation, we were unable to verify whether this was the actual cause of the difference. Figure 4 shows that Education’s actual collections on defaulted loans were less than estimated collections.
However, because Education’s cash flow model estimates collections as a percentage of the amount of loans that default, and we did not receive any information on defaults, neither we nor Education was able to determine the underlying cause of the difference. According to SFFAS No. 2, for credit program managers, information on estimated default losses and related liabilities, when recognized promptly, can be an important tool in evaluating credit program performance. This information can help determine a credit program’s overall financial condition and identify its financing needs. Education prepared reestimates that accounted for, in aggregate, the differences between estimated and actual loan performance. However, because it lacked data captured by loan profile, cohort, and key assumption, Education was limited in its ability to identify the underlying causes of the amounts reestimated. See question 1 for a discussion of reestimates for fiscal years 1995 through 1999. Prior to this review, most of Education’s analysis of estimated to actual loan performance had been performed for FFELP, rather than for FDLP, because the guaranteed program is significantly larger than the direct loan program and historical data supporting the direct loan program estimates were limited. In using FFELP data, Education officials believed that the two loan programs’ performance would be similar. However, until 1993, FFELP offered borrowers only the standard repayment option, and currently only two of FFELP’s three repayment plans are similar to those offered under FDLP. Therefore, FFELP historical data may not prove very predictive for FDLP, which offers four primary repayment options. These repayment options would likely affect the timing and amount of cash flows; however, under existing guidance, Education may use FFELP data as a proxy for actual historical data to support some of the key cash flow assumptions for FDLP, as discussed in question 3. Without a separate analysis specific to FDLP, Education has limited information about how well its estimates for FDLP track with actual cash flows. Based on the information provided by Education for fiscal years 1995 through 1999, total actual cash inflows were less than estimated for three of the four key cash flows. Most notably, a significant difference exists between the estimated and actual amounts of interest receipts Education receives from borrowers. While Education officials provided an explanation, they did not provide supporting evidence to corroborate it. Even though differences between estimated and actual cash flows are expected, and the reestimation process allows Education an opportunity to adjust its estimates of future cash flows based on actual experience, better understanding the causes of significant variances would help Education more effectively estimate FDLP costs. However, without cohort, loan type, and cash flow assumption-level data, Education’s ability to assess whether its cash flow model is reasonably predicting borrower behavior is limited. As a result, Education lacks critical information necessary to update future cash flow models. In addition, Education’s inability to provide an analysis of defaults, one of the key cash flow assumptions, further impedes its ability to effectively predict future cash flows. What effect have reduced loan origination fees had on subsidy costs, and how has Education taken account of these changes in its subsidy cost estimates and reestimates?
In August 1999, Education reduced its origination fees for FDLP student loans from 4 percent to 3 percent. According to Education, this reduction was made to ensure that both FDLP and FFELP borrowers receive the same terms, conditions, and benefits. As a result of the fee reduction, Education’s subsidy cost estimates for the fiscal year 2001 cohort show an increase of approximately $93 million, or 23 percent, compared to what would have been estimated with the 4 percent fee. However, Education officials reported that they believed the overall effect would be cost neutral when considered in light of the higher subsidy costs associated with guaranteeing loans under FFELP. Since the fee reduction occurred late in the fiscal year, and thus applied to a limited amount of the fiscal year 1999 loan volume, Education did not take account of the fee reduction in its reestimates prepared in December 1999. However, in the President’s Budget for fiscal year 2001, subsidy estimates reflect the fee reduction, and Education plans to continue accounting for the change in origination fees, in accordance with applicable guidance for federal credit agencies. Education reduced the student loan origination fee from 4 percent to 3 percent for the Stafford Subsidized and Stafford Unsubsidized loan types in August 1999, which resulted in increased subsidy cost estimates for these loan types of approximately $55 million and $38 million, or 13 percent and 6 percent, respectively, for the fiscal year 2001 cohort. This amounted to a $93 million, or 23 percent, increase in the overall FDLP subsidy cost estimate for the fiscal year 2001 cohort, compared to what it would have been assuming the same loan volumes. The fee reduction did not apply to the PLUS loan type, whose origination fee remained at 4 percent, or to the Consolidation loan type, which does not charge borrowers an origination fee. Since the overall FDLP subsidy cost is a weighted average determined by the subsidy costs of the four FDLP loan types and their loan volumes, the increase in the overall FDLP subsidy cost depends on the loan amounts made for each loan type−known as the mix of loans. Table 8 summarizes the increases to FDLP subsidy cost estimates for each loan type due to the fee reduction, as well as the estimated mix of loans in fiscal year 2001. In their report, Cost of the 1999 Reduction in Direct Loan Fees, Education officials recognized that the fee reduction would increase the cost of FDLP. However, they believed that the increase would be offset by the ability to attract borrowers to FDLP who might otherwise obtain loans from the more costly FFELP, whose lenders, according to Education officials, were offering interest and fee discounts to attract borrowers. For the fiscal year 2001 cohorts, FDLP was estimated to produce a net inflow of about $3 per $100 in loans, versus FFELP’s estimated cost of about $11 per $100 in loan guarantees. The first time the fee reduction could have been taken into account was in Education’s subsidy cost estimates and reestimates prepared in December 1999. The fee reduction was factored into Education’s subsidy cost estimates of the fiscal year 2000 and 2001 cohorts prepared in December 1999 for the fiscal year 2001 President’s Budget. However, given that the fee reduction did not take effect until August 1999, Education did not factor it into its fiscal year 1999 reestimates because it applied to only a small amount of the fiscal year 1999 loan volume.
Education has stated that the fee reduction will be incorporated into the fiscal year 1999 cohort reestimate of subsidy costs prepared for the fiscal year 2002 President’s Budget. What effects have increased consolidations had on subsidy costs, and how has Education taken account of these changes in its subsidy cost estimates and reestimates? By obtaining an FDLP Consolidation loan, borrowers can combine their loans from different federal student loan programs into a single new loan and make one monthly payment. Consolidation loans accounted for 45 percent of new direct loan dollars disbursed in fiscal year 1999 and 26 percent of total FDLP direct loan dollars outstanding as of September 30, 1999. While it is clear that the volume of Consolidation loans is increasing, determining the effects of consolidations is difficult because many factors need to be considered, including loan maturity, prepayments, borrower rates, and discount rates. In order to properly consider all of these factors, an extensive loan-by-loan analysis of cash flows, applying scenarios with and without a consolidation, would be required. Since Education has not performed this type of detailed analysis, there is no way of knowing the impact of increased consolidations on subsidy costs for FDLP. Education estimates and reestimates the subsidy cost of Consolidation loans similarly to the other FDLP loan types. For the original underlying loans, a consolidation is in essence a loan prepayment. Education factors both the consolidation of the underlying loans and prepayments into FDLP subsidy cost estimates and reestimates by shortening the loan maturity assumption, which affects the time estimated for loan repayments to be received. While adjusting for consolidations and other prepayments through the maturity assumption may at least partially account for the cash flow changes over time, it is likely, as discussed in question 4, to result in misstatement and mischaracterization of the cash flows reported annually. Education officials told us that they recognize the limitations of their current approach and are working to develop an approach for analyzing the impacts of consolidations and other prepayments and determining how they can be appropriately factored into the cash flow model. What effect have declining interest rates had on subsidy costs, and how has Education taken account of these changes in its subsidy cost estimates and reestimates? Interest rates can affect subsidy costs directly, through borrower rates and discount rates, and indirectly, through borrower behavior. When the borrower rate is greater than the discount rate, Education will receive more interest from borrowers than it will pay in interest to Treasury to finance its loans. This has been the situation over the short life of FDLP. Because Education’s cash flow model is continually being updated and previous versions of the model with original assumptions were not fully maintained, it was not possible to determine the precise effect on subsidy costs of changes in interest rates versus changes in other cash flow assumptions. However, it is clear that the decline in interest rates from 1995 through 1999 has had a greater impact on discount rates than on borrower rates because of the borrower rate cap. This has resulted in an increased interest rate spread−the difference between the borrower rate and the discount rate−that has contributed to FDLP’s estimated negative subsidy for the fiscal year 1999 cohort.
Education accounts for interest rate changes in total in its annual reestimates. The two types of interest rates that are used to estimate the subsidy costs of FDLP are the borrower rate and discount rate. The borrower rate determines the amount of interest charged to borrowers. The borrower rate for the Stafford Subsidized, Stafford Unsubsidized, and PLUS loan types is variable—adjusted annually—and is based on the 91-day Treasury bill plus various add-on amounts that have ranged from 1.7 percent to 3.1 percent depending on the loan type and the borrower repayment status, with a maximum borrower rate of 8.25 percent or 9.0 percent depending on the loan type. The borrower rate for Consolidation loans made after February 1, 1999, is fixed and calculated based on the weighted average of the borrower rates of the loans that were consolidated, with a maximum allowable rate of 8.25 percent. As the borrower rate declines, Education receives less interest from the borrower and, all else being equal, the subsidy cost of FDLP increases. The discount rate is the interest rate used to calculate the present value of the estimated future cash flows and is generally equal to the rate at which interest is paid by Education on the amounts borrowed from or held by Treasury. The discount rate used for each cohort is fixed and determined by the interest rates prevailing during the period that the cohort’s loans were disbursed (normally such disbursement occurs within 2 years of loan origination for FDLP). Therefore, the discount rate can differ significantly among cohorts. This is important because cohorts with lower discount rates have a lower borrowing cost and, as a result, a lower subsidy cost compared to an otherwise identical cohort with a higher discount rate. As discussed more fully in question 8, since 1995, FDLP borrower rates have been greater than the discount rates, which has resulted in a positive interest rate spread, as shown in figure 5. However, the spread was not significant enough in the early years of the program to cover other subsidy costs, such as defaults and interest subsidies. In fiscal year 1999, the spread became large enough to result in an estimated negative subsidy. Beyond the direct effect of changes in interest rates on borrower and discount rates, interest rates can also affect borrower behavior, which, in turn, can affect defaults and prepayments and ultimately, subsidy costs. Given all these variables and the fact that interest rate fluctuations are nearly impossible to predict with any certainty, continued changes in FDLP subsidy costs should be expected. In order to calculate its subsidy cost estimates, Education uses OMB economic assumptions related to future interest rates for its borrower rate and discount rate assumptions. As part of the reestimate process, Education updates its borrower rate and discount rate assumptions based on actual interest rates and revised OMB economic assumptions. Education has not prepared separate interest rate reestimates, as required by OMB Circular A-11. However, Education told us that its method of reestimating FDLP subsidy costs has been accepted by OMB in the past. Specifically, Education accounted for changes in discount rates as part of its technical reestimate process. As a result, Education is unable to readily provide a historical analysis of the impact on subsidy costs due to changes in discount rates. 
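Preparing a separate interest rate reestimate would amount to recomputing the subsidy with actual interest rates while holding every other assumption at its originally estimated value, with the technical reestimate capturing the remainder. The following is a minimal sketch of that decomposition; the subsidy function and all values are hypothetical placeholders rather than Education’s cash flow model.

```python
# Sketch of separating an interest rate reestimate from a technical
# reestimate (hypothetical function and values). The interest rate
# reestimate swaps in actual rates while holding the original
# technical assumptions fixed; the technical reestimate is the rest.

def subsidy(rates, technical):
    # Placeholder for the cash flow model; returns cost per $100.
    return (100 * technical["net_default"]
            - 500 * (rates["borrower"] - rates["discount"]))

orig_rates = {"borrower": 0.0775, "discount": 0.060}
orig_tech = {"net_default": 0.05}
actual_rates = {"borrower": 0.0750, "discount": 0.052}
updated_tech = {"net_default": 0.045}

original = subsidy(orig_rates, orig_tech)
rate_only = subsidy(actual_rates, orig_tech)  # interest rate reestimate
full = subsidy(actual_rates, updated_tech)    # plus technical reestimate

print(f"interest rate reestimate: {rate_only - original:+.2f} per $100")
print(f"technical reestimate:     {full - rate_only:+.2f} per $100")
```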
Education staff have stated that, at the request of OMB, interest rate reestimates will be prepared as part of the reestimate process for the fiscal year 2002 President’s Budget. What are the future prospects for the continued negative subsidy for the Federal Direct Loan Program? Education’s most recent estimates of the fiscal year 1999 through 2001 cohorts indicate a negative subsidy cost. However, we cannot predict with any certainty the future prospects for a continued estimated negative subsidy for FDLP because it is a relatively new program with limited historical data and is very sensitive to fluctuations in interest rates and other factors. Based on the results of the sensitivity analysis, discussed in question 3, and the effects of interest rate fluctuations on subsidy costs, the primary factor determining whether FDLP has a negative or positive subsidy is the difference, or spread, between the borrower rate and the discount rate. When the borrower rate is greater than the discount rate, Education will receive more interest from borrowers than it will pay to Treasury for borrowing funds, which increases the likelihood of a negative subsidy. Conversely, when the borrower rate is less than the discount rate, Education will pay more in interest to Treasury than it will receive from borrowers, which decreases the likelihood of a negative subsidy. However, several other factors, including defaults and consolidations, could also affect whether the estimated subsidy continues to be negative. While some conditions are more favorable than others for a continued estimated negative subsidy, whether and for how long a negative subsidy remains in effect is unclear at this time and depends greatly on future interest rates. While other factors do come into play, interest rates are the key factor in assessing the future cost of FDLP. In the limited history of FDLP, large fluctuations in interest rates have not been experienced. Figure 6 shows the trend of the 91-day Treasury bill rate, which is used to determine borrower rates, over the past 20 years. The shaded area shows the history of FDLP, a period during which interest rates have been relatively stable. The difference between the borrower rate and the discount rate, or spread, is a key driver of subsidy costs. This spread can be analyzed to help determine the likelihood of a negative subsidy: the greater the spread, the more likely a negative subsidy will result. As discussed in question 7, the current estimated negative subsidy has primarily been a result of borrower rates being greater than discount rates, which results in Education receiving more interest from borrowers than it pays for funds borrowed from Treasury. This condition results in a positive spread. In the earlier years of FDLP, the spread did not offset other subsidy costs, such as defaults and interest subsidies. Fiscal year 1999 was the first year that the positive spread resulted in a negative subsidy. Education has estimated that this will continue through fiscal year 2001. If the discount rate were higher than the borrower rate, a negative subsidy would be unlikely because the spread would no longer be positive. This could easily occur because interest rates can fluctuate significantly over time and the discount rate for a cohort of loans is fixed and determined by interest rates prevailing during the cohort’s disbursement period, while borrower rates are variable for three of the four FDLP loan types and capped at a maximum allowable rate.
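A stylized calculation makes this mechanism concrete. The rates below are hypothetical, and the sketch simplifies by setting each cohort’s fixed discount rate equal to the prevailing rate at disbursement.

```python
# Stylized spread mechanics (hypothetical rates). A cohort's discount
# rate is fixed at disbursement; borrower rates on most FDLP loan
# types are variable -- the 91-day Treasury bill rate plus an add-on --
# but capped at a maximum allowable rate.

CAP = 0.0825      # maximum borrower rate for these loan types
ADD_ON = 0.023    # illustrative add-on within the 1.7-3.1 percent range

for prevailing in (0.045, 0.060, 0.085):   # rates when cohort disbursed
    discount_rate = prevailing             # fixed for the cohort (simplified)
    borrower_rate = min(prevailing + ADD_ON, CAP)
    spread = borrower_rate - discount_rate
    print(f"prevailing {prevailing:.1%}: borrower {borrower_rate:.3%}, "
          f"spread {spread:+.3%}")
# For the high-rate cohort the cap binds and the spread turns negative,
# making a negative subsidy unlikely for that cohort.
```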
Fiscal year 1995 was the first full fiscal year of existence for FDLP, and as of September 30, 1999, only about 54 percent of FDLP outstanding loan amounts were in repayment status because of the deferred payment terms offered under this program. As a result, there are limited historical data related to loan repayments, defaults, and consolidations, among other things, to use as a basis for predicting the future behavior of borrowers and the impact this will have on subsidy costs. While positive spreads increase the possibility of a continued negative subsidy for FDLP, other factors that increase costs or reduce cash inflows decrease the likelihood of a negative subsidy. For example, less favorable macroeconomic conditions, such as high unemployment, will likely result in increased defaults, and further reductions in loan origination fees would reduce cash inflows; either change increases the cost of the program and, thus, decreases the likelihood of a negative subsidy. The mix of loans among the four loan types could also have an impact on whether an overall negative subsidy continues, because not all loan types, which have separate subsidy cost estimates, have negative subsidies. For example, the Stafford Subsidized loan type subsidizes interest for students while they are in school. Because this is a significant cost of the Stafford Subsidized loan type, it may always have a positive subsidy cost regardless of the spread. Therefore, if the FDLP portfolio were to have a larger portion of Stafford Subsidized loans, this new mix of loans would reduce the likelihood of a negative subsidy. Further, as discussed in question 6, since the effect of consolidations on subsidy costs is unknown at this time and depends on future interest rates and the future performance of Consolidation loan borrowers, the increase in Consolidation loan volume could also have a significant impact on the future prospects for continued negative subsidies. What data did Education use to project an estimated savings of $4 for every $100 of direct student loans, as it reported in November 1999? In projecting an estimated savings of $4 for every $100 of direct student loans, Education netted the estimated negative subsidy and the administrative costs per $100 of loans. To do this, Education used the subsidy cost estimate reported in the budget for the fiscal year 2000 mid-session review for the subsidy cost portion of the total cost. This estimate is based on the types of data described in the response to question 3. To estimate the federal administrative cost portion, Education used contract expenditure data as well as data from its accounting system, OMB’s cost inflation factors, and historical data. These estimated savings pertained only to the fiscal year 2000 cohort. Education chose the fiscal year 2000 cohort because (1) congressional interest in the federal student loan programs was future-oriented and (2) the data available for estimating costs for the fiscal year 2000 cohort were more accurate and complete than the data available for earlier cohorts. However, the projected savings will not necessarily occur with other cohorts and may not continue to occur for the fiscal year 2000 cohort, depending on future interest rate fluctuations. Table 9 displays comparative cost estimates for the direct loan program for the fiscal year 2000 and fiscal year 2001 cohorts. These data show how changes in subsidy cost estimates can affect total cost estimates over a relatively short period.
The first column shows the initial administrative, subsidy, and total cost estimates reported in Education’s November 1999 cost study for the fiscal year 2000 cohort. As shown in the table, the total program cost could change from $4.11 in cost savings for every $100 in loans for the fiscal year 2000 cohort to 58 cents in costs for every $100 in loans for the fiscal year 2001 cohort. For the fiscal year 2001 cohort, the negative subsidy declined from $7.73 to $3.04. An Education official explained that the increase in the subsidy cost for the fiscal year 2001 cohort is due to changes in the spread between the borrower rate and the discount rate. Education officials also believe that the underlying assumptions used to project the administrative cost will not change significantly from one cohort to the next since they are not highly sensitive to changes in loan volume. In a similar cost study, issued in March 1999, Education’s Office of Inspector General concluded that in any given year, either FFELP’s or FDLP’s costs (e.g., subsidy and administrative) could be greater depending on how prevailing economic conditions affect subsidy costs. To develop and assign administrative costs to the direct loan program, Education used certain costs specified in OMB guidance as well as historical costs (such as costs in relevant contracts, salaries, rent, and travel). These costs include any expenditure associated with program support activities such as processing applications, serving customers, and disbursing and collecting loans. Table 10 shows the types of data Education used to estimate the administrative cost of the direct loan program. Education officials told us that some data on actual overhead costs were taken from Education’s cost accounting system (for example, salaries, expenses, and rent) and Office of Student Financial Assistance (OSFA) records. Education projected the administrative costs over the expected life of all the loans in the fiscal year 2000 cohort using predetermined inflation factors that existed in many of the contracts, OMB inflation factors, or a combination of historical data and OMB inflation factors. To develop the lifetime federal administrative cost estimates, Education first assigned costs to one of three categories—loan origination, servicing or account maintenance, or overhead. It then applied a three-step approach to calculate these costs, by cohort and type, for FDLP and FFELP. The approach included
developing the annual spending levels for the two loan programs based on volume-driven costs that depend on the number of loans or similar activity measures, such as the number of loan applications, and on nonvolume-driven costs, including personnel and fixed costs, such as rent and travel, that do not depend on the number of loans or similar activity measures;
assigning annual spending for each loan program; and
calculating the net present value of future administrative cost by cohort.
To assign administrative costs to each loan program, Education used designated funding sources, loan volume, and self-developed cost assumptions. Any costs involving both grants and loans, such as application processing, were allocated to the loan programs based on the proportion of loan recipients to grant recipients. Any cost for activities common to both loan programs was assigned based on annual projections of the number of borrowers in each program. Overhead costs were assigned to the two loan programs based on the source of funds.
For example, overhead expenses funded using section 458 of the Higher Education Act were assumed by Education to be used for FDLP even though some of these funds are used for FFELP costs. In using this assumption, Education believes that it is overstating the portion of overhead costs attributable to the direct loan program. Education projected administrative costs over 50 years—fiscal years 2000 through 2050. The 50-year period was used to reflect the maximum amount of time that all borrowers in a cohort could be in school, in a deferment or forbearance period, making loan repayments, or making payments on loan defaults. After consulting with Education, we concluded that performing this analysis over a shorter period—within the 9- to 27-year range Education uses in estimating subsidy costs—would not produce significantly different results. Education chose not to include several cost items in its calculation of administrative cost for the loan programs. These included costs for information system upgrades and improvements that Education believes could reduce future per-loan costs of delivering financial aid. Education officials did not include these costs because the specific components of these system upgrades had not been determined at the time the study was issued, and they believe the initial cost will be offset by future savings. However, the study did not include a cost analysis to support this belief. To the extent that these costs are not offset by future savings in other cost categories, Education’s administrative cost estimates will be understated. According to its budget proposal, Education plans to spend $48.5 million on information systems modernization for student aid programs in fiscal year 2001. Education excluded loan origination costs for consolidation loans because provisions of FCRA—section 502(5)(B)(iii)—include fees as a subsidy cost. Additionally, Education chose not to include noncontract costs associated with offices outside OSFA since these costs represented only $3.2 million of a total $600 million and included no more than 32 personnel. OSFA is currently developing a cost allocation model that will identify the total administrative cost for each of the major financial aid programs as well as the per-unit cost of delivering each loan or grant award. OSFA plans to use this model to identify areas where it can reduce these per-unit delivery costs and to assess how well it is accomplishing these reductions. Unlike the November 1999 cost study, the OSFA model will include noncontract costs from other offices within Education that have a role in delivering student financial aid. It will use data primarily from Education’s accounting system to determine total and per-unit costs. While this administrative cost information will be useful, changes in the subsidy costs from one cohort to the next are the primary drivers of total program costs. Subsidy costs, in turn, are primarily affected by interest rates and therefore cannot be predicted with any certainty. Developing reasonable estimates of subsidy costs for loan programs is a complex task. Numerous assumptions must be taken into account, and projections must be made for the estimated life of the loans, which could be up to 30 years.
Because FDLP’s subsidy costs are determined largely by interest rates—specifically the difference, or spread, between the borrower and discount rates—and since interest rate fluctuations cannot be predicted with any certainty, it is uncertain whether the current trend in negative subsidy costs for FDLP will continue. A change in interest rates, for example, can cause a negative subsidy to become positive. Even with improvements to Education’s cash flow model, it is important to recognize that estimates of subsidy costs are sensitive to interest rate volatility. That said, other factors also affect the subsidy cost of FDLP, such as origination fees paid by borrowers, defaults, subsequent collections on defaulted loans, and the timing of loan repayments. While Education’s estimates of origination fees were close to the actual amounts in the financial systems, the other key cash flows varied significantly. These cash flows are estimated primarily based on the history of how borrowers have performed under the conditions provided by each loan type within FDLP. Because the program is relatively new, Education has primarily used the history of FFELP as a basis for its FDLP estimates. While this is reasonable given that it is the best historical data available, it may not be very predictive of FDLP borrower behavior because FDLP offers different repayment options than those reflected in most of the historical data related to FFELP. Additionally, Education’s current model for estimating FDLP subsidy costs does not directly take into account certain key factors, such as prepayments and consolidations. This limitation hinders Education’s ability to determine the impacts of consolidation activities, which are increasing significantly. Also, Education was unable to provide actual data related to defaults, which are a key assumption. Finally, the fact that Education does not currently have the information readily available to make meaningful comparisons of estimated to actual cash flows and, most important, to identify the reasons for differences significantly impedes Education’s ability to refine future estimates based on actual results. Therefore, the reliability of Education’s subsidy cost estimates is negatively affected not only by the volatility of interest rates but also by limitations in the department’s ability to monitor and adjust for other key factors in its subsidy cost estimation process. Education is aware of these limitations and has efforts underway to begin to address them. To provide more meaningful cost estimation information that can be effectively used by Congress and program decisionmakers to make timely and well-informed judgments about FDLP, we recommend that the Secretary of the Department of Education charge the Budget Director, who has overall responsibility for preparing FDLP cost estimates, to take the following actions:
Develop and implement a method to acquire actual cash flow data on the same basis as the cash flow model−by loan profile, cohort, and key assumption−to facilitate a detailed comparison of estimated to actual cash flows.
Formalize and document the sensitivity analysis of assumptions included in the FDLP cash flow model to ensure that all key assumptions used in the cash flow model have been identified and to determine the sensitivity of FDLP subsidy costs to changes in these assumptions.
Develop and implement a method of routinely comparing FDLP’s estimated and actual cash flows, including identifying significant differences in total and by cohort, researching significant differences to determine the specific cause, determining any revisions needed in the cash flow model to ensure that it reasonably predicts future borrower behavior, and determining whether, over time, projected loan performance is reasonably predictive of actual loan performance.
Perform an analysis of the effects of consolidations on FDLP subsidy costs and develop an approach to directly factor consolidations into the cash flow model.
Develop and implement a plan to prepare interest rate reestimates to isolate the effects on subsidy costs of changes in interest rates versus changes in other assumptions.
Refine the administrative cost modeling so that the costs of computer system upgrades are incorporated, as well as the cost savings that would result from these upgrades.
We provided the Department of Education copies of a draft of this report for review and comment. On December 7, 2000, we met with cognizant Education officials and obtained oral comments on the draft. Education officials generally agreed with our answers to the questions, findings, conclusions, and recommendations. Education is in the process of taking actions to address some of these recommendations. For example, Education officials told us that they are currently working to obtain a subsidiary ledger that will provide readily available data that are comparable to data in the cash flow model to allow for a comparison of estimated to actual cash flows on a cohort level. Further, Education is in the process of researching and modeling the effects of consolidations on subsidy cost estimates. Education also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of Education and other interested parties. Copies will also be made available to others upon request. Please contact either Linda M. Calbom at (202) 512-9508 or Cornelia M. Ashby at (202) 512-8403 if you or your staffs have any questions concerning this report. Key contacts and major contributors to this report are listed in appendix III. The Federal Credit Reform Act of 1990 (FCRA) was enacted to require agencies to more accurately measure the government's cost of federal loan programs and to permit better cost comparisons both among credit programs and between credit and noncredit programs. FCRA assigned to OMB the responsibility for coordinating the cost estimates required by the act. OMB is authorized to delegate to lending agencies the authority to estimate costs, based on its own written guidelines. These guidelines are contained in OMB Circular A-11, sections 85.1 through 85.12, and supporting exhibits, as well as other OMB guidance, including OMB Circular A-34, Instructions on Budget Execution, and other documents. The Federal Accounting Standards Advisory Board (FASAB) developed the accounting standard for credit programs, SFFAS No. 2, Accounting for Direct Loans and Loan Guarantees, which became effective in fiscal year 1994. This standard, which generally mirrors FCRA, established guidance for estimating the cost of direct and guaranteed loan programs as well as for recording direct loans and the liability for loan guarantees for financial reporting purposes. The actual and expected costs of federal credit programs should be fully recognized in both budgetary and financial reporting.
To determine the expected cost of a credit program, agencies are required to predict, or estimate, the future performance of the program. This cost, known as the subsidy cost, is the present value, over the life of the loans, of estimated disbursements by the government (loan disbursements and other payments) minus estimated payments to the government (repayments of principal, payments of interest, other recoveries, and other payments). For loan guarantees, the subsidy cost is the present value of cash flows from estimated payments by the government (for defaults and delinquencies, interest rate subsidies, and other payments) minus estimated payments to the government (for loan origination and other fees, penalties, and recoveries). To estimate the cost of loan programs, agencies first estimate the future performance of direct and guaranteed loans when preparing their annual budgets. The data used for these budgetary estimates should be reestimated to reflect any changes in loan performance since the budget was prepared. These reestimated data are then used in financial reporting when calculating the allowance for subsidy (the cost of direct loans), the liability for loan guarantees, and the cost of the program. In the financial statements, the actual and expected costs of loans disbursed as part of a credit program are recorded as a “Program Cost” on the agencies’ Statement of Net Costs. In addition to recording the cost of a credit program, SFFAS No. 2 requires agencies to record direct loans on the balance sheet as assets at the present value of their estimated net cash inflows. The difference between the outstanding principal balance of the loans and the present value of their net cash inflows is recognized as a subsidy cost allowance—generally the cost of the direct loan program. For guaranteed loans, the present value of the estimated net cash outflows, such as defaults and recoveries, is recognized as a liability and generally equals the cost of the loan guarantee program. In preparing SFFAS No. 2, FASAB indicated that the subsidy cost components—interest, defaults, fees, and other cash flows—would be valuable for making credit policy decisions, monitoring portfolio quality, and improving credit performance. Thus, agencies are required to recognize, and disclose in the financial statement footnotes, the four components of the credit subsidy—interest, net defaults, fees and other collections, and other subsidy costs—separately for the fiscal year during which direct or guaranteed loans are disbursed. In addition, nonauthoritative guidance is contained in the previously discussed Technical Release of the Credit Reform Task Force of the Accounting and Auditing Policy Committee, entitled Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act. This Technical Release provides detailed implementation guidance for agency staff on how to prepare reasonable credit subsidies. Further, the Technical Release provides suggested procedures for auditing credit subsidy estimates. In estimating cash flows, Education and other credit agencies are required to predict borrower behavior−how many borrowers will pay early, pay late, or default on their loans and at what point in time. Generally, the subsidy costs equal the amount of estimated losses to the federal government and are financed with appropriated funds. The portion of Education’s direct loans that Education predicts will ultimately be collected is financed by borrowing from Treasury.
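Expressed symbolically, a stylized form of this present-value computation for a direct loan cohort (a simplification for illustration, not the statutory language) is

\[
\text{subsidy cost} \;=\; \sum_{t=0}^{T} \frac{\text{outflows}_{t} - \text{inflows}_{t}}{(1+d)^{t}},
\]

where outflows in period t are government disbursements and other payments, inflows are repayments of principal and interest, fees, and recoveries, d is the discount rate, and T is the life of the loans. A positive result is a cost financed with appropriations; a negative result is a negative subsidy; and the expected collections themselves are financed through Treasury borrowing.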
For example, a hypothetical FDLP loan of $100 may have a subsidy cost of $20 (the amount Education expects to lose), which is financed with appropriated funds, and the remaining $80 is financed by Treasury borrowings (the amount Education expects to be repaid). Budgeting guidance requires agencies to maintain supporting documentation for subsidy cost estimates. Further, auditing standards related to preparing estimates indicate that agency management is responsible for accumulating relevant, sufficient, and reliable data on which to base the estimates. SFFAS No. 2 indicates that each credit program should use a systematic methodology to project expected cash flows into the future. To accomplish this task, agencies should develop cash flow models. A cash flow model is a computer program that generally uses historical information and various assumptions, including defaults, prepayments, recoveries, and the timing of these events, to estimate future loan performance. These cash flow models, which should be based on sound economic, financial, and statistical theory, identify key factors that affect loan performance. Agencies use this information to make more informed predictions of future credit performance. The August 1994 User's Guide to Version r.8 of the OMB Credit Subsidy Model provides general guidance on creating cash flow models to estimate future delinquencies, defaults, recoveries, etc. This user's guide states that, “In every case, the agency or budget examiner must maintain current and complete documentation and justification for the estimation methods and assumptions used in determining the cash flow figures used for the OMB Subsidy Model” to calculate the credit subsidy. According to SFFAS No. 2, to estimate the cost of loan programs and predict the future performance of credit programs, agencies should establish and use reliable records of historical credit performance. Since actual historical experience is a primary factor upon which estimates of credit performance are based, agencies should maintain a database, also known as an information store, at the individual loan level, of historical information on all key cash flow assumptions, such as defaults or recoveries, used in calculating the credit subsidy cost. Additional nonauthoritative guidance on cash flow models may be found in the Model Credit Program Methods and Documentation for Estimating Subsidy Rates and the Model Information Store issue paper prepared by the Credit Reform Task Force of the Accounting and Auditing Policy Committee. This draft “Information Store” Task Force paper provides guidance on the type of historical information agencies need to reasonably estimate the cost of credit programs. The information store should provide three types of information. First, the information store should maintain key loan characteristics at the individual loan level, such as the loan terms and conditions. Second, it should track economic data that influence loan performance, such as property values for housing loans. Third, an information store should track historical cash flows on a loan-by-loan basis. The data elements in an information store should be selected to allow for more in-depth analyses of the most significant subsidy estimate assumptions. In addition to using historical databases and the cash flow models, other relevant factors must be considered by agencies to estimate future loan performance. 
In addition to using historical databases and cash flow models, agencies must consider other relevant factors to estimate future loan performance. These factors include economic conditions that may affect the performance of the loans, financial and other relevant characteristics of borrowers, the value of the collateral relative to the loan balance, changes in the recoverable value of collateral, and newly developed events that would affect loan performance. When new programs are established or changes are made to existing programs, historical supporting documentation for cash flow assumptions may not exist. In the absence of valid, relevant historical experience, the agency may use relevant experience from other federal or private sector loan programs. These data, often called proxy data, should be used temporarily while the agency collects adequate historical data for the new or revised loan program.

Agencies prepare estimates of loan program costs as part of their budget requests. Later, after the end of the fiscal year, agencies are required to update, or "reestimate," loan costs to reflect differences between estimated loan performance and the actual program costs recorded in the accounting records, as well as expected changes in future economic performance. The reestimate should include all aspects of the original cost estimate, including prepayments, defaults, delinquencies, recoveries, and interest. Reestimates of the credit subsidy allow agency management to compare the original budget estimates with actual program results to identify variances from the original estimate, assess the quality of the original estimate, and adjust future program estimates as appropriate. Any increase or decrease in the estimated cost of the loan program is recognized as a subsidy expense or a reduction in subsidy expense for both budgetary and financial statement purposes.

The reestimate requirements for interest rate and technical assumptions (defaults, recoveries, prepayments, fees, and other cash flows) differ. For budget purposes, OMB Circular A-11 states that agencies must reestimate the interest portion of the estimate when a cohort is substantially disbursed, generally when at least 90 percent of the direct or guaranteed loans are disbursed. For budgetary purposes, the technical reestimate generally must be done annually, after the close of each fiscal year, for as long as the loans are outstanding and regardless of financial statement significance, unless OMB approves a different plan. For financial statement reporting purposes, both technical and interest rate reestimates are required annually, at the end of the fiscal year, whenever the reestimated amount is significant to the financial statements. If there is no significant change in the interest portion of the estimate before the loans are 90 percent disbursed, the interest rate reestimate may be done once, when the loans are at least 90 percent disbursed. In addition, SFFAS No. 18, which was effective beginning in fiscal year 2001, requires that reestimates be measured and reported in two separate components: interest rate reestimates and technical/default reestimates. Interest rate reestimates adjust the credit subsidy estimate for the difference between the discount rates originally estimated and the actual interest rates prevailing during the years the loans were disbursed. To isolate the size of this effect, all other assumptions (repayment rates, default rates, etc.) must be held the same as those used to calculate the original subsidy estimate. Technical reestimates adjust for all changes in assumptions other than interest rates.
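The two-component split that SFFAS No. 18 requires can be sketched as follows. This is a minimal illustration of the decomposition just described; the function and the toy stand-in model are our own hypothetical constructions, not OMB's or Education's:

# Hypothetical illustration of splitting a reestimate into the two
# components SFFAS No. 18 requires. estimate_subsidy stands in for a
# full cash flow model run: any function of (rates, technical
# assumptions) that returns a subsidy cost.

def split_reestimate(estimate_subsidy, orig_rates, actual_rates,
                     orig_technical, updated_technical):
    original = estimate_subsidy(orig_rates, orig_technical)
    # Interest rate reestimate: swap in actual rates while holding all
    # other assumptions at their original values.
    with_actual_rates = estimate_subsidy(actual_rates, orig_technical)
    # Technical reestimate: everything else (defaults, prepayments,
    # recoveries, fees) updated for experience and new forecasts.
    reestimated = estimate_subsidy(actual_rates, updated_technical)
    return with_actual_rates - original, reestimated - with_actual_rates

# Toy stand-in model: subsidy rises with defaults and falls as the spread
# between the borrower rate and the discount rate widens.
toy = lambda rates, tech: 100 * (tech["default"] - (rates["borrower"] - rates["discount"]))

interest_part, technical_part = split_reestimate(
    toy,
    orig_rates={"borrower": 0.08, "discount": 0.06},
    actual_rates={"borrower": 0.08, "discount": 0.055},
    orig_technical={"default": 0.04},
    updated_technical={"default": 0.05},
)

The sum of the two parts equals the total change in the estimated subsidy, which is recognized as an increase or decrease in subsidy expense.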
The purpose of the technical reestimate is to adjust the subsidy estimate for differences between the original projection of cash flows and the amount and timing of cash flows now expected, based on actual experience, new forecasts of future economic conditions, other events, and improvements in the methods used to estimate future cash flows.

This report responds to your request, and that of the former Chairman of the House Committee on the Budget, that we prepare a report on the financing of Education's William D. Ford Federal Direct Loan Program (FDLP). To respond to your request, we reviewed Education's audited financial statements for fiscal years 1995 through 1999 and examined the workpapers of Education's independent public accountants. We interviewed knowledgeable personnel from Education's budget office and obtained information relevant to the questions we were asked to answer. We assessed Education's credit subsidy estimation practices against federal accounting and budget standards, including SFFAS No. 2, Accounting for Direct Loans and Loan Guarantees; OMB Circular A-11, Preparation and Submission of Budget Estimates; and guidance contained in the Federal Financial Accounting and Auditing Technical Release 3, Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act. The scope and methodology for responding to each of the nine questions you asked are discussed below.

How much financing has been provided to Education for the direct loan program through borrowing from Treasury and appropriations received? We obtained from Education schedules of borrowings from Treasury, repayments to Treasury, and appropriations received for fiscal years 1995 through 1999. We verified the schedules of Treasury borrowing to data contained in the workpapers of Education's independent public accountant. We obtained schedules of original subsidy estimates and reestimates for the fiscal year 1995 through 1999 cohorts. We verified the subsidy appropriations to Education's original documentation, SF 132 Reports on Apportionment and Reapportionment, and the schedule 1151s used to return negative subsidy to Treasury.

Have cash inflows (excluding borrowings from Treasury and borrower principal repayments) exceeded cash outflows (excluding repayments to Treasury and loan disbursements)? Data relating to loan origination fees, interest receipts from borrowers, and net interest payments on Treasury borrowings were obtained from the Appendix to the President's Budget for fiscal years 1997 through 2001, which contained actual data for fiscal years 1995 through 1999. We also verified fiscal years 1995, 1996, and 1997 actual cash flows to Education's statement of cash flows in its financial statements. For fiscal years 1998 and 1999, we verified the actual data to cash collection amounts provided by Education's financial systems.

In Education's calculation of its subsidy cost estimates for FDLP, what are the key cash flow assumptions, how sensitive are Education's subsidy costs to changes in these assumptions, and what data are used to support these assumptions? To gain an understanding of Education's cash flow model, we reviewed Education's model documentation, the workpapers of Education's independent public accountant, and various reports.
To identify which of the over 1,900 cash flow assumptions were key, we first discussed with Education budget staff which assumptions they believed were key for FDLP based on their prior analyses. Since much of the data used to estimate the cost of FDLP are proxy data from FFELP, we determined which cash flow assumptions were key for FFELP based on the independent public accountant's workpapers. Based on our experience with other federal credit programs, we identified other assumptions that we believed might also be key. We then conducted an analysis of FDLP to identify the most significant loan profiles, a profile consisting of the loan type, risk category, and repayment option. To determine how sensitive FDLP's cost was to changes in these key assumptions, we requested that Education budget staff conduct a limited sensitivity analysis of the assumptions they thought might be key as well as the other assumptions we identified. In instructing Education on how to perform the sensitivity analysis, we generally followed the guidance contained in the Federal Financial Accounting and Auditing Technical Release 3, Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act, and requested that Education increase and decrease by 10 percent the value of each nontiming-related assumption presumed to be key. Because timing assumptions are modeled differently and should also be adjusted in a systematic manner, we requested that Education increase by 5 percent the amount of loans in the beginning repayment assumption during the first 5 years to simulate a decrease in the time it took borrowers to repay their loans. We analyzed the results of the limited sensitivity analysis and determined that any assumption that produced a change of at least 2 percent and $13 million in the estimated cost of any single loan profile tested was a key cash flow assumption. We then met with agency officials to identify the data sources for the key cash flow assumptions.
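The nontiming portion of this screen can be expressed compactly. The sketch below is our own hypothetical construction: cost_of_profile stands in for a full cash flow model run on one loan profile, and the thresholds are the 2 percent and $13 million criteria described above. Timing assumptions, which must be shifted systematically rather than scaled, are omitted:

# Hypothetical sketch of the sensitivity screen for nontiming assumptions.
# cost_of_profile stands in for a full cash flow model run on one profile.

def key_assumptions(cost_of_profile, baseline, pct_threshold=0.02,
                    dollar_threshold=13_000_000):
    base_cost = cost_of_profile(baseline)
    key = []
    for name, value in baseline.items():
        for factor in (0.9, 1.1):  # decrease and increase by 10 percent
            perturbed = dict(baseline, **{name: value * factor})
            change = abs(cost_of_profile(perturbed) - base_cost)
            if change >= dollar_threshold and change >= abs(base_cost) * pct_threshold:
                key.append(name)
                break  # one qualifying perturbation is enough
    return key

An assumption flagged by a screen of this kind for any single loan profile tested was treated as key.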
How closely do Education's subsidy cost estimates and their underlying assumptions compare to actual loan performance for each loan cohort, and to what extent does Education track differences between its subsidy cost estimates and actual loan performance for each loan cohort? We compared cash flows related to five of the seven key cash flow assumptions identified in question 3 (interest payments, principal payments, default rate, origination fees, and collections on defaulted loans) and obtained estimated and actual cash flow data for fiscal years 1995 through 1999. Due to the nature of direct loan programs, the comparison did not include any analysis of defaults because Education was unable to readily provide comparable data on estimated and actual defaults. The discount rate assumption was not included because it does not directly affect the amount or timing of cash flows; rather, it is used to estimate the present value of the cash flows. Because the actual cash flow data in Education's financial systems were not fully comparable to the data available in the cash flow model (by cohort, key cash flow assumption, and loan profile), Education obtained actual cash flow data on fiscal year totals from its financial systems for its analysis. For estimated cash flows, the original cash flow models from fiscal years 1995 through 1999 were not fully maintained; thus, we used Education's analysis, which applied its current cash flow model and assumptions to each fiscal year beginning with 1995. We verified that the actual cash flow data provided agreed with the amounts reported in Education's budget submissions. We also verified fiscal year 1995, 1996, and 1997 actual cash flows to Education's statement of cash flows in its financial statements. For fiscal years 1998 and 1999, we verified the actual data to cash collection amounts provided by Education's financial systems. For total cash flows for fiscal years 1995 through 1999, we then compared estimated with actual cash flows to determine the amount of the difference. We met with Education budget staff to determine, and to request supporting documentation for, the causes of these differences. Since supporting documentation was unavailable, we were unable to corroborate Education's explanations for these differences.

What effect have reduced loan origination fees had on subsidy costs, and how has Education taken account of these changes in its subsidy cost estimates and reestimates? To determine the impact of reduced loan origination fees on subsidy costs, we requested that Education calculate the credit subsidy costs for FDLP overall and for each of the four loan types with the origination fee equal to 4 percent and to 3 percent. We analyzed the results of the calculation and discussed with Education personnel how they accounted for the reduced loan origination fees in subsidy cost estimates and reestimates.

What effects have increased consolidations had on subsidy costs, and how has Education taken account of these changes in its subsidy cost estimates and reestimates? To assess the impact of increased consolidations, we discussed consolidations with Education personnel, including how they work, their history, and how consolidations are modeled in subsidy cost estimates and reestimates. We identified the various factors that could determine how consolidations affect FDLP subsidy costs. Because Education had not performed the detailed analysis necessary to determine the actual effect of consolidations, we were unable to determine the impact of increased consolidations on subsidy costs for FDLP.

What effect have declining interest rates had on subsidy costs, and how has Education taken account of these changes in its subsidy cost estimates and reestimates? Because Education's cash flow model is continually being updated and copies of the model with original assumptions were not fully maintained, it was not possible to determine the precise effect on subsidy costs of changes in interest rates versus other changes. To address this question, we assessed the general impact of declining interest rates by analyzing the effect of declining borrower rates and discount rates. To determine that impact, we requested that Education calculate subsidy costs for four scenarios. We analyzed the results of these calculations and discussed with Education personnel their procedures for accounting for changes in interest rates in the credit subsidy estimates and reestimates. We compared Education's procedures with the guidance provided in OMB Circular A-11.
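Why the borrower rate and the discount rate dominate FDLP's subsidy can be seen with a variant of the earlier present value sketch. The figures are hypothetical and default-free, and are our own; real estimates also net out defaults, fees, collections, and the timing of repayments:

# Hypothetical, default-free illustration of how the spread between the
# borrower rate and the government's discount (borrowing) rate drives the
# sign of the subsidy. Real estimates also reflect defaults, fees, and timing.

def subsidy(borrower_rate, discount_rate, principal=100.0, years=10):
    payment = principal * borrower_rate / (1 - (1 + borrower_rate) ** -years)
    pv = sum(payment / (1 + discount_rate) ** y for y in range(1, years + 1))
    return principal - pv  # negative: the government expects a net gain

print(subsidy(0.08, 0.06))  # borrower rate above discount rate: negative subsidy
print(subsidy(0.05, 0.06))  # borrower rate below discount rate: positive subsidy

In this simplified setting the subsidy is negative whenever borrowers pay a higher rate than the government pays to borrow; whether FDLP's negative subsidy continues therefore depends heavily on interest rate movements that cannot be predicted with certainty.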
What are the future prospects for the continued negative subsidy for the Federal Direct Loan Program? We analyzed the results from several of the other questions to determine what conditions increase or decrease the likelihood of continued negative subsidy for FDLP. We analyzed the effects of fee reductions from question 5 and of increased consolidations from question 6, the impact of interest rates on subsidy costs from question 7, and the results of the sensitivity analysis from question 3.

What data did Education use to project an estimated saving of $4 on every $100 of direct student loans, as it reported in November 1999? We analyzed Education's November 1999 cost study to gain a general understanding of the methodology used to develop administrative and subsidy cost estimates for FFELP and FDLP. We interviewed Education personnel to obtain a more detailed understanding of the methodology and data sources used to assign, develop, and project the administrative and subsidy cost estimates. We also reviewed spreadsheets and other documentation prepared by Education to support its findings.

In addition to those named above, Daniel R. Blair, Marcia L. Carlsen, Susan T. Chin, Anh Dang, Cheryl D. Driscoll, Julia B. Duquette, Elizabeth M. Kreitzman, Kirsten L. Landeryou, Joel R. Marus, Andrew Sherrill, Linda W. Stokes, and Maria Zacharias made key contributions to this report.

The following is a group of terms commonly used in credit budgeting and accounting. The definitions for many of these terms are equally applicable to loan guarantees. However, since FDLP is a direct loan program, references to loan guarantees have been omitted.
The Department of Education runs two major federal student loan programs: the William D. Ford Federal Direct Loan Program (FDLP) and the Federal Family Education Loan Program (FFELP). Under FDLP, students or their parents borrow money directly from the federal government through the schools the students attend. Under FFELP, money is borrowed from private lenders, and the federal government guarantees repayment if the borrowers default. GAO investigated concerns about Education's reliance on estimates to project FDLP costs and about the lack of historical information on which to base those estimates. GAO found that developing a reasonable estimate of subsidy costs for loan programs is complex: many assumptions must be taken into account, and projections must be made for the life of the loans. Because FDLP's subsidy costs are determined largely by interest rates, and interest rate fluctuations cannot be predicted with any certainty, it is unclear whether the current trend of negative subsidy costs for FDLP will continue. In addition, other factors, such as origination fees paid by borrowers, defaults, subsequent collections on defaulted loans, and the timing of loan repayments, affect the subsidy cost of FDLP. Although Education estimated origination fees close to the actual amounts in its financial system, other key cash flows varied significantly from the estimates. Also, Education's current model for estimating FDLP subsidy costs does not directly take into account key factors such as prepayments and consolidations.
Customs’ mission is to ensure that all goods and persons entering and exiting the United States do so in compliance with all United States laws and regulations. The mission includes protecting the American public from the introduction of illegal drugs into society. In August 1995, Customs considered the Southwest border the drug smugglers’ area of choice, with hundreds of thousands of pounds of cocaine and marijuana shipped from Mexico to the United States yearly, according to intelligence estimates. According to Customs, this environment, with narcotics being smuggled through ports of entry and Customs inspectors seeking to prevent such illegal entries, is typically dangerous, difficult, and contradictory. Customs further stated that “enforcement strategies [are] producing more determined, violent, smarter, organized, better equipped, and funded violators because of the very high economic incentives to continue their actions.” In its 1994 reorganization report, Customs recognized the continuing controversy over achieving the right balance between enforcing the law and facilitating the flow of conveyances, merchandise, and people into the country. From the mid-1960s until the mid-1990s, Customs’ organizational structure included a headquarters, region and district offices, Special Agent in Charge (SAC) offices, and ports of entry. In 1993, Customs created a team to reorganize the Customs Service. As a result of that effort, Customs moved to a system of management by process and reorganized its field and headquarters structures. In October 1995, Customs reorganized its headquarters, which included creating an Office of Field Operations responsible for overseeing the Customs Management Centers (CMCs) and ports. The Office of Investigations’ responsibilities continued to include overseeing the SACs in the field. The October 1995 field organization changes included abolishing the regions and districts and creating the CMCs to act as a single management level between the ports and headquarters. To gain an understanding of the blue ribbon panel, why it was created, and Customs’ response to its report, we read the panel’s report and transcripts of congressional hearings that dealt with Customs’ Southwest Region problems and the panel. We also interviewed three panel members: the former chairman, who was not a Customs employee, and two others who were Customs employees while the panel operated and have remained Customs employees. To determine the status of the implementation of the recommendations, we reviewed a status report Customs prepared for us in August 1995 on actions Customs took in response to each recommendation. In February 1996, Customs updated relevant portions of this report. We also reviewed documents that Customs officials provided that were related to actions taken to implement the recommendations. To expand on the information provided in the Customs-prepared report and to determine whether Customs officials knew if the problems identified by the panel still existed, we interviewed Customs officials from offices that were, at the time of our review, responsible for areas covered by the report, including the Offices of Investigations, Internal Affairs, and Human Resources Management. We also interviewed officials in Customs’ Office of Planning and Evaluation and the Treasury Department’s Office of Inspector General (IG).
However, we did not verify the accuracy of the information provided or validate that the policies and procedures to which Customs officials referred were being adhered to. We did our work at Customs headquarters in Washington, D.C.; at the San Diego Customs Management Center; and in Oklahoma City, OK. We conducted this review between August 1995 and June 1996 in accordance with generally accepted government auditing standards. We obtained written comments from Customs on a draft of this report. These comments are discussed at the end of this report and reprinted in appendix II. In Customs’ 1992 report on its implementation of the panel recommendations, it stated that in early 1991 it had come under intense scrutiny from the national media and congressional oversight committees because of allegations of corruption and mismanagement in its Southwest Region. According to testimony by the Commissioner in 1992, she created the panel when she became aware of the scope and seriousness of the allegations in Texas. According to the transcript of hearings held by the Commerce, Consumer, and Monetary Affairs Subcommittee of the House Committee on Government Operations in 1992, a December 1992 Customs-prepared response to that Subcommittee, and a 1991 Treasury IG report on Customs’ Southwest Region, the allegations included mismanagement by the Special Agent in Charge; harassment of and retaliation against whistleblowers; conspiracy by management to cover up criminal conduct of enforcement personnel; management suppression of a major drug investigation; existence of an old-boy network; improper associations or affiliations between Customs law enforcement officers and individuals possibly involved with drug trafficking and money laundering at the border; and noncooperation by Customs management with other law enforcement organizations. Furthermore, according to the December 1992 Customs-prepared response to the Commerce, Consumer, and Monetary Affairs Subcommittee, the allegations focused on two Office of Enforcement field locations. In June 1991, the blue ribbon panel convened. The Commissioner created the panel, in part, because of allegations of corruption, harassment, retaliation, mismanagement, and an old-boy network in Customs’ Southwest Region. The panel was made up of nine individuals—five from outside Customs and four from within. It was chaired by the then General Counsel of the Department of Housing and Urban Development, who had formerly been, among other things, Associate Attorney General in the U.S. Justice Department, Assistant Secretary of the Treasury for Enforcement, U.S. Attorney for the Northern District of Oklahoma, and an FBI agent. The panel did its work over approximately 6 weeks, according to its chairman. It conducted over 150 interviews and briefings with Customs employees and non-Customs officials in the Southwest Region and Washington, D.C. The panel looked at two offices within Customs: Enforcement and Internal Affairs. The non-Customs officials interviewed were said to be key federal, state, and local law enforcement officials. They included employees of the FBI, the Marshals Service, the Immigration and Naturalization Service, and the U.S. Attorneys Office. According to the chairman’s testimony, much of the panel’s information was anecdotal, and if the panelists heard it repeatedly, they considered it a finding. The chairman further stated in the testimony that the panel did not have subpoena power and was not a grand jury; nor did it view its work as a law enforcement mission.
The panel examined system failures. The chairman said that “[w]hat the report concentrated on was, assuming the integrity of all the allegations and all the swirling controversy, how could these things happen.” The panel’s suggestions, he added, were to tighten down and firm up the disciplinary processes and the management and supervisory structures. The panel issued its report in August 1991. The report had 50 findings and 51 recommendations categorized into 7 areas: integrity, management, Office of Enforcement, Office of Internal Affairs, training, whistleblowers, and discipline. As stated in the panel’s report, the recommendations reflected the consensus of the panel and proposed approaches to rectify the conditions that generated the report’s findings. The overall aim of the recommendations was to safeguard the integrity and strengthen the management systems of the Customs Service. While the panel did not determine if the Southwest Region situation was representative of the rest of Customs, it believed the implications of the findings and recommendations could be applied to the entire agency. The chairman of the panel testified that because the panel uncovered “systemic management failures in the Southwest, the likelihood of that occurring elsewhere is certainly not only possible, but probable.” The panel’s report stated:

“The Blue Ribbon Panel found fundamental weaknesses in the Customs Service management systems at all operational levels—Headquarters, regional management, SAC office management, and regional Internal Affairs management. The apparent breakdown of the management structure in the Southwest Region was precipitated by inadequate and/or inattentive supervision in specific cases. Compounding those situations, managers were unable and/or unwilling to address serious supervisory and management problems. There was an absence of management accountability, and a perception of a collusive relationship between management and Internal Affairs. Customs’ management systems failed to identify and correct these deficiencies.”

“The Blue Ribbon Panel determined that the Office of Internal Affairs (IA), at least in the Southwest Region, did not recognize the gravity of the circumstances that caused the perception of corruption, nor did it promptly initiate or complete certain investigations of related allegations. Non-criminal misconduct and mismanagement matters were explicitly removed from the IA purview. As a result, Internal Affairs did not provide the necessary safeguard to protect the reputation, operations and organizational effectiveness of the Customs Service.”

“Office of Enforcement (OE) activities in the Southwest Region suffer from a lack of national direction and from confused and competing lines of authority that undermine effectiveness. Clearly articulated recruitment, mobility, and career path policies do not exist and the influence of various ’old boy’ networks taints the objectivity of the selection process and rating systems.”

Some of the actions that the panel recommended in the sections on management, the Office of Internal Affairs, and the Office of Enforcement were the following: Customs should establish an Office of Organizational Effectiveness (OOE) led by an Associate Commissioner who would report directly to the Commissioner, at a level above the Assistant Commissioners. The Associate Commissioner’s recommended responsibilities included supervising the Assistant Commissioner for Internal Affairs and reforming Customs pursuant to the panel’s recommendations. The Office of Internal Affairs should be restructured.
Its responsibilities should include the comprehensive and aggressive internal inspection program recommended by the panel and investigation of matters related to mismanagement, criminal misconduct, and serious noncriminal misconduct. Direct line authority should be established in the Office of Enforcement from the Assistant Commissioner for Enforcement through the Special Agent in Charge to the agent. Customs should establish a national recruitment policy and mobility policy for the Office of Enforcement. Customs accepted the panel’s findings and recommendations almost immediately and took several major actions to implement these recommendations. In April 1992, the Commissioner testified that “the Customs Service accepted the findings and recommendations of the panel and went to work using teams of managers and executives that we developed, and we have put together a comprehensive implementation plan that is just as hard hitting as the report was.” According to Customs’ 1992 report on its implementation of the recommendations, the implementation effort was national in scope and focused on the development or redesign of management systems throughout Customs to prevent a recurrence. A Customs December 1992 written response to the Chairman of the Commerce, Consumer, and Monetary Affairs Subcommittee stated that Customs had “made implementation of the Panel’s recommendations a top priority and dedicated substantial resources to the effort.” According to Customs’ 1992 report on its implementation of the recommendations, Customs created a Board of Directors to direct the implementation process. This board included the Commissioner, Deputy Commissioner, senior managers, and the Department of the Treasury IG. In her testimony for the April 1992 hearing, the Commissioner explained that Customs also established internal task forces of managers and subject matter experts to respond to each recommendation. These task forces designed implementation strategies, action plans, and milestones for implementing the recommendations. In October 1991, the action plans were given to various Assistant Commissioners to continue the implementation efforts in their areas of responsibility. The Board of Directors, among others, monitored these implementation efforts. Some of the actions Customs took that were related to the panel’s recommended actions cited above were the following: Customs formally established OOE in April 1992 with the appointment of its Associate Commissioner. According to testimony by the Commissioner, the position of the head of OOE was established at a level above the Assistant and Regional Commissioners, with commensurate authority and responsibility to oversee the reforms and to compel action as necessary. The Associate Commissioner was also given responsibility for overseeing the Office of Internal Affairs. Customs’ 1992 revision to its Organization Handbook stated that OOE was “intended to ensure effective transition to an organization which incorporates reforms called for by the Blue Ribbon Panel. Therefore, the continued necessity for this organization will be reviewed after a three year period and annually thereafter.” The chairman of the panel testified in July 1991 that the decision on whether the Associate Commissioner position should be temporary or permanent was the Commissioner’s to make. In December 1992, Customs issued a report describing the progress it had made in implementing the panel’s recommendations. Customs closed OOE in October 1994.
According to the acting Associate Commissioner at that time, OOE was closed as part of Customs’ reorganization and the reduction of its headquarters staff. The Director, Office of Planning and Evaluation, stated that when OOE was closed, Customs felt it had substantially implemented the key provisions made by the panel and that those recommendations were institutionalized throughout the agency. The recommendations that remained either required additional funding or could be addressed under the reorganized agency. He also stated that Customs’ December 1992 report analyzed the agency’s actions on the panel’s recommendations and that Customs used that report in determining that, by October 1994, it had devoted sufficient time and effort to virtually bring the recommendations to a conclusion and could close OOE. Customs restructured the Office of Internal Affairs. The Office’s responsibilities included investigating criminal misconduct, serious noncriminal misconduct, and certain mismanagement matters. According to Customs’ 1992 report on its implementation of the panel recommendations, while OOE was in place, OOE was to be the recipient of mismanagement allegations. With the abolishment of OOE, according to an IA official, IA is to receive mismanagement allegations and to determine whether IA should conduct an investigation or refer the allegation to management. Additionally, while OOE was in existence, it was responsible for the internal inspection program. The Office of Internal Affairs became responsible for those inspections when OOE closed. Direct line authority was established in the Office of Enforcement in 1991. The Office of Investigations instituted a national recruitment program and has drafted a mobility program. To report on the status of the implementation of the recommendations, we used the following implementation categories:

Fully implemented. The entire wording of the recommendation has been fulfilled, except in cases where the panel did not define terminology. In those instances, we did not assess the recommendation on the basis of the undefined terms. If Customs had implemented the rest of the recommendation, we categorized it as fully implemented.

Substantially implemented. Either (1) implementation has occurred or action has been taken that, while not responsive to the letter of the recommendation, generally was consistent with its purpose; or (2) the recommendation was not clearly defined, but Customs took actions that appeared to be responsive to it.

Partially implemented. Only a portion of the recommendation has been implemented. When the recommendation had multiple parts, if one part or a portion of a part had been implemented (but not all parts), we categorized the recommendation as “partially implemented.”

Not implemented—action taken. No part of the recommendation has been implemented, but some action has been taken toward its completion. For example, if legislation had been introduced to address the recommendation but had not been enacted into law, we categorized the recommendation as “not implemented—action taken.”

Not implemented—no action. No part of the recommendation has been completed, and no action has been taken to address it.

Insufficient information. Insufficient or conflicting information prevented us from determining the status of the recommendation.
We did not evaluate the recommendations or determine whether Customs could or should have implemented them. The implementation status may have varied over time; however, our analysis reflects the status of implementation for most recommendations as of February 1996, with updates on others provided through June 1996. We took a fairly literal reading of the recommendations to determine into which implementation category each recommendation fell. If a subjective or unclear term was not defined in the recommendation, we did not assess the recommendation on the basis of that term. For example, Integrity recommendation 4 said that “Internal Affairs must aggressively monitor and act upon perceptions of Federal, State, and local law enforcement officials ...” The panel did not state what it meant by “aggressively”; therefore, we did not assess the recommendation on the basis of whether Internal Affairs’ actions were “aggressive.” Several of the recommendations made general references to other recommendations in the report. For instance, Integrity recommendation 1 stated that “Implementing this recommendation requires that the Customs Service adopt the recommended restructuring of Internal Affairs discussed elsewhere in this report.” In this and similar instances, we did not attempt to determine which specific recommendations the panel was referring to. Therefore, we did not factor the implementation of those statements into our categorization of the status of the implementation of the recommendation. Each recommendation, the supporting material provided by Customs, and our interview write-ups were reviewed by GAO evaluators to determine the implementation status of the recommendations. At least two additional GAO staff reviewed each categorization to reach concurrence on the categorization status. Table 1 summarizes our categorization of the recommendations, broken out by the sections of the panel’s report. Appendix I shows, by recommendation, the actions Customs has taken and our assessment of the implementation status. Various characteristics of the recommendations should be kept in mind when reading the statistics. A number of recommendations were made up of multiple parts. Each part had to be fully implemented for us to categorize the recommendation as fully implemented. In its report, the panel referred to the recommendations’ interlocking relationships, and we found that the report does not contain 50 discrete recommendations. Thus, a portion of one recommendation can be part of a number of recommendations. If Customs did not fully implement that portion of the recommendation, it could affect the implementation category of other recommendations. For example, included in Management recommendation 4 is a statement that raters of key managers should solicit input from other appropriate parts of the Customs organization. Customs did not implement this portion of the recommendation because, according to the Commissioner in her April 1992 testimony, officials in Customs’ Office of Human Resources and a group of managers thought it would diminish accountability within the managers’ chain of command. This portion of the recommendation was also included in Management recommendation 10 and Office of Enforcement recommendation 6; because Customs chose not to implement this portion of the recommendations, we categorized all three as partially implemented.
Similarly, in some cases, the implementation of one recommendation relied, at least in part, upon the implementation of a particular facet of another recommendation. For example, Management recommendation 2 recommends the establishment of a management inspection program in which office inspections are to occur at least every 2 years. Customs has a management inspection program, but inspections are scheduled every 3 or 4 years, according to the Director, Management Inspections Division. Thus, when other recommendations state that something should be done through the management inspection process (such as Integrity recommendation 4, which states that Internal Affairs should monitor and act upon perceptions of law enforcement officials through the management inspection process), we did not classify the recommendation as fully implemented, even if Customs was doing what was recommended, because the inspections were not being done as frequently as recommended. We asked Customs officials whether they knew if the general and specific problems identified by the panel still existed. The Assistant Commissioners for Investigations and Internal Affairs said that they had in place or were developing oversight mechanisms to alert them to problem areas in their offices. An example of an oversight mechanism provided by the Assistant Commissioner for Investigations was the Office of Policy and Oversight, which he established in August 1995. The office reports directly to him, and one of its functions is to look for trends and patterns of systemic noncompliance that are identified through such things as audit reports and cases brought before the Discipline Review Boards. The Assistant Commissioner for Internal Affairs explained some ongoing efforts in his office that he believed would assist in identifying potential problem areas. These included the development of performance measures for investigations and management inspections and the development of an automated management inspection information system that he said should improve Internal Affairs’ ability to do trend analyses of inspection findings. The Assistant Commissioner for Investigations said that he believed that the Office of Investigations-related problems identified by the panel had diminished significantly. He said the status of the problems varied by issue, and he discussed special agent training as an example. One of the panel’s training recommendations concerned the need for agents to receive continuing formalized in-service training. The Assistant Commissioner believed that training was an area where further enhancement was still needed, and his office had embarked on a training effort that had already resulted in a better trained workforce. Built into this effort were various policies and processes that would allow for evaluation, oversight, and accountability. The Assistant Commissioner for Internal Affairs believed that the problems related to Internal Affairs at the time the panel did its work no longer existed. As an example, he discussed the issue of lengthy investigations. One of the panel’s findings was that “because of the failure to conclude investigations, employees who were targets of allegations of serious misconduct and/or perceived integrity violations remain under a cloud of suspicion.” The Assistant Commissioner described actions that Internal Affairs took to diminish this problem and to allow his office to explain the reasons for lengthy cases when they occurred.
These actions included making managers take a more hands-on approach in overseeing the investigations, highlighting cases in the automated tracking system when they reached certain time frames, and distributing monthly reports that depicted the ratio of the length of cases by office. We requested comments on a draft of this report from the Commissioner of Customs or his designee. On July 30, 1996, the Director, Office of Planning and Evaluation, provided us with written comments, which are printed in full in appendix II. The Director expressed appreciation for a “comprehensive review of where Customs stands” with respect to the blue ribbon panel recommendations and offered technical and clarifying comments and additional information, which we incorporated as appropriate. As agreed with the Subcommittee, unless you publicly announce the report’s contents earlier, we plan no further distribution until 14 days after the date of this letter. We will then send copies to the Secretary of the Treasury; the Commissioner of Customs; the Director, Office of Management and Budget; the ranking minority member of your Subcommittee; the Chairman and ranking minority member of the Senate Finance Committee; and other interested parties. We will also make copies available to others on request. Major contributors to this report are listed in appendix III. Please call me on (202) 512-8777 if you or your staff have any questions. This appendix contains (1) the 50 Blue Ribbon Panel recommendations regarding the panel’s review of integrity and management issues of the Customs Service; (2) Customs’ written response, which was provided to us in August 1995 and updated in February 1996, on how it implemented each recommendation; (3) a further updated response based on our discussions with Customs officials between February and July 1996; and (4) our categorization of the implementation status of the 50 recommendations using the following categories: fully implemented, substantially implemented, partially implemented, not implemented—action taken, not implemented—no action, and insufficient information. These categories are defined on pages 9 and 10 of the letter. The recommendations are reproduced verbatim from the panel’s report, as were Customs’ written responses. The updated responses were derived from interviews we held with Customs officials from the offices of Planning and Evaluation, Investigations, Internal Affairs, Human Resources Management, and Chief Counsel; with Treasury’s Office of Inspector General; and from documentation we obtained. Our categorization of the implementation status of the recommendations was based on our assessment of the extent to which Customs implemented the panel’s recommendations. We did not verify the accuracy of the information provided or validate that the policies and procedures to which Customs’ officials referred were being adhered to. The panel did not always define terminology in the recommendations. In these instances, we did not assess the panel’s recommendation on the basis of those terms but on the implementation of the rest of the recommendation. For example, the panel’s Integrity recommendation 1
(p. 20) states in part that “[a]ll allegations of corruption should be expeditiously investigated by Internal Affairs.” The panel did not define what it meant by “expeditiously”; therefore, in our categorization of this recommendation, we did not assess it on the basis of whether actions were taken expeditiously. Several of the panel’s recommendations referred generally to other recommendations in the panel’s report. For example, in Integrity recommendation 1 (p. 20), the panel stated that “Implementing this recommendation requires that the Customs Service adopt the recommended restructuring of Internal Affairs discussed elsewhere in this report.” In our categorization of the implementation status of this recommendation, and of all recommendations that had this type of referral, we did not attempt to determine which specific recommendations discussed elsewhere in the report the panel was referring to. Therefore, we did not base our categorization of the recommendation on the portion stating that “Implementing this recommendation requires that the Customs Service adopt the recommended restructuring of Internal Affairs discussed elsewhere in this report.”

Integrity recommendation 1: All allegations of corruption should be expeditiously investigated by Internal Affairs. Implementing this recommendation requires that the Customs Service adopt the recommended restructuring of Internal Affairs discussed elsewhere in this report.

Written response: Customs has implemented several initiatives which have contributed to more timely IA investigations into allegations of corruption. First, because IA’s Management Inspections Division interviews Customs employees as well as employees in outside agencies (including U.S. Attorneys) as part of its inspection process, allegations or perceptions of corruption can be brought to IA attention quickly. Second, Customs has trained groups of senior level agents in the Office of Investigations (OI), known as flying squads, to conduct high priority investigations at locations throughout the country under the direction of IA. Third, IA has developed new systems and procedures for receiving and processing allegations, including an automated case management system that has improved the consistency, timeliness and professionalism of IA investigations.

Fully implemented. Note 1: We did not assess the panel’s recommendation on the basis of “expeditiously.” Note 2: We did not attempt to determine which specific recommendations discussed elsewhere in the panel’s report the panel was referring to regarding the implementation of this recommendation. Therefore, we did not assess the portion of the recommendation stating that “Implementing this recommendation requires that the Customs Service adopt the recommended restructuring of Internal Affairs discussed elsewhere in this report.”

Updated response: Customs’ Special Assistant Commissioner, Office of Internal Affairs (IA), told us that when allegations come into IA they are logged into IA’s automated case management system. From the log, an agent opens a case for a preliminary investigation. He said all allegations that IA receives are to be logged onto the system and assigned a case number. IA has 60 days to determine whether the case should proceed from a preliminary investigation to a formal one.
One way in which the length of formal cases is tracked in IA’s case management system is that cases 60, 90, and 120 days old are highlighted—the computer screen flashes when cases reach these intervals. The Special Assistant Commissioner, IA, said that the agent revisits the case with his or her supervisor at least at these 30-day intervals. He said that if a case is approaching 6 months old, IA sends a memorandum to the IA Special Agent in Charge (SAC). If the case is over 6 months old and no activity has been conducted on it for the past 2 weeks, IA supervisors will determine why there has been no activity. Desk officers can also review the computer screens, according to the Special Assistant Commissioner, IA. They should know when a case exceeds 180 days.

Integrity recommendation 2: The Customs Service must immediately remove, both from their positions and from their geographical location, Customs personnel found responsible for corruption and/or contributing to the perception of corruption.

Written response: Evidence of actual corruption is treated as criminal conduct and employees face removal from their positions and the Service. Where perceptions of corruption exist, employees have been transferred due to “loss of effectiveness.” However, it should be noted that Customs reviews cases involving perceptions of corruption on a case-by-case basis and determines disciplinary action based on the facts surrounding the individual case as well as mitigating and aggravating factors.

Substantially implemented.

Updated response: Customs officials further stated that the Office of Chief Counsel was very involved in determining the actions to take regarding employees contributing to the perception of corruption. They also mentioned that there are considerations of fairness to the individual because these cases involved allegations of corruption and not actual acts of corruption. Other issues involved the employee’s right to have the Office of Special Counsel intervene, which they said could prevent automatic removal based on perceptions of corruption. We determined that Customs’ implementation of this recommendation was in the “substantially implemented” category because Customs took actions that were not responsive to the letter of this recommendation but were generally consistent with its purpose. Specifically, Customs does not necessarily remove personnel found responsible for corruption and/or contributing to the perception of corruption both from their positions and from their geographical location; rather, such personnel face removal, with determinations made on a case-by-case basis.

Integrity recommendation 3: Customs should feel the same obligation to exonerate employees who have been unfairly accused of wrongdoing as it does to aggressively pursue them. The Customs Service must expeditiously and formally notify appropriate management officials and the targets of allegations of corruption of the results of their investigations.

Written response: In the past many employees who were the subjects of IA investigations were not informed by management that the investigations had been closed without management action.
As a result of this recommendation, Customs issued Directive 099 1420-010, which designates responsibility to the Principal Field Headquarters Officers (through their Labor and Employee Relations (LER) Offices) to respond to reports of investigation and to notify Customs employees who are subjects of completed IA investigations that the investigations are closed and that management determined that no further action is contemplated. In addition to the above referenced Customs Directive, Customs developed a case tracking system that enables managers and LER Specialists to track the progress of investigations and respond to employees quickly upon an investigation’s completion.

Fully implemented.

Updated response: Customs’ Directive on “Reports of Investigation Issued by the Office of Internal Affairs,” dated November 5, 1993, advises managers of the IA and LER automated case tracking procedures and their responsibility to respond to reports of investigation transmitted by IA. The directive included time frames for notifying subjects of investigations and management officials of the investigation results.

Integrity recommendation 4: Internal Affairs must aggressively monitor and act upon perceptions of Federal, State, and local law enforcement officials ...

Written response: Ongoing. See INTEGRITY, Recommendation 1 [Customs’ written response to Integrity recommendation 1 is copied below].

Partially implemented.

Customs has implemented several initiatives which have contributed to more timely IA investigations into allegations of corruption. First, because IA’s Management Inspections Division interviews Customs employees as well as employees in outside agencies (including U.S. Attorneys) as part of its inspection process, allegations or perceptions of corruption can be brought to IA attention quickly. Second, Customs has trained groups of senior level agents in the Office of Investigations (OI), known as flying squads, to conduct high priority investigations at locations throughout the country under the direction of IA. Third, IA has developed new systems and procedures for receiving and processing allegations, including an automated case management system that has improved the consistency, timeliness and professionalism of IA investigations.

Updated response: Customs’ written response to the panel’s recommendation for conducting management inspections of all offices every 18 to 24 months was “Lack of resources have precluded implementation of comprehensive inspections at least every two years as recommended. However, each SAC office receives a comprehensive, spot-check, or special assessment every two years.” We determined that Customs partially implemented this recommendation because the panel stated that the recommendation be implemented “through the management inspection process described elsewhere in this report.” The panel recommended that such inspections be conducted at least every 2 years. Customs is scheduled to conduct these inspections every 3 or 4 years because of a lack of resources and because the Director of IA’s Management Inspections Division views comprehensive inspections of every office once every 3 or 4 years as sufficient. The Director, Management Inspections Division, said that he did not agree with the blue ribbon panel’s recommendation to conduct comprehensive inspections of all offices every 18 to 24 months. He believed that conducting such inspections for every office once every 3 or 4 years was sufficient. He said that the Management Inspections Division is scheduled to conduct comprehensive inspections of SAC offices every 3 or 4 years.
The Division conducts follow-up inspections after comprehensive inspections are completed, along with spot checks. The Director of the Management Inspections Division (MID) said that if problems exist at an office, MID conducts a comprehensive inspection sooner than scheduled. In addition to comprehensive inspections, in which IA investigators contact other law enforcement officials who deal with Customs to determine if there are perceptions of corruption, the Special Assistant Commissioner of IA said that IA has contact with these officials in other ways, such as through Customs’ participation in joint task forces.

Integrity recommendation 5: Blue Ribbon Panel recommendations reforming training, management, supervision, professional conduct guidelines, discipline policies, personnel assignments, rotation policies, and intelligence support should be adopted to eliminate the conditions that contribute to unwarranted perceptions of integrity violations.

Written response: Since the issuance of the Blue Ribbon Panel Report, IA intensified efforts to enhance the integrity of the Service through the development of ethics and integrity training for all Customs employees. During FY 92, over 92% of the Customs work force received this training. Some form of this training continues to be given in every basic training course at the Customs Academy as well as in supervisory and managerial training courses. In addition, former Commissioner Hallett issued a memorandum dated December 20, 1991, which informed all employees about three new categories of misconduct for inclusion in the Table of Penalties and Offenses. The new categories address whistleblower retaliation and supervisors and managers who fail to report misconduct or to take appropriate disciplinary action.

Partially implemented. We did not attempt to determine which specific recommendations the panel was referring to in this recommendation. We based the categorization on the recommendations related to the actions Customs cited in its written response to demonstrate its implementation of the recommendation.
These targeted recommendations refer to the “discipline policies” stated by the panel for this recommendation. Customs fully implemented these recommendations. Another panel recommendation that Customs identified as addressing integrity issues, and that Customs fully implemented, was: “Management recommendation 8: Customs should examine the role of intelligence to assure that the intelligence product effectively serves all of the Customs components.” This targeted recommendation refers to the “intelligence support” stated by the panel for this recommendation. Customs did not, however, fully implement the panel’s Office of Enforcement recommendation 3 that Customs should establish a mobility policy. Customs drafted a mobility policy but did not implement it because Customs decided it would be too costly.

Management recommendation 1: The Commissioner should establish an Office of Organizational Effectiveness, led by an Associate Commissioner who reports directly to the Commissioner, at a level above the Assistant Commissioners. The Associate Commissioner would supervise the Assistant Commissioner (Internal Affairs) and would be responsible for the current programs within Internal Affairs as well as the new responsibilities called for in this report.

- The Associate Commissioner should be charged with reforming the Customs Service pursuant to the recommendations of the Blue Ribbon Panel.

Written response: OOE was established and remained in existence from April 1992 until October 1994. OOE was led by an Associate Commissioner who supervised IA as well as carried out the responsibilities and reforms called for in the Blue Ribbon Panel report. In accordance with the sunset provisions placed on OOE and pursuant to the Customs Service plans to reorganize itself, OOE was abolished in 1994 after ensuring that corrective actions called for by the Blue Ribbon Panel were firmly ensconced in Customs. The Assistant Commissioner (Internal Affairs) now reports directly to the Commissioner on the same level as other Assistant Commissioners.

Fully implemented.

Updated response: An official from Customs’ Office of Planning and Evaluation stated that during the Office of Organizational Effectiveness’ (OOE) existence, all 51 of the panel’s recommendations were addressed and most of them were implemented. This official also said that OOE was created as a transition organization to implement the panel’s recommendations and ensure that they were institutionalized. The Assistant Commissioner, Office of Human Resources Management (HRM), who was the Acting Associate Commissioner, OOE, at the time OOE closed, said that the primary reason OOE and its Associate Commissioner position were abolished in October 1994 (6 months sooner than planned) was Customs’ headquarters restructuring and reduction of headquarters staff pursuant to Customs’ reorganization plan. Customs’ reorganization was part of its September 1994 People, Processes, and Partnerships report. We categorized the recommendation as “fully implemented” because, while OOE was in existence, Customs fully implemented the panel’s recommended actions. Customs abolished OOE in 1994 and devolved the responsibilities of OOE to Assistant Commissioners, such as the Assistant Commissioner, IA.
Although the panel was silent on whether OOE and the Associate Commissioner position should have been temporary, in testimony during the 1991 congressional hearing on Customs’ blue ribbon panel investigation into allegations of wrongdoing within the agency, the panel’s Chairman stated that it was at the discretion of the Commissioner whether OOE was temporary or permanent. The Assistant Commissioner, HRM, said that when OOE was established, Customs intended for it to remain in existence for 3 years. Customs officials believed that 3 years was sufficient time to institutionalize the panel’s recommendations that were implemented throughout Customs. The Assistant Commissioner, HRM, believed that at the time OOE was closed, the implemented panel recommendations had been institutionalized throughout Customs. According to the Director, Office of Planning and Evaluation, Customs’ December 1992 report that described the progress Customs had made in implementing the panel’s recommendations was taken into consideration when the decision was made to close OOE.

Management recommendation 2: Customs should establish a strong and viable management inspection program to evaluate and monitor all aspects of the organization. Office inspections should be comprehensive, covering both operations and resource management, and should occur at least every two years. In addition, the results of inspections should be factored into key managers’ performance evaluations. It is recommended that this function be placed in the newly established Office of Organizational Effectiveness. (See the Internal Affairs section for details of the proposed inspection program.)

Written response: The Office of Management Inspection (OMI) was established under OOE in April 1992 with the mission of conducting periodic and comprehensive inspections of Special Agent in Charge (SAC) and District offices to evaluate: (1) management systems, practices, and effectiveness; and (2) compliance with laws, policies, and regulations. OMI’s primary goal was to ascertain the health of the organization through “independent” evaluation of effectiveness, i.e., mission performance, resource utilization, internal/external relations, and management controls. Relevant Blue Ribbon Panel issues such as managerial effectiveness, performance indicators, and supervisory, employee, and outside agencies (including U.S. Attorneys) concerns were incorporated into the inspection process.

Partially implemented.

Lack of resources has precluded implementation of comprehensive inspections at least every two years as recommended. However, each SAC office receives a comprehensive, spot-check, or special assessment every two years. We determined that this recommendation was partially implemented because Customs did establish a management inspection program; however, it did not conduct comprehensive inspections at least every 2 years because Customs decided it would be too costly. Furthermore, OI does not factor the results of inspections into key managers’ performance evaluations.

The abolishment of OOE placed OMI under IA and renamed it the Management Inspections Division (MID). MID efforts are now heavily concentrated on reviews of OI operations. MID operations must be re-evaluated in light of the transformation of the field structure from regions to CMCs, the implementation of new measurement systems, and the introduction of business process improvement techniques to analyze our processes.
Updated response: An Office of Planning and Evaluation (OPE) official said that there is no agency policy requiring that inspection results be compared to supervisory and managerial performance. An OI official said that at least for the time he had been in his position (since June 1994), OI had not used the management inspection reports when rating the SACs. IA’s Director, Management Inspections Division, said that Customs is not yet factoring the results of inspections into key managers’ performance evaluations. He also stated that he did not believe that the comprehensive inspections needed to be done every 2 years.

Management recommendation 3: Managerial and supervisory performance should be scrutinized carefully, objectively and openly.

-Standards for supervisory performance should be communicated clearly and frequently. Professional conduct and managerial performance guidelines should be established and communicated, particularly within the Offices of Enforcement and Internal Affairs.

-The newly recommended inspection process should include interviews with managers that cover subordinate supervisors’ performance, which should then be compared with annual performance ratings.

-Results of the inspection should also be compared with supervisory and managerial performance ratings. Managers who have failed to address known performance deficiencies in subordinates should receive low ratings in applicable elements of their performance plan.

-Identified performance problems should be dealt with openly. If necessary, managers who have lost effectiveness in their particular position (but whose performance may not warrant more severe action) should be reassigned out of their organization.

Written response: Several memorandums have been distributed to Assistant and Regional Commissioners, District Directors, and mid-level managers which communicated the standards for supervisory performance.

Partially implemented.

IA’s Management Inspection Division reviews performance appraisal as a core area during comprehensive management inspections. This process aids in determining if the performance management system is working properly. Managers have been reassigned where it has been determined that they have lost effectiveness in their positions.

Updated response: Regarding communicating standards for supervisory performance, Customs implemented a new agencywide performance management system for supervisors and managers effective April 1, 1996. The system is designed to encourage communication. The ratee’s performance is to be discussed at least three times a year. Discussion topics are to include (1) accepting and conducting “responsibilities in accordance with formally issued Customs values, ethics and integrity guidelines”; and (2) human resource management. The provision for communicating standards of supervisory performance and establishing and communicating guidelines for professional conduct and managerial performance was fully implemented. Also fully implemented was the portion of the recommendation dealing with identified performance problems and reassigning managers, if necessary, who have lost their effectiveness. The management inspection process does not automatically include a review of supervisors’ performance, including the ratings. If, when doing the preinspection survey work, the Office of Internal Affairs identifies a potential problem with performance appraisals, it will include them as part of its inspections; otherwise, it does not.
It is not mandatory that inspection results be compared to supervisory and managerial performance. OI, for example, does not review management inspection reports when rating its SACs. The other two portions of the recommendation were not fully implemented. The review of the performance appraisals through the management inspection process was partially implemented because it was not done as a routine part of each inspection. The results of inspections were also not being used routinely in supervisory and managerial performance ratings. The performance management system implemented April 1, 1996, “stresses early intervention” so that “minor performance problems can be corrected . . . before they turn into more serious problems.” If that fails, then the supervisor is to develop and issue a written plan for improvement and clarify in writing the expectations the employee is not meeting. According to an official in OPE, while Customs does not have a policy to reassign managers out of their organization when they have lost their effectiveness, the agency can reassign staff and has done it for this reason.

Management recommendation 4: Accountability measures and specific goals should be the cornerstone of performance plans and ratings of key managers.

-Regional and Assistant Commissioners should review merit pay and SES performance plans for 1991/92 to determine if plans include sufficient elements to cover accountability for organizational performance, including management of all resources and assets, and effective communication with subordinate managers.

-Raters of key managers (e.g., Special Agents in Charge, District Directors) should solicit input from other appropriate parts of the Customs organization (e.g., those who are provided operational support or receive services from the manager).

-The Commissioner should convene a balanced and impartial board, chaired by the Associate Commissioner for Organizational Effectiveness, to perform post-audit review of ratings issued on key managers (SACs, DDs) and senior executives. The board should review and compare ratings both within the executives’ respective hierarchies and across organizational and program lines, and report to the Commissioner on its findings.

Written response: Customs has aggressively pursued corrective action to improve the performance evaluation system for executives and managers through a series of actions. Instructions were issued to ensure that performance plans for SES and merit pay employees were linked to Customs goals and objectives as presented in the Customs Five Year Plan. In addition, managers were instructed to include quantifiable performance criteria and milestones in their plans. Annual goals memorandums have instructed SES employees that mid- and year-end self-assessments must address each expected performance objective and must describe achievements. In addition, Assistant and Regional Commissioners were instructed to review plans for these elements.

Partially implemented.

As a check to ensure that this accountability mechanism was implemented, SES plans for the 1991-92, 1992-93, and 1993-94 cycles were reviewed by a Performance Appraisal Review Committee. In addition, former merit pay employees were directed to include specific elements that addressed organizational performance and management of resources and assets.
Additionally, a special task force of District Directors and representatives from the Office of Investigations (OI) and IA was convened to revamp and revitalize the merit pay performance standards for key managerial positions. One of the objectives of the task force was to ensure that the plans reflected Customs priorities and to provide a clear, consistent and objective framework for evaluation, which included quantifiable national standards.

We categorized this recommendation as partially implemented because, although the Regional and Assistant Commissioners were tasked with reviewing performance plans as recommended and were to ensure that the plans were linked to Customs’ goals and that they had sufficient accountability measures, the other portions of the recommendation were less than fully implemented. Customs disagreed with and did not implement the portion of the recommendation for raters to solicit input from managers in other parts of the organization. Also, while Customs developed a plan for the post-audit reviews, it did not implement it.

One recommendation that was not adopted in this area is the requirement to solicit input from managers in other organizations for performance ratings of key managers. After careful review of this suggestion, Customs officials felt that such an approach would actually serve to diminish accountability within the managers’ chain of command. The subjectivity of various managers who are not ultimately responsible for the performance of the manager being rated would undermine efforts for objective and quantifiable evaluation against predetermined standards.

Updated response: According to information provided by Customs at an April 1992 congressional hearing, the Regional and Assistant Commissioners were instructed to review performance plans to see if accountability measures were sufficient. The review was to cover organizational performance and management of resources. According to former OOE officials, although Customs officials had discussions about doing post-audit reviews, the reviews were never done.

Management recommendation 5: The organizational structure of the Office of Enforcement should be realigned to provide a clear line of authority.

-The Assistant Commissioner (Enforcement) should be in charge of all assets, including air, marine and human resources.

-When assets fall within the jurisdiction of a SAC office, they should be under the SAC’s control (e.g., boats, airplanes). The Panel considers this to be a basic tenet for effective law enforcement management, and recognizes that it requires a Servicewide review of the Customs Air Program field structure.

Written response: A private contractor conducted a Service-wide review of the Customs Air Program. The study recommended that air and marine resources remain within OI, but did not support the Blue Ribbon Panel recommendation that air/marine resources report directly to the SACs. The Air Branch Chiefs and SACs work closely together to insure that the overall Customs enforcement mission is met.

Partially implemented.

Updated response: The Office of Enforcement’s organizational structure was realigned in October 1991, establishing direct line authority from the Assistant Commissioner to the SACs. According to an official in the OI Office of Policy and Oversight, the Assistant Commissioner for Investigations is in charge of the air and marine programs and human resource assets. The marine assets are under the SACs’ control; the air resources are not.
We categorized this recommendation as partially implemented because, although the Assistant Commissioner for Investigations is in charge of the assets enumerated in the first part of the recommendation, the SACs do not have control of the air assets as recommended in the last part of the recommendation.

Management recommendation 6: Regional and SAC office structures and reporting systems should be realigned.

-SACs should report to the Assistant Commissioner (Enforcement), through subordinates if so designated by the Assistant Commissioner.

-The regional enforcement structure, as now constituted reporting to the Regional Commissioner, should be eliminated and substituted with the authority of the Assistant Commissioner (Enforcement), nationally.

-The Assistant Commissioner (Enforcement) should review and redesign regional structures, SAC office designations and boundaries as necessary to insure a streamlined reporting system and to promote efficiency.

Management recommendation 7: The selection process in OE for recruitment, promotion and reassignment should be revised to establish systems (e.g., career boards) which insure that personal relationships cannot be used as a basis for action or inaction. (See Office of Enforcement section.)

Written response: The organizational structure of OI was realigned in October 1991, establishing direct line authority from the Assistant Commissioner to the SACs. Regional layers of management and support personnel were phased out over a period of several months. Additional realignment to reduce the supervisor/employee ratios is ongoing with a reduction of SAC offices to occur in October 1995.

Fully implemented.

Updated response: In October 1995, the number of SAC offices was reduced from 27 to 20 to reflect Customs’ realigned field structure that became effective at that time. According to the OI Director, Office of Policy and Oversight, OI most recently completed a review of its field office structures in early 1996.

Written response: The selection process in OI was revised by establishing a network of field recruiters. A centralized control process over the evaluation/selection process was established to ensure consistency in hiring practices. All selection decisions are currently made at the Headquarters level. The establishment of a career board is still under review by OI.

Fully implemented.

Updated response: According to the OI Director of Administration, the hiring process referred to in the written response involves a process in which panel members review applicants’ paperwork and make recommendations to the Assistant Commissioner for Investigations. According to the OI Director, Office of Policy and Oversight, promotion decisions to grade 13 are made by the Deputy Assistant Commissioner for Investigations. Promotion decisions to the GS-14 and GS-15 levels are made by the Assistant Commissioner for Investigations. Before OI received line authority, promotion decisions to the GS-13 level were made in the field. Reassignment decisions into and out of SAC offices are made by headquarters, not the SAC. SACs also cannot move staff within their approved office structure without the Assistant Commissioner’s approval.

Management recommendation 8: With the change to line authority in the Office of Enforcement, Customs should examine the role of intelligence to assure that the intelligence product effectively serves all of the Customs components.
In addition, Customs should ensure that its intelligence function is centrally controlled, professionalized, and effectively participates in and contributes meaningfully to intelligence products and activities at the national level.

Written response: OI Field Area Intelligence Units were established in regional cities, under the line authority of the SAC and the functional authority of the Director of Intelligence, to provide for national oversight with continuing intelligence support to regional organizations.

Fully implemented.

Updated response: In 1992, Customs stated that following the institution of line authority, the Assistant Commissioner for Enforcement convened a multidisciplined group of managers to examine the role of intelligence and the impact of the new organizational structure. That document also stated that several actions were taken to provide greater professionalism within the intelligence function, including the development of a Basic Intelligence Analyst Training Course, an on-the-job training handbook for intelligence analysts, and performance standards for Intelligence Research Specialists. Customs officials told us in February 1996 that Customs had hired an outside contractor to conduct a study on intelligence. According to OI’s Director, Office of Policy and Oversight, the study’s estimated completion date is April 1997.

Management recommendation 9: [To assist in explaining our categorization of this recommendation, GAO added the (A) - (F) designations in the recommendation.]

-(A) Allegations against managers should be investigated and resolved promptly.

-(B) It is recommended that such allegations be reported to and acted upon by the Office of Internal Affairs.

-(C) Managers who are the subject of allegations should be notified immediately and interviewed as a routine part of the investigation.

-(D) Based on the nature and substance of allegations, managers may be temporarily removed from their position.

-(E) Once the investigation is completed, the manager should be notified promptly of the results and the proposed action.

-(F) Unsatisfactory managers should be removed promptly from their position and locality. Permanent replacements should be assigned as quickly as possible.

Written response: Criminal and serious misconduct allegations against managers are reported to IA and investigated promptly by IA or the Office of Inspector General (OIG) as appropriate. Allegations involving less serious instances of misconduct are referred to management for inquiry. As part of the IA investigation, managers are routinely notified of the investigation and disposition in accordance with new exoneration procedures [see INTEGRITY, recommendation (3)]. Decisions to detail or remove managers from their positions are made on a case-by-case basis depending on the nature of the mismanagement and the supporting evidence.

Partially implemented.

Updated response: The panel recommended that IA notify accused managers of allegations made against them. However, the Special Assistant Commissioner of IA stated that subjects of investigations are not always notified that they are being investigated. Our review of the IA Special Agent Handbook noted that for criminal investigations, upon the advice of the Assistant U.S. Attorney, IA interviews the accused. If the Assistant U.S. Attorney advises against such notification, IA does not interview the manager. A Customs directive dated November 5, 1993, included time frames for notification of investigation results.
IA officials also told us that the Disciplinary Review Board was recently established to address disciplinary actions. Customs fully implemented parts of this recommendation; namely: (1) on the basis of the nature and substance of the allegations, managers may be temporarily removed from their positions; (2) once the investigation is completed, the manager should be notified promptly of the results and the proposed action; and (3) unsatisfactory managers should be removed promptly from their positions and locality, and permanent replacements should be assigned as quickly as possible. (See sections (D), (E), and (F) of the recommendation.)

On June 24, 1996, Customs provided an additional written response regarding its implementation of this recommendation. Customs stated that it has an active disciplinary program, but removal of a manager because of inadequate performance requires a number of considerations. The process of removal or reassignment itself is a drawn-out procedure, and the impact on operations must be carefully weighed. The manager must also be given an opportunity to improve his performance, alternative actions must be considered, and a new place in the organization identified. This holds true whether the manager is reassigned or outright removed from Customs. As a result, final action requires considerable deliberation.

However, other parts of this multifaceted recommendation were not fully implemented; namely: (1) Allegations of less serious instances of misconduct are referred to management, not IA, for inquiry. (See sections (A) and (B) of the recommendation.) (2) Subjects of investigations are not always notified that they are being investigated. IA will not notify and interview the manager about a criminal investigation against him if the Assistant U.S. Attorney (AUSA) advises IA not to do it. (See section (C) of the recommendation.)

Management recommendation 10: With the change to line authority in OE, the Commissioner and senior Customs management should take steps to avoid perceptions of separateness and “elitism” between OE and other parts of the Customs organization.

-Customs should examine grade and pay parity between SACs and DDs (taking into account the impact of recent pay reform legislation).

-Each co-located SAC and DD should plan activities and programs to insure that employees in both organizations understand their counterparts’ jobs and priorities.

-Assistant Commissioners, OE Headquarters Division Directors, and the respective Regional Commissioners should participate in the evaluation of SACs.

Written response: A task force consisting of DDs, SACs and personnel specialists was convened to analyze pay disparities between the DDs and SACs. The task force ranked each district and SAC office using criteria such as staffing levels, operating sites, trade complexity and enforcement activity. As a result of the comprehensive analysis, the task force noted there were large, noticeable disparities in pay between DDs and SACs from comparably ranked offices. Customs requested that Treasury upgrade a number of DD positions to SES. The impact of Customs reorganization to CMCs on this issue is unclear at this time.

Partially implemented.

OI managers have been directed to work with other Customs personnel in a coordinated team effort. Additionally, the Customs reorganization, through the introduction of process management and strategic problem-solving concepts, will encourage even greater integration of the disciplines.
Customs fully implemented the first two specific portions of the recommendation. It examined grade and pay disparities between SACs and DDs, and, through the reorganization, has adopted processes that work to bring together OI and other parts of Customs. Customs did not implement the last portion of the recommendation; it did not accept having Assistant Commissioners, Headquarters Division Directors, and the respective Regional Commissioners participate in the evaluations of SACs.

Updated response: According to OI representatives, Customs’ reorganization has raised consciousness about working together. Various aspects of the reorganization, including process management and the strategic problem-solving process, provide opportunities to encourage greater integration between OI and other parts of Customs. Additionally, Customs’ 1994 reorganization report states that one of Customs’ desired states is for there to be “a better understanding by all disciplines and employees of the goals of the organization, and the role that each discipline and organizational element plays in the achievement of those goals.”

Management recommendation 11: Customs should ensure that any future implementation of the management philosophy promoted by the “Excellence” program avoids counterproductive side effects that undermine overall Customs organizational effectiveness.

Management recommendation 12: If the Commissioner determines that there is a continuing need for a special and independent “cards and letters” program, these communications should be referred to the newly established Associate Commissioner for Organizational Effectiveness. The Associate Commissioner, who is not part of the agency appellate process, can administer the program on behalf of the Commissioner. Moreover, the availability and utility of existing systems for addressing concerns, complaints, and problems should be widely advertised throughout the Customs Service and promoted as the proper guarantor of Service integrity.

Written response: The “Excellence” program that existed during the Blue Ribbon Panel review has been replaced by a nation-wide and government-wide move towards a “Partnership” between employees and management.

Fully implemented.

Updated response: Customs officials said that Customs now has its “partnership program,” which went far beyond the “excellence program.” Customs’ reorganization and, in particular, its “partnership” tenets include encouraging teamwork and involving all of Customs. Because the panel did not define much of this recommendation, we made our categorization assessment using the panel’s finding that identified the problems the recommendation was to address.

Written response: During its existence, OOE managed the Commissioner’s “cards and letters” program. Additionally, other existing systems for addressing employee concerns were widely advertised and promoted. Since the abolishment of OOE, communication of this nature has been referred to management, IA or OIG as appropriate for inquiry or investigation.

Fully implemented.

Updated response: Customs officials told us that employees know who to contact now that the cards and letters program is nonexistent. If they send concerns, complaints, or problems to the Commissioner’s office, his staff will look into them or assign them to another office to review.
Office of Enforcement recommendation 1: The organizational structure of the Customs Service should be realigned to provide a clear line of authority throughout the Office of Enforcement, beginning from the Assistant Commissioner through the Special Agent in Charge, to the Agent. Implementation of this reorganization is discussed in the following recommendations and in the Management section of this report.

Office of Enforcement recommendation 2: Customs should establish a professional, national recruitment policy which provides for professional development, agent mobility and loyalty to the institution. Specifically, Customs should avoid home town initial assignments.

Written response: See MANAGEMENT, Action (6) [Customs’ written response to Management recommendation 6 is copied below].

Fully implemented.

The organizational structure of OI was realigned in October 1991, establishing direct line authority from the Assistant Commissioner to the SACs. Regional layers of management and support personnel were phased out over a period of several months. Additional realignment to reduce the supervisor/employee ratios is ongoing with a reduction of SAC offices to occur in October 1995.

Updated response: The reduction of SAC offices that was to occur in 1995, mentioned above, occurred as scheduled. SAC offices were reduced from 27 to 20.

Written response: See MANAGEMENT, Action (7) [Customs’ written response to Management recommendation 7 is copied below].

Partially implemented.

The selection process in OI was revised by establishing a network of field recruiters. A centralized control process over the evaluation/selection process was established to ensure consistency in hiring practices. All selection decisions are currently made at the Headquarters level. The establishment of a career board is still under review by OI. Additionally, home town initial assignments are avoided whenever possible depending on funding and the needs of the service.

Updated response: According to an OI official, some other aspects of the recruitment process include formalized training for the field recruiters and use of a standardized interview procedure. We categorized this recommendation as partially implemented because OI established a national recruitment program and avoided home town initial assignments, only making an exception in a large metropolitan area. OI had taken action on the professional development and agent mobility programs but had not implemented them. In addition to the recruitment process described above, there is a draft Office of Investigations and Internal Affairs Career Development, Mobility, and Hardship Policy Handbook. At the time of our work, the handbook was about to be revised to reflect, among other things, Customs-wide and OI-specific reorganizations. The handbook outlines policies and procedures for a special agent career development program and a mobility program. According to the Assistant Commissioner for Investigations, OI tested the mobility/reassignment program but had not formally implemented it as of April 1996. According to OI’s Director of Administration, no written policy prohibiting hometown assignments exists. As a direct result of the panel’s report, however, OI would make a home town assignment only if it were to a large metropolitan area.

Office of Enforcement recommendation 3: Customs should establish a mobility policy and career path that include the following features:

-A newly hired agent should be required to sign a mobility agreement.
-A three year assignment to the first post of duty should be required. The Service should make efforts to expose newly hired agents to all major investigative areas (fraud, strategic, smuggling and financial) by placing them in medium or large offices.

-In the first year, a newly hired agent should be assigned to a senior agent mentor.

-After three years, an agent should be placed in a central pool of agents eligible for transfer, and such transfer should be determined by the needs of the Service.

-Subject to financial and program restraints, an agent should be transferred in his/her fourth year.

-Journeyman agents should be given the opportunity to elect the “management career track.” Those who have opted for the management career track will be required to act as relief supervisors, and serve a tour in Headquarters OE and a separate tour in the Headquarters Office of Internal Affairs. Their progress will be continually reviewed by a central career board. Future OE managers must complete the management career track.

Written response: A mobility policy incorporating the features set forth in the Blue Ribbon Panel Report has been drafted. Implementation of the policy has been delayed due to the high cost associated with such extensive mobility features and funding restrictions within the Customs Service.

Partially implemented.

A mentor program has been established whereby senior agents serve as mentors for all new agents and ensure appropriate training is received. The portion of the recommendation to assign a mentor to new special agents was fully implemented. All other portions of this recommendation have not been implemented, although several were under review or testing when we completed our work.

Updated response: According to the OI Director, Office of Policy and Oversight, agents do not have to sign a mobility agreement. However, OI officials told us that the special agent vacancy announcements and position descriptions state that the agents are subject to relocation. There is no requirement for the first post of duty assignment to be for 3 years, nor are new agents placed only in medium or large offices. According to OI’s Director of Administration, the two structured ways in which new agents are exposed to the major investigative areas are basic training and the mentor/on-the-job training program. The On-the-Job Training Handbook states that a senior agent mentor is to be assigned to each new agent. According to OI’s Director of Administration, agents are not moved after 3 years in their first post of duty. The mobility policy OI tested was based on office performance. The policy involved identifying, through OI’s performance measurement system, offices that should gain or lose staff, then soliciting volunteers to move to the gaining office, giving priority consideration to staff from offices identified to lose agents. OI’s draft mobility policy states that OI would fill vacancies through new hires, voluntary reassignments, and involuntary reassignments that would be used in the absence of qualified volunteers. According to OI’s Director of Administration, the office examined the costs of various mobility policies. It found that the office did not have the money to fund moves on a routine basis. According to OI’s Director of Administration, OI does not have a “management career track.” However, OI has proposed, in its draft handbook, a career development program. The draft program does not mandate a particular path.
Among other things, it recognizes completion of specified career-enhancing assignments, including assignments to headquarters OI and Internal Affairs. The draft program entails the use of a Career Review Board in the agent promotion process.

Office of Enforcement recommendation 4: The selection process and reassignment policy should incorporate the Panel’s recommendations for the new agent hiring policy, the mobility requirement, and a career path for managers. We also recommend developing a “career board” concept for selections of GM-14’s, based on SAC recommendations, the board and the Assistant Commissioner (Enforcement). Although the Executive Resources Board evaluates candidates for GM-15 positions, the Office of Enforcement should have the career board review the pool of applicants for SAC positions, including the career development needs of current GM-15’s, as well as promotion applicants, based on improved rating systems (described below), office inspections, prior enforcement experience and the management career path.

Written response: Most employees hired as special agents are reassigned to locations other than where they grew up or had extended work experience, although lack of funding precludes transferring all new hires to new work locations. A career path for managers is in development; however, mobility and training are seen as key elements in that design. An aggressive training program has been designed with the assistance of National Louis University (NLU). In June 1995, 22 of OI’s key managers graduated with Masters Degrees in Science and Management from NLU.

Not implemented—action taken.

Mobility remains a problem within OI. In the past, Customs’ budget could not absorb the cost of the number of moves required for a well-developed career path program. With the recent funding cuts, it is even less likely that we will be able to implement a true career path for managers. OI is not currently utilizing a “career board” for selection of GS-14 and GS-15 employees due to the extensive reduction of promotions to those levels.

Updated response: According to an OI official, OI instituted a recruiting and hiring process that involved, among other things, field recruiters undergoing a formalized training course to recruit and interview prospective special agents and standardized interviews of potential hires in the field. Selections were made by a headquarters selecting official. The remainder of the recommendation was not implemented. OI, however, had either taken actions that responded to the recommendation or decided not to implement that part of the recommendation. OI took action by developing draft mobility and career development programs. The career development program included a career board concept for selection of GS-14s. OI decided not to develop a career board concept for the SAC positions. OI has drafted a mobility policy, as described under the Office of Enforcement recommendation 3 updated response. OI examined the cost of various rotation policies and determined it did not have the funding available for one with the features recommended by the panel, according to the OI Director of Administration. OI has drafted a Career Development Program, as described under the Office of Enforcement recommendation 3 updated response. It incorporates some of the ideas in the panel’s recommendations for a management career track.
The draft program incorporates the use of a career review board that would make promotion recommendations to the Assistant Commissioners OI/IA for GS-13 and GS-14 positions. According to the OI Director, Office of Policy and Oversight, OI is not using and does not plan on using a career board concept for SAC positions. According to the draft Career Development Program policy, an Executive Resources Board will be used for selection to GS-15 and Senior Executive Service positions.

Office of Enforcement recommendation 5: The Office of Enforcement should develop an aggressive outreach program to encourage career advancement for minorities.

Written response: OI has recruited minorities and women to participate in NLU programs.

Substantially implemented.

Updated response: OI officials provided the following as additional examples of what OI has done to encourage career advancement for minorities.

-In the 2-week recruiter training course it has emphasized hiring minorities.

-It has ensured a representation of females and minorities on assignments to IA’s Management Inspection Division, headquarters OI, and the Discipline Review Board.

Because this recommendation focused on developing an “aggressive outreach program” and the panel did not define what the program should consist of, we could not determine the degree to which Customs had implemented the recommendation. We determined, however, that Customs took actions that appeared to be generally consistent with the purpose of the recommendation, and therefore we categorized the implementation as “substantially implemented.”

Office of Enforcement recommendation 6: Employee Performance Appraisal System (EPAS) employees should be held accountable to their EPAS plans, and problem employees should be properly rated and given a Performance Improvement Plan (PIP) where appropriate. Managers should be rated based on a wide variety of input. The Management section of this report recommends a new OE field structure, and the rating system described here is an important part of that structure. A SAC’s performance should be rated under the new structure by the Assistant Commissioner (Enforcement) with input on a matrix type evaluation sheet from all Assistant Commissioners and Enforcement Division Directors, who in turn receive input from their staff. In addition, in order to maintain effective field relationships, the Regional Commissioner in whose specific area the SAC is located, his staff, and the District Director in that area should provide rating input. Correspondingly, the SACs should rate their subordinate managers, based on this matrix approach and with input from the District Director.

Written response: Line authority has provided the basis for improved accountability through performance evaluation. Employees are placed on performance improvement plans when their performance warrants such action. Standardized performance plans for SACs place appropriate emphasis on national program objectives, management responsibilities and quantifiable performance standards. As indicated in MANAGEMENT, Recommendation (4), Customs chose not to adopt the recommendation to solicit input of other managers in the rating of SACs because it most likely would dilute the accountability and objectivity of the rating process.

Partially implemented.
We categorized this recommendation as partially implemented because, although Customs fully implemented the portion of the recommendation concerning use of improvement plans, it did not accept the recommendation for using a matrix approach to ratings or soliciting input from other managers in the rating of SACs.

Updated response: According to an OPE official, under the new performance management system for EPAS employees (now called Employee Proficiency Review (EPR) employees) that Customs was implementing as we were doing our work, the employees will not have EPAS plans. These employees’ EPR forms cover four core competency areas—job knowledge, technical skills, professional application, and working with others—for which they will have more simplified plans. The employee and his/her supervisor are to discuss these areas at least three times during the year—at the performance planning meeting, one purpose of which is to establish a common understanding of performance expectations; at ongoing review meeting(s); and at the annual proficiency review meeting at the end of the review year. According to instructions for completing EPRs, supervisors are to recognize deficiencies in performance and determine the causes as soon as they become evident. When deficiencies continue, the supervisor is to develop and issue an Employee Proficiency Plan (EPP). Under the new system, the EPP has taken the place of a Performance Improvement Plan.

Office of Enforcement recommendation 7: The Panel recommends that the Office of Enforcement inspection process be abolished and subsumed by the new inspection process recommended in the Management and Internal Affairs sections of this report. This process should focus on implementation of the changes called for in this report, such as hiring, mobility, career path and affirmative action.

Written response: OI’s inspection process has been abolished and subsumed into MID’s inspection program. See MANAGEMENT, Recommendation (2).

Partially implemented.

The Management Inspection Division does not look at hiring, mobility, career path and Affirmative Action during field management inspections as those areas are largely centralized at the Headquarters level. The Office of Investigations has, however, made a large effort at the national level in the area of minority recruitment.

Updated response: According to the Director, Management Inspections Division, Office of Internal Affairs, the inspections do not automatically cover each of these areas. IA could identify one of these areas as an issue in an inspection report if it determined it was the cause of a problem in a SAC office. IA does not have a policy to routinely conduct headquarters inspections. It does them at the request of an official at the Assistant Commissioner level or above. We categorized this recommendation as partially implemented because, although the Office of Enforcement inspection process was abolished and subsumed by the IA inspection process, IA’s process does not automatically focus on the other portions of the recommendation. Additionally, the panel recommended that the inspections be done at least every 2 years. According to the Director, Management Inspections Division, the SAC office inspections are scheduled for every 3 or 4 years, with follow-up inspections and spot-checks to be done after the comprehensive inspections.
Internal Affairs recommendation 1: The Office of Internal Affairs must take a pro-active role in agency leadership to ensure the real and perceived institutional integrity of the Customs Service.

Written response: The integrity of the Customs Service has been strengthened through the development and implementation of a comprehensive integrity training program which stresses the obligation of employees to report alleged misconduct to Internal Affairs. During FY 1992, over 92% of the Customs workforce received this training, and every basic training course at the Customs Academy incorporates an integrity module. Integrity training is also included in supervisory and managerial training courses.

Substantially implemented.

Because this recommendation focused on IA taking a proactive role in agency leadership and the panel did not define what such a role should consist of, we could not determine the degree to which Customs had implemented the recommendation.

The Office of Internal Affairs’ request for funds to conduct updated integrity and ethics training was denied. Integrity and ethics training is given to new supervisors and front-line operations positions at FLETC in Glynco.

Updated response: IA’s Special Assistant Commissioner believed the following actions represented IA taking a “pro-active” role in agency leadership to ensure Customs’ integrity: (1) In July 1996, IA will be providing special HARDLINE training to the field that includes information on Customs’ integrity policy and how to report allegations. IA agents will be teamed up with OI and Field Operations personnel to be trained in the new course and then teach it throughout the field. Funding has been obtained to train Customs’ field office personnel from San Diego to Miami to Puerto Rico. (2) IA’s new mission statement states, in part, “every Customs employee has the right to work in an environment free of corruption, misconduct, or mismanagement.” (3) The Commissioner’s directives on guidelines for reporting allegations to IA were issued to all supervisors and managers. (4) IA set up a 24-hour hotline for employees and the public to report any allegations. We determined, however, that IA implemented a number of actions that could be construed as proactive and that generally appeared to be consistent with the purpose of the recommendation.

Internal Affairs recommendation 2: The Office of Internal Affairs should be reinforced and restructured to ensure that the organization is designed to accomplish its expanded mission. In developing this structure, Customs should consider models for handling internal affairs that already exist in other law enforcement agencies, such as the FBI and DEA. This new organization should include an Office of Professional Responsibility and an Office of Inspections.

Written response: The IA structure at HQ has been reorganized to provide the basis for stronger direction and more centralized control over investigations. This structure includes an Internal Investigations Division and an Investigative Programs Division, modeled on other law enforcement agencies’ IA organizations. Desk officer positions, which provide assistance to IA field activities and monitor investigations, have been established. An intelligence function has also been established to analyze allegations and investigations and to conduct threat assessments.

Fully implemented.
Updated response: In IA’s description of its intelligence group, it documented that in 1992 IA had an Office of Professional Responsibility (OPR) that is now referred to as its Internal Investigations Division. We determined that Customs fully implemented this recommendation even though it no longer has an OPR because the functions of OPR are carried out by the desk officers in IA’s Internal Investigations Division. In addition, Customs has a Management Inspections Division. An Internal Affairs official said that due to Customs’ reorganization and attendant downsizing, IA eliminated the layer of Director of OPR but retained its functions within IA’s Internal Investigations Division. Desk officers perform the oversight functions for IA’s OPR. Desk officers track case management activities, review for quality and comprehensiveness of cases, check on the timeliness of cases, and act as conduits for information. According to Customs’ organization handbook, the Office of Internal Affairs also has a Management Inspections Division responsible for “developing and coordinating a unified and broad-based approach to the implementation of management inspection and undercover audit programs. These programs gauge the effectiveness and efficiency of managers, processes, strategies, and special interest initiatives.”

Internal Affairs recommendation 3: The Assistant Commissioner for Internal Affairs should report to the newly established Associate Commissioner for Organizational Effectiveness.

Written response: During OOE’s existence, IA was reorganized so that it reported directly to the Associate Commissioner for OOE. Since OOE’s abolishment, the Assistant Commissioner (IA) reports directly to the Commissioner at the same organizational level as other Assistant Commissioners.

Fully implemented.

We categorized the recommendation as “fully implemented” because, during the existence of the Office of Organizational Effectiveness (OOE), the Assistant Commissioner for Internal Affairs did report to the Associate Commissioner of OOE. Customs abolished OOE, and IA’s Assistant Commissioner now reports directly to the Commissioner. Although the blue ribbon panel report was silent on whether OOE should be temporary, in testimony during the 1991 congressional hearing on Customs’ blue ribbon panel investigation into allegations of wrongdoing within the agency, the panel’s Chairman stated that he believed it was at the discretion of the Commissioner whether the Associate Commissioner position in OOE was temporary or permanent.

Internal Affairs recommendation 4: The panel considers it critical that Customs establish a comprehensive and aggressive internal inspection program with responsibility placed in the Headquarters Office of Internal Affairs, under the Associate Commissioner for Organizational Effectiveness.

-Inspections should cover management, operations, Customs agenda, personnel, internal controls and all other matters which affect the efficiency and integrity of the organization being inspected.

-Routine inspections should be conducted every 18-24 months of all Customs offices (e.g., SAC, District, Region, Headquarters).

-Ad hoc teams of investigators should be dispatched when allegations require such action.

-Inspection findings should be communicated by the Associate Commissioner for Organizational Effectiveness to the responsible Regional/Assistant Commissioner, with a copy to the Commissioner.
-The Associate Commissioner should follow up and monitor corrective action on behalf of the Commissioner to ensure compliance.

-Inspection findings should be a significant consideration in evaluating senior key manager performance.

Written response: See MANAGEMENT, Recommendations (2 and 3) [Customs’ written responses to Management recommendations 2 and 3 are copied below].

Partially implemented.

The Office of Management Inspection (OMI) was established under OOE in April 1992 with the mission of conducting periodic and comprehensive inspections of Special Agent in Charge (SAC) and District offices to evaluate: (1) management systems, practices, and effectiveness; and (2) compliance with laws, policies, and regulations. OMI’s primary goal was to ascertain the health of the organization through “independent” evaluation of effectiveness, i.e., mission performance, resource utilization, internal/external relations, and management controls. Relevant Blue Ribbon Panel issues such as managerial effectiveness, performance indicators, and supervisory, employee, and outside agencies (including U.S. Attorneys) concerns were incorporated into the inspection process.

We categorized Customs’ implementation of this multifaceted recommendation as “partially implemented” because Customs fully implemented portions of this recommendation, but it did not fully implement other parts.

Lack of resources has precluded implementation of comprehensive inspections at least every two years as recommended. However, each SAC office receives a comprehensive, spot-check, or special assessment every two years. The abolishment of OOE placed OMI under IA and renamed it the Management Inspections Division (MID). MID efforts are now heavily concentrated on reviews of OI operations. MID operations must be re-evaluated in light of the transformation of the field structure from regions to CMCs, the implementation of new measurement systems, and the introduction of business process improvement techniques to analyze our processes. Several memorandums have been distributed to Assistant and Regional Commissioners, District Directors, and mid-level managers which communicated the standards for supervisory performance.

Customs fully implemented the panel’s recommended actions regarding (1) inspections covering management, operations, Customs’ agenda, personnel, internal controls, and all other matters that affect the efficiency and integrity of the organization being inspected; and (2) ad hoc teams of senior-level investigators being dispatched when allegations require such action.

IA’s Management Inspection Division reviews performance appraisal as a core area during comprehensive management inspections. This process aids in determining if the performance management system is working properly. Managers have been reassigned where it has been determined that they have lost effectiveness in their positions.

On the basis of Customs officials’ statements, we determined that Customs did not fully implement two parts of this recommendation: (1) Routine inspections of all Customs offices are not conducted every 18 to 24 months. Customs conducts comprehensive inspections of offices every 3 or 4 years, not every 18 to 24 months. (2) Inspection findings are currently not a significant consideration in evaluating senior key manager performance.
Updated response: In its written response to Integrity recommendation 1, Customs stated that it has trained groups of senior-level agents in the Office of Investigations known as flying squads to conduct high-priority investigations at locations throughout the country under the direction of IA. The Associate Commissioner followed up and monitored corrective action on behalf of the Commissioner to ensure compliance. In addition, Customs’ report to the panel in 1992 on its implementation of the panel’s recommendations stated that the “recommendation for a management inspection program in IA has been modified slightly to establish a separate Office of Management Inspections, reporting directly to the Associate Commissioner. The establishment of a separate office provides even greater independence and highlights the importance of the new inspection program. A comprehensive program for inspection of Customs district and SAC offices has been implemented, addressing management, operations, and compliance. The Commissioner, Deputy Commissioner, and senior managers have been briefed on the results of every inspection and have demonstrated continuing interest and commitment to the program. Just as significantly, the results of inspections have been well received by the inspected organizations. While resources have not permitted a two-year cycle for inspection, alternative means of evaluating field offices are now being explored. In addition to comprehensive on-site inspections, the Office of Management Inspections (now Management Inspections Division) has been called upon to respond to specific allegations and concerns by conducting single-issue reviews.” Customs partially implemented one part of this recommendation while OOE was in existence: it established the internal inspection program but placed responsibility for that program under OOE rather than Internal Affairs. With the abolishment of OOE, responsibility for these inspections was placed in IA, as recommended. Inspection findings were communicated by the Associate Commissioner for Organizational Effectiveness to the responsible Regional/Assistant Commissioner, with a copy to the Commissioner, according to Customs’ 1992 report on its implementation of the panel recommendations. The Director of IA’s Management Inspections Division (MID) said that there are over 300 offices to inspect and that decisions on which offices would get comprehensive inspections are based on input from IA’s Intelligence Group within its Internal Investigations Division. Furthermore, he said he did not agree with the panel’s recommendation to do comprehensive inspections of all offices every 18 to 24 months. He intends to conduct comprehensive inspections for SAC offices every 3 or 4 years. He believed that doing such inspections for every office once every 3 or 4 years was sufficient because the comprehensive inspections are followed by follow-up inspections and spot-checks. He also said that if a problem exists at an office, MID conducts a comprehensive inspection sooner than the scheduled 3 or 4 years. Regarding communication of inspection findings, the Director of MID also said IA briefs the head of the office inspected right away and gives him/her a copy of the report on site. The office head has the opportunity to respond to any deficiencies. Then the MID official briefs the Assistant Commissioner of the office inspected.
IA officials also brief the Commissioner within 2 weeks from completion of comprehensive inspections and follow up with a copy of the inspection report to the Commissioner, Deputy Commissioner, and IA Assistant Commissioner. An Office of Planning and Evaluation official said that there is no agency policy requiring that inspection results be compared to supervisory and managerial performance. The Director of MID said Customs does not yet use inspection findings when evaluating SACs’ performance. He believes the new appraisal system should help Customs move in that direction. (See Office of Enforcement recommendation 6.) An Office of Investigations official also said that at least since June 1994, OI has not used the management inspection reports when doing SACs’ ratings. The Commissioner’s 1992 testimony before the Commerce, Consumer, and Monetary Affairs Subcommittee, House Committee on Government Operations, on the panel’s recommendations included responses on Customs’ implementation of the recommendations. A portion of Customs’ response stated that OOE was responsible for inspection follow-up. Internal Affairs recommendation 5: The Office of Internal Affairs should be responsible for investigating matters relating to mismanagement, criminal misconduct and serious non-criminal misconduct. Customs should prescribe a policy that determines which non-criminal misconduct should be referred to management for investigation. Written response: New systems and procedures were implemented to ensure the effective management of all allegations of mismanagement and misconduct. In 1992, a new policy was issued which classified all types of allegations, ranging from criminal misconduct to mismanagement, and defined responsibility for investigations. These procedures have provided greater consistency in handling allegations and establishing investigative priorities. Allegations of misconduct and mismanagement are handled through a variety of approaches, including the use of IA investigators (who investigate all allegations of criminal conduct and serious misconduct), independent factfinders, Management Inspection staff, and joint efforts with Assistant and Regional Commissioners. Fully implemented. With the abolishment of OOE, allegations of mismanagement are now referred by IA to the appropriate management official. Customs continues to provide sufficient training and instruction in integrity and mismanagement issues for managers to conduct inquiries into problems within their operations. This is consistent with National Performance Review recommendations requiring managers to continuously evaluate, correct and improve their own operations. Updated response: An IA official said that with the abolishment of the Office of Organizational Effectiveness, IA is the recipient of mismanagement allegations and determines if IA should conduct an investigation or should refer the allegation to management. IA’s Special Assistant Commissioner referred us to the IA Special Agent Handbook, which documents IA’s policy that fully implements the panel’s recommendation. Customs also had a directive dated November 18, 1993, that formalized the reporting and processing by managers and supervisors of allegations of misconduct and mismanagement. The IA official said that IA is working further on defining which allegations go to management and which ones stay with IA.
He said managers have been told that when in doubt about where to refer an allegation (to IA or to management), send the allegation to IA for a determination of who should investigate it. Internal Affairs recommendation 6: All reports and allegations of criminal activity, misconduct and mismanagement should be reported to the Headquarters Office of Internal Affairs.
-Customs should advise all employees to report allegations directly to IA. This does not preclude parallel reporting through the supervisory chain of command at the employee’s option.
-Investigators may be assigned to non-criminal matters, at the discretion of the Assistant Commissioner (IA), from outside of IA; however, IA must ensure that in the investigative process, only investigators with no prior association with the office under investigation are assigned to the case. Investigations should be concluded within six months; findings should be shared with the subject of the allegation and reported back to IA at Headquarters.
Written response: Customs employees have been advised to report allegations directly to IA. IA Investigators are trained to recuse themselves when a relationship exists between the investigator and the subject. IA Desk Officers track and provide oversight on all field investigations. Customs policy referenced in INTEGRITY, recommendation (3), provides written notification to employees who were subject of an investigation that the investigation has been concluded and that no disciplinary action will be taken. In the event that disciplinary action will be taken, the subject of an investigation will be notified by the manager via disciplinary letter. IA tracks this information via the Disciplinary Action Tracking System utilized by Labor and Employee Relations. Partially implemented. We categorized this recommendation as partially implemented because several portions of the recommendation were fully implemented, while one provision was not. Updated response: Customs issued a directive regarding allegation reporting and processing on November 18, 1993, directing that allegations be reported directly to IA. IA’s Special Assistant Commissioner said that investigators from outside of IA are assigned to noncriminal matters. He said IA has to rely on the investigator’s integrity to disclose the need to recuse himself from the case if he has a prior association with the office under investigation. He said that IA ensures that investigators make such recusals because they are taught to do so in training, and he believes recusal is covered in their Special Agent Handbook. The provisions for (1) reporting all reports and allegations of criminal activity, misconduct, and mismanagement to IA; (2) advising employees to report allegations directly to IA; (3) ensuring that only investigators with no prior association with the office under investigation are assigned to cases; and (4) sharing investigation findings with the subject of the allegation were fully implemented. The IA official said that IA did not incorporate the panel’s recommendation to conclude investigations within 6 months. He said that the individual case dictates what needs to be done, and the length of the investigation depends on what the case requires. IA’s case management system tracks the length of cases. The Special Assistant Commissioner of IA said that IA has a case tracking system with Customs’ Labor and Employee Relations (LER) group.
According to IA’s Special Agent Handbook, LER is responsible for advising Customs’ management on employee misconduct issues. LER prepares all related correspondence for management, i.e., notices of disciplinary action and grievance responses. LER prepares written notification that is signed by the principal headquarters or field officer. LER officials inform subjects of investigations of IA investigation results and notify employees when investigations have been completed. We noted that this process is documented in a November 5, 1993, Customs Directive on reports of investigations. IA did not fully implement the provision that investigations be concluded within 6 months. IA officials believe that superimposing such a time frame is not feasible, especially for complex criminal cases. IA has a case tracking system to monitor the length of investigations. Internal Affairs recommendation 7: The Treasury Inspector General (IG) and Customs must clarify their formal relationship to ensure that cases controlled by the IG are promptly investigated and the results communicated to Customs for timely resolution. IG investigations should be conducted by the Inspector General’s staff and not delegated to IA investigators. Written response: IA has taken several steps to improve coordination and cooperation with the OIG on investigations. In 1993 the Acting Associate Commissioner (OOE) issued a memorandum to the Assistant Inspector General for Investigations confirming an agreement regarding procedures for referral of allegations and investigative information from the OIG to the Customs Service. Additionally, both IA and the OIG have established desk officers to facilitate a better relationship. Further, an OIG report on cases referred to Customs management in excess of 90 days has contributed to more timely investigations. Fully implemented. Updated response: The Special Assistant Commissioner of IA said the Treasury IG’s Office always conducts investigations of IA, SESers, and GS-15s and above. The IG’s Office also has a hotline and standard referral program. For hotline tips that the IG sends to IA, an IA desk officer determines whether IA or management should investigate the allegations. The Special Assistant Commissioner of IA said that there are certain investigations that OIG categorically has to do, but there are others the IG may refer to IA to do, such as administrative investigations. The Special Assistant Commissioner of IA said “OIG watches IA. It is the IG’s option to return investigations to IA.” We categorized this recommendation as “fully implemented” because the Treasury IG’s and Customs’ IA formal relationship regarding investigations was clarified. The Special Assistant Commissioner of IA said that the memorandum between IA and the IG’s Office referred to in Customs’ written response is in fact a Department of the Treasury Order dated May 16, 1989, that was generated by Treasury, not OOE. The Treasury Order stated that OIG can refer certain allegations to Customs’ Office of Internal Affairs. Treasury’s Senior Special Agent, Office of Investigations, OIG, who coordinates with Customs, said that OIG’s policy and procedure are to refer allegations against Customs’ employees who are GS-14s and below to Customs’ Office of Internal Affairs, unless the employee works in IA. He said that IA determines if the allegation should be investigated by IA or Customs management.
He also said that there is no policy regarding the duration of IG investigations; however, IG guidelines are that administrative investigations be completed within 90 days. There are no such guidelines for criminal investigations. He also told us that Treasury issued a Directive dated September 21, 1992, that covered IG referrals to IA. That Directive states that the IG’s Office can refer investigations that fall within its jurisdiction to IA for investigation by IA or Customs management. Internal Affairs recommendation 8: IA intelligence elements should be established, trained and dedicated to support the IA mission, with special emphasis on corruption threat indicators, to develop sources and methods to obtain information needed by IA investigators and supervisory personnel. Written response: An intelligence function has been established within IA to analyze allegations and investigations and to conduct threat assessments. Fully implemented. Updated response: IA’s Special Assistant Commissioner said that the Intelligence Group was established in April 1992 with responsibilities much broader than those recommended by the panel. He provided a description of the Intelligence Group that stated in part: “The group’s efforts are devoted to four major areas: summary analysis; tactical targeting; liaison; and investigative enhancement. Summary analysis concentrates on trends in allegations and investigations. Tactical targeting provides specific corruption leads to field offices. Liaison includes contact with the intelligence community and other law enforcement intelligence entities for the development of data sources. Investigative enhancement consists of research and analysis in support of ongoing IA investigations.” IA’s Chief, Intelligence Group, said that the group’s intelligence analysts received training dedicated to supporting the IA mission, including training in corruption threat indicators and in developing sources and methods to obtain information needed by IA investigators and supervisory personnel. Internal Affairs recommendation 9: An assignment in Internal Affairs should be included in the established management career paths in the Customs Service. With the exception of Headquarters service, these assignments should require geographical relocations. Moreover, IA should be staffed with senior experienced employees. Written response: Career paths for GS-1811 special agents are still under review. Budgetary restraints have limited Customs’ ability to effect Permanent Change of Station with all reassignments into IA. IA investigators are now journeymen GS-13. Partially implemented. Updated response: On June 24, 1996, Customs provided an additional written response to its implementation of this recommendation. Customs stated that about 2 years ago approximately 75 agents were rotated between IA and OI as an outgrowth of the panel’s recommendation. In addition, a conscious effort was made to make an assignment in IA career-enhancing and to help staff IA with senior agents (only GS-13s rotated from OI to IA). We categorized this recommendation as partially implemented because Customs fully implemented one portion of the recommendation, but it did not fully implement two other provisions of the recommendation. The provision for staffing IA with senior experienced employees was fully implemented. OI’s Director of Administration said that OI has not implemented the career track; however, it has a draft policy for a Career Development Program.
Among other things, the program recognizes completion of various career-enhancing assignments, including ones to IA field offices. IA’s Special Assistant Commissioner said that IA wants agents with at least 5 to 10 years’ experience as investigators; therefore, IA recruits from OI because OI agents also have a working knowledge of Customs. Customs did not fully implement the provisions that (1) career paths should include an assignment in IA and (2) with the exception of Headquarters service, these assignments should require geographical relocations. The Career Development Program is still under review. Training recommendation 1: The Customs Service should establish a formal training program tailored to agents operating on the Southwest border. The program should emphasize the integrity concerns, technical law enforcement skills, and professional development unique to operating on the Southwest border. Written response: Customs has emphasized a formal training program tailored to Special Agents on the Southwest border. Since October 1991, seventy-five percent of all Southwest border agents have attended one or more training classes that have included surveillance, undercover operations, and basic and advanced technical training. Substantially implemented. Updated response: According to the OI Director, Office of Policy and Oversight, at least one training course was tailored for and given to some agents on the Southwest border after the panel’s report. According to the OI Director of Administration, OI does not now have classes tailored specifically for agents on the Southwest border. He said that OI is working on improving training for all agents. OI has retooled its advanced training classes, the subjects of which mirror OI’s four major investigative areas. It is giving a larger number of these classes in the field than it has in the past so that more special agents will have opportunities to take them. According to the Director, Office of Policy and Oversight, each SAC office’s field training officer is to provide training quarterly that can be geared to the office. Customs did not have a formal training program tailored to agents operating on the Southwest border as of May 1996. However, OI is retooling and developing training programs that cover the areas identified in this recommendation—integrity, technical law enforcement skills, and professional development. Additionally, the Office of Internal Affairs will be giving a training program dealing with integrity geared to an enforcement operation along the southern border. We believe these efforts are generally consistent with the purpose of this recommendation but not responsive to the letter of the recommendation. OI also reworked its special agent refresher seminar and has been giving it in the field. Among the topics covered are integrity and ethics, interviewing, legal issues, professionalism, report writing, and undercover operations. According to the OI Director of Administration, OI is developing a leadership/management symposium that will be geared to those who have gone through supervisory training and have been supervisors for a couple of years. Topics to be covered include necessary skills for successful managers, how to lead teams and motivate people, and professional responsibility (including ethics and integrity issues).
The Special Assistant Commissioner, IA, said that in July 1996 the Office of Internal Affairs will be providing training in concert with a Customs operation covering the southern border, Puerto Rico, and the Virgin Islands. The training will be given to special agents and others operating in those locations, and it will include information on Customs’ integrity policy. Training recommendation 2: The Customs Service should require Spanish language proficiency for all agents operating on the Southwest border. Written response: It is not economically or logistically feasible to require Spanish language proficiency of current employees. Not implemented—no action taken. Updated response: According to information provided by Customs at an April 1992 congressional hearing that included testimony on the implementation of the panel’s recommendations, Customs stated that it was difficult to make Spanish language proficiency mandatory and that there were many downsides to requiring it in terms of hiring and retention. Customs is not requiring Spanish language proficiency for all agents operating on the Southwest border. OI is attempting to get funding to train certain agents sent to the Southwest border in the Spanish language. According to the OI Director, Office of Policy and Oversight, after the panel’s report Customs tried different types of Spanish language training for special agents. They found, however, that the agents were not reaching the proficiency level needed to interview informants and violators. According to the OI Director of Administration, OI is working with another office in Customs to get funding for Spanish language training for four SAC offices. This official also said that the San Diego office has a Spanish language program that it funded out of its own budget. Training recommendation 3: All new supervisors must successfully complete in-service supervisory training. Supervisors, at every level, must participate on a periodic basis in a continued supervisory training and performance assessment program. Written response: Customs policy requires that all employees selected for initial entry into a supervisory position attend a two-week basic supervisory seminar. The seminar includes modern management principles, integrity awareness, internal controls, performance management, discipline and whistleblower concerns, and workforce diversity issues. In addition, Customs offers supervisory refresher training designed for supervisors who have not attended Customs supervisory skills development training within the previous three years. Partially implemented. Updated response: The Director, Management Training Division, Customs Service Academy at the Federal Law Enforcement Training Center (FLETC), said that Customs has no policy that states supervisors must take refresher training. In April 1996, he said that Customs had provided supervisory refresher training at the Customs Academy at FLETC until about a year earlier but was no longer funding such training at FLETC. Some supervisory refresher training is being provided by individual Customs offices. OI is working on a supervisory symposium that has not yet been implemented. The Assistant Commissioner, Human Resources, has an initiative under way to revamp all supervisory training at FLETC, according to an official in Customs’ Office of Planning and Evaluation.
We categorized this recommendation as “partially implemented” because, although the provision for new supervisors to successfully complete in-service training was fully implemented, Customs did not fully implement the recommendation’s provision that supervisors, at every level, must participate on a periodic basis in a continued supervisory training and performance assessment program. On June 24, 1996, Customs provided an additional written response to its implementation of this recommendation. Customs stated that it is revamping the supervisory training at FLETC using a cross-functional team of high-level field managers that is reviewing a broad spectrum of supervisory and managerial training and development needs. Training recommendation 4: The Customs Service should establish a career track for supervisors, the first step of which requires participation in a to-be-developed “relief supervisor” program. Not implemented—action taken. Updated response: The Director, Office of Administration, OI, said that in 1994, OI had a career track for special agents who wanted to become SACs. The track, however, was not being adhered to, and OI drafted a Career Development Program in 1995. (See Office of Enforcement recommendation 3.) He said that with the change in OI’s structure, the drafted Career Development Program has to change. He also said that the President’s cap on the number of GS-13 through GS-15 employees and SESers affects the number of OI supervisory positions. We categorized the implementation of this recommendation as “not implemented—action taken” because no part of this recommendation has been fully implemented, but some action has been taken toward establishing a Career Development Program (rather than a career track). OI’s Director, Office of Policy and Oversight, said that OI does not have a relief supervisor program. She noted that legally, she was not sure that they could tap the same person every time to be a “relief supervisor.” They would have to rotate 120-day supervisory details among the experienced staff. Training recommendation 5: The supervisory training and performance assessment program should be constantly re-evaluated to ensure sufficient emphasis on personnel management problems, “whistleblower” policies, integrity awareness, quality management, institutional loyalty and leadership values. Fully implemented. Updated response: Customs stated in its 1992 report on the implementation of the panel’s recommendations that, in response to the recommendations of the blue ribbon panel, supervisory and management training had been examined and refined. According to the report, to set the tone for these training efforts, the Commissioner promulgated a management philosophy stressing the key themes of leadership, integrity, management accountability, and institutional loyalty. These principles form the foundation of all of the management training programs. The curriculum of the basic supervisory training course has been expanded to include emphasis in areas such as integrity awareness, internal controls, performance management, discipline and whistleblower concerns, and workforce diversity issues. The Commissioner’s April 1, 1992, testimony on the panel’s recommendations included Customs’ actions taken to implement the recommendations.
Among the actions stated in the document were that (1) Customs had recently completed a major review of all supervisory programs, (2) both supervisor and manager courses had been updated, and (3) whistleblower training had been incorporated into supervisor and manager courses. The Director of the Management Training Division at FLETC said that the Assistant Commissioner, Human Resources, established a team that is specifically addressing this recommendation in terms of evaluating the effectiveness of supervisory/managerial training. He said that the team will be addressing such issues as how to evaluate the supervisory/managerial training programs. The team plans to assess Customs’ core competencies and identify gaps in these competencies, using assessment tools designed to identify the developmental needs and job strengths of Customs’ managerial pool and to assess the gap in managers’ and supervisors’ competencies. Training recommendation 6: The Customs Service should establish a new agent “mentor” program. Such a program should require that a senior agent mentor be assigned for at least a one-year period to assist and advise new agents on all aspects of professional conduct. Written response: See OFFICE OF INVESTIGATION, recommendation (3) [the portion of the written response to OE recommendation 3 concerning the mentor program is copied below]. Fully implemented. A mentor program has been established whereby senior agents serve as mentors for all new agents and ensure appropriate training is received. Updated response: According to the Criminal Investigator On-the-Job Training Handbook, which embodies the new agent mentor program, the first phase of the training should begin immediately upon the special agent’s reporting to duty and should continue for 1 year. This handbook stated that one of the purposes of the program is to improve the professionalism and competence of Special Agent personnel assigned to SAC offices. One of the mentor’s responsibilities is to serve as a professional role model for the trainee. Training recommendation 7: The Customs Service should ensure that agents receive continuing formalized in-service training that addresses evolving issues in criminal law enforcement and reinforces adherence to professional standards. Written response: Customs agents currently receive training in law enforcement and professional standards. Fully implemented. Updated response: As discussed under the training recommendation 1 updated response, OI reworked its special agent refresher seminar and has been holding the seminar in the field. OI retooled its advanced courses in its four program areas and has been holding them in the field. They include evolving issues in law enforcement. Additionally, according to the OI Director, Office of Policy and Oversight, each SAC office’s training officer is expected to make arrangements for 8 hours of training on law enforcement issues to be given at the SAC office each quarter. Agents are encouraged but not required to attend the training. According to the OI Director of Administration, OI is requiring that the following agents in the field office in which the advanced training is given attend the training: those who are in the group dealing with that issue, those who are projected to move into that group, or those who have not had the training. The Assistant Commissioner, OI, said he set a goal for the Director of Enforcement Training to ensure that each operational nonsupervisory agent receive refresher or specialized training every 5 years.
Whistleblowers recommendation 1: Customs employees should be informed of the various avenues for reporting problems that are available to whistleblowers. Alternative channels for reporting grievances should be similarly explained and their use encouraged. Written response: In September 1991, a memorandum was issued to all Customs employees concerning the Whistleblower Protection Act (WPA). Employees were also provided a booklet, published by the Merit Systems Protection Board, which contained information on the WPA and identified the employee rights and avenues available to them. Currently, employees make complaints to IA, the Office of Special Counsel, or both. Fully implemented. Updated response: The Customs representatives of OPE, IA, Office of Chief Counsel, and HRM with whom we spoke believed this recommendation concerned avenues for reporting problems of retaliation against whistleblowers. Depending upon the type of retaliation alleged, Customs employees may direct their allegation through the agency’s grievance system (for union employees, this would be the grievance and arbitration procedures as provided in the union agreement) or to the Office of Special Counsel (OSC). Furthermore, according to an OPE official, reporting through one avenue does not preclude the employee from reporting through another. Information on the grievance system is in Customs’ Policies and Procedures Manual. Discussion of the availability of the grievance and arbitration procedures is in the bargaining unit contract section on “Protection Against Prohibited Personnel Practices.” According to an OPE official, a copy of the contract is to be given to each union employee. This section also states that the employee may raise the matter under a statutory procedure. According to an IA official, claims of retaliation that are sent to Internal Affairs are read by one individual to determine if the claim appears to be a whistleblower retaliation claim. If it appears to be so, this individual asks the relevant Internal Affairs agent to tell the complainant that he/she needs to file the complaint with the Office of Special Counsel for investigation. Customs did not have a written policy on this practice. Customs formerly investigated these complaints but no longer does the investigations. (See whistleblowers recommendation 4.) Customs issued a memo in June 1996 to managers in the Internal Affairs field offices telling them that when an employee makes a whistleblower retaliation claim, the employee should be advised to contact OSC. Whistleblowers recommendation 2: Comprehensive procedures should be issued to supervisors for dealing with employees who are designated whistleblowers. Written response: In September and December 1991, memorandums were issued to all Assistant and Regional Commissioners and to supervisors and managers, respectively, expressing the Commissioner’s commitment to ensuring that all employees who made whistleblower disclosures were protected from retaliation and reprisal. Fully implemented. Updated response: According to a Customs training manager, the procedures for dealing with employees who are designated whistleblowers also have been included in supervisory training since 1992. New supervisors are supposed to receive supervisory training within the first year of their appointment to the supervisory position, according to the former chief, Employee Relations and Benefits Policy Branch.
Whistleblowers recommendation 3: The Associate Commissioner for Organizational Effectiveness should be designated as the agency point to receive whistleblowing disclosures (although employees may elect other channels for this purpose). Written response: During OOE’s existence the Associate Commissioner was designated as the agency point to receive whistleblowing disclosures. Since OOE’s abolishment, Headquarters IA and the Office of Chief Counsel serve as the agency contacts for employees and the Office of Special Counsel, respectively. Substantially implemented. Updated response: Although there is no one designated agency point to receive whistleblowing disclosures, Customs officials stated there were a variety of avenues through which employees could report whistleblowing disclosures. In addition to Headquarters IA, the Office of Chief Counsel, and the Office of Special Counsel, as mentioned above, Customs representatives of OPE, IA, Office of Chief Counsel, and HRM told us other avenues are available. They said these avenues included the employee’s management chain, IA’s hotline, field IA offices, the Commissioner’s office, and the Department of the Treasury Office of Inspector General. During OOE’s existence, this recommendation was fully implemented. With the closure of OOE there is no one office designated as the agency’s whistleblowing contact point; however, there are a number of avenues through which an employee can make a whistleblowing disclosure. Hence, we believe that while the situation is not responsive to the letter of the recommendation, it is generally consistent with its purpose. An official from OPE stated that ways in which employees know about one or more of these avenues included:
-whistleblowing segments in training courses, such as supervisory and basic inspector training;
-a directive in Customs Policies and Procedures Manual on allegation reporting and processing;
-the union contract; and
-periodic reminders issued by the Department of the Treasury Office of Inspector General.
Whistleblowers recommendation 4: If the Associate Commissioner receives a complaint and determines that the complainant is a whistleblower, investigation of the whistleblower’s allegations must be given priority in accordance with strict timelines for expeditious completion of the investigation. Written response: Determinations on Whistleblower allegations are made by the Office of Chief Counsel and referred to IA for expeditious investigation. Substantially implemented. Updated response: The Customs representatives of OPE, IA, Office of Chief Counsel, and HRM with whom we spoke believed this recommendation concerned complaints of retaliation. After the panel’s report, Customs hired an investigator formerly with the Office of Special Counsel to do in-house investigations of whistleblower retaliation complaints. According to that individual, he did those investigations expeditiously because they were his only priority. As a result of Customs’ reorganization, in November 1994 Customs ceased to accept whistleblower retaliation complaints for investigation. He said IA does not have the assets to do the investigations. The practice in Customs then was to have the individual who formerly investigated these complaints review any retaliation complaints that came through IA to determine if they appeared to be whistleblower retaliation complaints.
If they did, he requested the IA agent who logged the complaint into IA’s tracking system to notify the complainant that he/she needed to file the complaint with the Office of Special Counsel. Customs did not have a written policy on this practice. We categorized Customs’ implementation of this recommendation as “substantially implemented” because, according to an IA official, during OOE’s existence, the whistleblower complaints were given priority and completed expeditiously. OOE no longer exists, however, and the whistleblower complaints are investigated by the Office of Special Counsel (OSC). Because OSC is an agency external to Customs, Customs cannot ensure that Customs employees’ whistleblower allegations are given priority. Customs issued a memo in June 1996 to managers in the Internal Affairs field offices stating that if an employee made a whistleblower retaliation claim, the employee should be advised to contact OSC. Whistleblowers recommendation 5: No administrative action may be taken against a designated whistleblower until the whistleblower’s allegations have been resolved unless there is no nexus between the whistleblower’s allegations and the alleged misconduct of the whistleblower. Determinations as to nexus should be made by the Associate Commissioner, with the advice of Counsel. Written response: Customs has provided a comprehensive training program to supervisors and managers on whistleblower rights and protections. Beginning at the top with Assistant and Regional Commissioners, and working through mid-management levels to first-line supervisors, a major training initiative was accomplished: virtually every supervisor in the Customs Service has been trained in whistleblower rights and protections. By March 1, 1992, over 2,500 supervisors were trained by attorneys in Customs’ Chief and Regional Counsel offices. Whistleblower training has also been incorporated into every supervisory and mid-manager training course. Substantially implemented. Updated response: A former OOE official said that OOE established a policy that no administrative action could be taken against a designated whistleblower until the whistleblower’s allegations were resolved, unless there was no connection between the whistleblower’s allegations and the alleged misconduct of the whistleblower. We categorized Customs’ implementation of this recommendation as “substantially implemented” because during OOE’s existence the policy on administrative actions against whistleblowers was consistent with the recommendation. OOE no longer exists, however, and the Office of Special Counsel determines whether to request that Customs not take an administrative action. We believe that this action is generally consistent with the purpose of the recommendation. In the absence of an Associate Commissioner for Organizational Effectiveness, if an employee were to take a whistleblowing retaliation allegation to the Office of Special Counsel, OSC would determine if there was a potential connection. If OSC thought there might be a nexus, it would ask Customs either formally or informally to stay the action until it determined if there was a connection. Whistleblowers recommendation 6: Strong sanctions should be applied to supervisors who engage in any retaliation against whistleblowers. Discipline recommendation 1: Sanctions should be imposed against supervisors and managers who fail to take appropriate disciplinary actions.
Written response: The agency Table of Offenses and Penalties was strengthened to establish strong sanctions against managers or supervisors who retaliate against whistleblowers. Fully implemented. Updated response: According to an official in the Employee Relations and Benefits Policy Branch, the sanctions for retaliation remain in the Table of Offenses and Penalties and they are being used. Written response: In December 1991 three changes were made to the agency Table of Offenses and Penalties incorporating sanctions against supervisors and managers who fail to report instances of serious misconduct or who fail to take appropriate disciplinary action. Fully implemented. Updated response: According to an official in the Employee Relations and Benefits Policy Branch, the sanctions for failing to report instances of misconduct or to take appropriate disciplinary actions remain in the Table of Offenses and Penalties and they are being used. Discipline recommendation 2: Managers must report all instances of misconduct in accordance with the recommendations contained in this report (see Internal Affairs section). Discipline recommendation 3: Sanctions should be imposed against supervisors who fail to report instances of misconduct. Written response: In February 1993 a memorandum for all supervisors and managers was issued emphasizing their obligations to report instances of misconduct. Fully implemented. Updated response: Customs’ February 1993 memorandum stated that managers and supervisors are required to refer all allegations of employee misconduct, except administrative misconduct, to the Office of Internal Affairs. Administrative misconduct may be referred to management for investigation. In addition, the Commissioner of Customs issued a memorandum on December 20, 1991, to all supervisors and managers regarding Customs’ Table of Offenses and Penalties. The memorandum stated in part, “I want to ensure that all supervisors and managers understand their responsibilities” in reporting employee misconduct. The memorandum attached a new category of misconduct for inclusion in the Table of Offenses and Penalties for managers and supervisors who fail to report instances of misconduct. Specifically, failure to report criminal and/or serious misconduct to Internal Affairs and/or any act or failure to act which undermines the discipline process: first offense: written reprimand to 14-day suspension; second offense: 14-day suspension to removal; and third offense: 30-day suspension to removal. Written response: See DISCIPLINE, recommendation (1) [Customs’ written response to Discipline recommendation 1 is copied below]. Fully implemented. In December 1991 three changes were made to the agency Table of Offenses and Penalties incorporating sanctions against supervisors and managers who fail to report instances of serious misconduct or who fail to take appropriate disciplinary action. Updated response: According to an official in the Employee Relations and Benefits Policy Branch, the sanctions for failing to report instances of misconduct remain in the Table of Offenses and Penalties and they are being used. Discipline recommendation 4: The brown book system as it currently exists should be eliminated. Investigations of misconduct should be assigned by the Assistant Commissioner (Internal Affairs) in accordance with the recommendations contained in this report. Under no circumstances should managers investigate reports of serious misconduct within their own chain of command. 
The results of all investigations should be forwarded to the Headquarters Office of Internal Affairs to establish accountability. Written response: See INTERNAL AFFAIRS, recommendation (5) [Customs’ written response to Internal Affairs recommendation 5 is copied below]. Fully implemented. New systems and procedures were implemented to ensure the effective management of all allegations of mismanagement and misconduct. In 1992, a new policy was issued which classified all types of allegations, ranging from criminal misconduct to mismanagement, and defined responsibility for investigations. These procedures have provided greater consistency in handling allegations and establishing investigative priorities. Allegations of misconduct and mismanagement are handled through a variety of approaches, including the use of IA investigators (who investigate all allegations of criminal conduct and serious misconduct), independent factfinders, Management Inspection staff and joint efforts with Assistant and Regional Commissioners. With the abolishment of OOE, allegations of mismanagement are now referred by IA to the appropriate management official. Customs continues to provide sufficient training and instruction in integrity and mismanagement issues for managers to conduct inquiries into problems within their operations. This is consistent with National Performance Review recommendations requiring managers to continuously evaluate, correct and improve their own operations. Updated response: Customs’ February 1993 memorandum stated that “managers and supervisors are required to refer all allegations of employee misconduct, except administrative misconduct, to the Office of Internal Affairs.” According to information provided by Customs at an April 1992 congressional hearing, (1) the “brown book” system has been eliminated; (2) managers will no longer investigate misconduct; (3) most routine administrative issues, such as leave problems, are still within the purview of management; and (4) IA will control all investigations. The Special Assistant Commissioner of IA said that IA’s statistical reports show that IA, not management, is conducting misconduct investigations. Customs’ Directive on reports of investigations issued by the Office of Internal Affairs on November 5, 1993, states that “upon completion of an investigation, the Assistant Commissioner, IA or the Regional Director, IA will forward the original report of investigation to the concerned headquarters or field officer.”
Darryl W. Dutton, Assistant Director
Mary Lane Renninger, Senior Evaluator
Wendy C. Graves, Evaluator
Walter L. Raheb, Senior Evaluator
David P. Alexander, Senior Social Science Analyst
Amy E. Lyon, Senior Evaluator
Pamela V. Williams, Communications Analyst
Michelle Wiggins, Administrative Assistant
Pursuant to a congressional request, GAO reviewed the U.S. Customs Service's implementation of blue ribbon panel recommendations concerning allegations of corruption and mismanagement by Customs employees. GAO found that Customs: (1) accepted the findings of the panel and took steps to rectify the problems identified by the panel; (2) restructured its Office of Internal Affairs to address issues of corruption, mismanagement, and serious noncriminal misconduct; (3) established a direct line of authority in its Office of Enforcement to eliminate confusion among the many competing lines of authority noted by the panel in its recommendations; (4) strengthened its policy on managerial accountability; (5) created internal inspection programs to monitor employees' ethical conduct, in compliance with the panel's recommendations; and (6) implemented its policy for protecting whistleblowers, as recommended.
During the course of our review, Defense changed its strategy for materiel management systems. We therefore refocused our review to include evaluating the risks associated with the new migration strategy and the extent to which it will facilitate improvement of DOD’s materiel management operations. We reviewed DOD’s directives, instructions, and guidance for developing and implementing systems under the CIM initiatives, as these projects relate to the materiel management business area. We interviewed officials responsible for the old and new materiel management strategies as well as those in charge of developmental testing and those who participated in the early deployment of some applications. We also analyzed system design documents, program assessments, acquisition methodologies, and strategies. Our audit was performed from January through May 1996 in accordance with generally accepted government auditing standards. We performed our work primarily at the offices of the Deputy Under Secretary of Defense for Logistics (DUSD(L)) in Washington, D.C.; the Joint Logistics Systems Center (JLSC) at Wright-Patterson Air Force Base, Ohio; and the MMSS prototype site at the Marine Corps Logistics Base, Albany, Georgia. Appendix I details our scope and methodology. The Deputy Under Secretary of Defense for Logistics provided written comments on a draft of this report. These comments are discussed at the end of this report and presented, along with our evaluation, in appendix II. Materiel management involves determining the type and amount of consumable and reparable items needed for daily military operations, such as ammunition, fuel, paint, and spare parts; purchasing these materials from private vendors or manufacturing agencies within DOD; and tracking materials from their purchase to end use. Materiel management business operations incorporate four major business activities—asset management, requirements determination, supply, and technical data support. Annually, the Department spends about $19 billion on materiel management operations, and it currently manages a reported 6.3 million inventory items valued at $73.6 billion. DOD’s worldwide logistics operation includes about 1,400 warehouses at 27 distribution depots and other locations that provide supplies such as electronics, construction, and industrial items to the military services. The military services use large amounts of these items to maintain and repair weapon systems and other equipment. For example, the three services operate a total of about 25 facilities that perform detailed, time-consuming maintenance of major weapon system components such as radars, navigation aids, and various types of communication equipment. The services also store supplies (known as retail inventory) in warehouses at or near these maintenance facilities. From these warehouses, supplies are distributed to the mechanics or end-users when needed. The mechanics also hold some of these same items in nearby storage bins. We have previously reported that DOD’s large inventory levels reflect the management practice of buying and storing supplies at both wholesale and retail locations to ensure that they are available to customers—sometimes years in advance of when actually needed. DOD often stores inventories in as many as four different layers between suppliers and end-users. Storing inventory at many different locations results in inventory that turns over slowly, which produces large amounts of old, obsolete, and excess items.
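The inventory penalty of holding stock at many separate locations can be made concrete with the standard risk-pooling relationship from inventory theory: when demand variability is protected at each stocking point independently, total safety stock grows roughly with the square root of the number of stocking points. The following minimal sketch illustrates that relationship; the service factor, demand variability, and lead time are assumed figures chosen for illustration, not data from our review.

```python
import math

# Illustrative risk-pooling comparison. All parameter values below are
# assumptions for demonstration purposes only, not figures from this report.
Z = 1.65              # service factor, roughly a 95-percent service level
SIGMA_TOTAL = 100.0   # assumed std. deviation of total daily demand (units)
LEAD_TIME_DAYS = 9.0  # assumed replenishment lead time in days

def total_safety_stock(num_locations: int) -> float:
    """Total safety stock when demand is split evenly across independent
    stocking locations, each protecting its own share of demand variability."""
    sigma_per_location = SIGMA_TOTAL / math.sqrt(num_locations)
    per_location_stock = Z * sigma_per_location * math.sqrt(LEAD_TIME_DAYS)
    return num_locations * per_location_stock

centralized = total_safety_stock(1)
four_layers = total_safety_stock(4)

print(f"one stocking point:   {centralized:7.1f} units")
print(f"four stocking points: {four_layers:7.1f} units")
# Four independent stocking points hold about twice (sqrt of 4) the safety
# stock of one, consistent with the observation that multilayered storage
# inflates the amount of inventory on hand.
```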
DOD’s multilayered supply system also increases the amount of supply on hand and drives up the cost of holding inventory. As discussed in previous reports in 1993 and 1994, this is a philosophy that private firms have moved away from in an attempt to lower the cost of doing business, provide better service, and remain competitive. For example, during the past decade, the private sector instituted a logistics management philosophy that provides a sharp contrast to DOD’s methods of managing and distributing inventories. Some private sector companies have avoided inventory management problems by using modern, “just-in-time” business practices that shift responsibilities for storing and managing inventory to suppliers. In fact, companies that are using the most aggressive practices no longer store inventory in intermediate locations at all; their suppliers deliver inventory to them only when needed. Some companies have achieved large savings for certain items by standardizing items being used, eliminating bulk storage locations and, most importantly, relying on prime vendors to deliver small quantities when and where needed. A major characteristic of these new logistics practices is the way the companies buy supplies. Companies have reduced the number of suppliers they use by establishing long-term agreements with only a few key suppliers. Typically, suppliers are contracted to provide a company’s supplies for a particular commodity. Thus, most of the management responsibilities are shifted from the company to the suppliers. The suppliers take on these responsibilities because they are promised a long-term relationship with the company. Other steps companies have taken to change their inventory management practices include using direct delivery programs, primarily through the use of a prime vendor. By using direct delivery programs, companies bypass the need for intermediate storage and handling locations. Once the end-users order supplies, the suppliers deliver the items directly to the user’s facility close to the time when the items are needed. Also, to facilitate communication with suppliers, electronic ordering systems and bar coding are often used to eliminate paperwork and speed up the ordering process. DOD is beginning to move away from its current multilayered inventory management philosophy by developing initiatives, similar to those adopted in the private sector, which involve some combination of long-term contracting agreements, direct delivery of items from suppliers to the services, and electronic data interchange for streamlining the ordering process. The use of these initiatives is allowing DOD to (1) decrease procurement lead times, (2) increase accuracy in forecasting future item demands, (3) reduce paperwork, and (4) reduce inventory levels. While DOD has used these commercial practices, the initiatives generally have been limited in scope and represent only a small portion of its overall operations. Defense wants to develop and deploy materiel management systems to improve DOD materiel management business operations and processes nationally and to reduce the costs associated with inventory, personnel, and inefficient systems. Historically, while the services and DLA have had similar logistics activities, they employed widely different processes and supporting information systems. Currently, Defense relies on a reported total of more than 500 legacy systems to carry out wholesale logistics operations.
As these systems become fragmented, outdated, and inefficient, they require billions of dollars in maintenance costs. According to Defense, because today’s materiel managers do not have access to timely, accurate, and reliable logistics information, they increasingly make unnecessary requisitions, which, in turn, result in excess inventory and waste. In addition, according to Defense, fragmented systems and the lack of current technology have severely affected the ability to achieve greater asset visibility; quickly adjust requirements for materiel supply items to support military operations; standardize planning, requisitioning, and inventory control; and provide greater support with fewer resources. By embarking in 1992 on a strategy to develop the materiel management standard system (MMSS), Defense sought to replace hundreds of service-unique legacy systems being used to acquire, manage, move, and maintain inventory items with nine standard systems. These are the (1) Central Secondary Item Stratification System, (2) Configuration Management Information System, (3) Deficiency Reporting System, (4) Initial Requirements Determination/Readiness Based Sparing System, (5) Maintenance Planning and Execution System, (6) Production Definition Support System, (7) Provisioning Cataloging Technical Support System, (8) Requirements Computation System, and (9) Stock Control System. The specific functions of each of these nine systems are described in appendix III. Generally, these systems are intended to improve business operations in the following ways:
-Asset management—provide greater asset visibility from the time of purchase to use and the capability to track and monitor product quality using automated deficiency reports during the wholesale process.
-Requirements determination—better define initial and repair requirements for supply items based on readiness scenarios and automate the computation of repair schedules and budgets.
-Supply and technical data—automate paper copy guidebooks, procedures, and regulations needed to catalog new inventory items and provide managers with greater configuration control of inventory items.
Defense referred to this program as the “Big Bang” strategy because it involved installing the entire suite of applications at each of the 17 inventory control points, rather than deploying each application as it was developed. The new systems were to be integrated so that the services and DLA could communicate and exchange data with each other and across business activities outside materiel management, such as finance, procurement, personnel, and logistics. JLSC planned to field MMSS across all 17 inventory control points and to have the system fully implemented in 7 to 8 years. JLSC expected this standard system to save billions of dollars in logistics costs by consolidating and streamlining management operations, improving the responsiveness, accuracy, and timeliness of data, and eliminating the cost of maintaining some information systems that support the same business processes. From 1992 to late 1995, Defense spent about $714 million developing standard systems with minimal results. During that time, there were dramatic changes in the goals and expectations for the program and only one application was partially deployed. Because of changes in objectives and scheduling and problems in development, prospects for achieving the original objective of implementing a standard suite of integrated materiel management systems appeared dim.
At the same time, the services and DLA were asking for quicker system deployments. As table 1 shows, Defense began the migration strategy in 1992 with the intent of implementing an integrated MMSS system in 7 to 8 years. But only a year later, it decided to implement the system in 3 years. About 2 years after that, the program was completely rebaselined because of funding cuts, cost overruns, schedule slippages, and poor contractor performance. During the same time period, the objectives of the program changed: at the start of the program, business process improvements were to be identified while systems were under development; in 1993, improvements were to be identified after implementation. Taken together, these changes raised serious concerns about MMSS among the services and DLA. For the one application that was deployed (the Stock Control System, or SCS), the development and scheduling problems were particularly evident. As appendix V further details, JLSC shifted system testing onto the users in order to meet milestone dates. One official told JLSC that its scheduling of testing actually extended time frames and resulted in a loss of confidence from users. This, coupled with other SCS problems related to resolving deficiencies discovered during user testing, poor training, and inadequate system documentation, prevented the application from providing the benefits originally anticipated. Because of the development and scheduling problems, the services and DLA reported serious reservations about implementing several of the new systems because they believed that some of their existing legacy systems were better than the planned standard systems. They concluded that in some cases, the new systems, such as the Stock Control System and the Initial Requirements Determination/Readiness Based Sparing system, either would not meet their operational requirements or lacked the necessary functionality to allow them to shut down existing legacy systems as planned. Nevertheless, the services and DLA claimed that some of their legacy systems were quickly deteriorating and that they could not fund necessary upgrades. Therefore, they demanded deployment of the new systems as quickly as possible based on their individual service needs. In April 1995, a Defense evaluation team, composed of representatives from the Office of the Secretary of Defense, JLSC, DLA, the military services, and independent contractors, reviewed selected DOD inventory control point processes and concluded that the migratory systems approach to standardizing and upgrading materiel management automated data processing systems in DOD was not workable. The team recommended that JLSC discontinue its current efforts to develop MMSS and advised the Secretary of Defense to redirect JLSC, the services, and DLA toward a long-term effort to develop a unified automated data processing supply system using an independent contractor to design, develop, and prototype the system. Nevertheless, in December 1995, because of pressure from the services and DLA and the problems they were experiencing with the MMSS migration strategy, DOD dramatically changed the MMSS scope and implementation approach. Program officials believed that if systems were not deployed quickly, the entire materiel management system program would be vulnerable to additional funding cuts, thus jeopardizing the entire program and risking total failure.
The commander of JLSC stated that if customers did not see immediate results, his organization would be in danger of “going out of business.” In order to accelerate deployments under the revised approach, JLSC no longer plans to deploy a standard materiel management system. Instead, it will deploy applications incrementally as they are developed. In December 1995, Defense embarked on an accelerated deployment strategy of the nine applications that make up MMSS in order to meet services and DLA priorities and to realize operational benefits sooner than originally planned. Under this strategy, JLSC will no longer deploy an integrated suite of standard MMSS systems. Rather, it now plans to individually deploy each of the nine system applications as they are developed at selected sites from fiscal year 1996 through fiscal year 1999. The services and DLA will choose which applications they want and when and where they will be deployed; as a result, some inventory control points may never receive new systems. Deployment will be constrained by available funding. JLSC and DUSD(L) refer to the new strategy as “deploy or die” because, as noted above, program officials believed that unless these systems were deployed quickly, the program would be vulnerable to additional funding cuts and total failure. The current deployment schedule of these systems is provided in appendix IV. Table 2 reflects the differences in the two strategies. In turning to the new materiel management systems strategy, Defense is now intent on delivering new applications to customers as soon as possible. But this haste puts the new materiel management systems development at higher risk than the previous effort. As the following sections discuss, Defense will begin deploying new systems before it clearly defines its approach, ensures adequate oversight, and plans for economic and technical risks. Defense will also begin deployments without considering the effects of major upcoming changes to materiel management operations. In addition, Defense will be deploying all applications before necessary testing is complete. We believe that these steps are all critical to ensuring that Defense gets the most from each dollar it invests in materiel management systems. If Defense neglects to address them, it will likely incur substantial additional costs associated with maintaining legacy systems, interfacing them with the new systems, funding the rework to correct problems surfacing after deployment, and adapting its approach to expected dramatic changes in operations and systems. Under the new strategy, the services will be keeping their legacy systems longer than anticipated, and many will not be shut down. This is a major departure from Defense’s previous goal of eliminating hundreds of redundant legacy systems and varied business processes in order to move to standard integrated systems and processes. Yet before making such a significant change in direction, Defense did not conduct assessments that would ensure that the strategy would be cost-effective and beneficial. It also did not incorporate into the strategy plans to consolidate and privatize operations and other alternatives being considered to enhance existing systems. In addition, the change was not justified within the Department’s own oversight process, nor were documents critical to defining the program’s objectives, costs, goals, and risk mitigation strategies prepared.
JLSC is proceeding with the new strategy without first conducting critical economic and risk assessments that would estimate project costs, benefits, and risks and evaluate system choices based on these analyses. Without these assessments, DOD has no assurance that the best or most cost-effective systems are selected for migration, nor can it plan actions designed to avoid or lessen the potential for project delay, overspending, or failure. These evaluations are particularly important at this time because, according to program officials, the estimated MMSS lifecycle costs and expected benefits in the July 1995 economic analysis do not reflect the most recent strategy change. These evaluations would also help DOD in planning to mitigate some of the additional costs associated with maintaining legacy systems that will be incurred as a result of the new strategy. For example, these analyses could have informed the choice of which systems to deploy first under the new strategy. As table 3 shows, three of the four systems (the Configuration Management Information System (CMIS), the Product Definition Support System (PDSS), and the Deficiency Reporting System (DRS)) scheduled for deployment in fiscal year 1996 have very low projected benefits. The benefits listed in table 3 were taken from the July 1995 economic analysis. According to program officials, the Initial Requirements Determination/Readiness Based Sparing (IRD/RBS) system, CMIS, PDSS, and DRS were chosen for deployment first in the new strategy because they are further along in development. However, we believe that had Defense analyzed the costs, benefits, and risks associated with all selections, it would have had to seriously consider whether the benefits associated with the Requirements Computation System (RCS), SCS, and the Maintenance Planning and Execution (MP&E) system made it imperative to concentrate on their development first. Additionally, because JLSC has not analyzed and anticipated the costs and risks associated with the new strategy, its officials told us that they do not know how much it will cost to maintain the legacy systems that will remain under the new strategy or what it will cost to interface the new applications with the legacy systems. Because these systems will not be deployed as an integrated suite at all inventory control points, the services and DLA will have to operate many of their legacy systems for a substantially longer period of time. In turn, the large number (and complexity) of interface designs is likely to increase development and deployment costs significantly and delay implementation schedules. In November 1995, JLSC reported that the identification and development of more than 3,000 interfaces to existing legacy systems in support of multiple deployments is the prime technical risk facing the program. Finally, economic and risk analyses would reveal potential conflicts between available funding and the planned scheduling of deployments. According to the MMSS program manager, the number of actual deployments for both fiscal years 1996 and 1997 will be contingent on available funding. In April 1996, the manager reported that the revised fiscal year 1997 schedule is too ambitious given the funding projected to be available. In addition to not assessing economic and technical risks, Defense has not assessed the impact that a number of potential changes under consideration for materiel management operations and systems could have on the program. These changes, and their implications, include the following. Recent DOD initiatives focus on privatizing materiel management operations or consolidating inventory control points.
For example, the Commission on Roles and Missions of the Armed Forces recommended, in May 1995, that Defense outsource materiel management activities relating to cataloging, inventory management, and warehousing. If outsourcing occurs, Defense may end up spending millions of dollars on systems for functions that are later outsourced or on inventory control points that are later consolidated. As discussed in the background section of this report, DOD is beginning to move away from its multilayered inventory management philosophy by embarking on initiatives similar to those adopted in the private sector, which involve some combination of long-term contract agreements, direct delivery of items from suppliers to the services, and electronic data interchange for streamlining the ordering process. These initiatives have not been a part of the system migration strategy; however, as they are expanded, they will significantly impact the processes the systems support. According to program officials, Defense is considering implementing a “data-focused approach” to materiel management systems starting in fiscal year 1998, which would enhance interoperability and logistics modernization efforts through the use of “middleware” software. Middleware permits an application to see the data stored in other applications as if they were in a single, logical data repository. In doing so, it precludes the need to radically redesign the legacy systems and implement data standardization. If pursued, the middleware alternative could extend deployment schedules and drive up maintenance costs for existing systems. It also will not result in the consolidation or elimination of legacy systems. Through its own oversight process for major information system projects, DOD has established a basis by which decisionmakers—who make up the Major Automated Information System Review Council (MAISRC)—can ensure that sound business practices are followed for major information technology system investments. Under MAISRC guidelines, a project should be reviewed and approved at each of five decision milestones before substantial funds are obligated. An important aspect of the review process is that it lays the groundwork for ensuring that major initiatives are clearly defined, user requirements will be met, and sound acquisition and testing strategies are in place. Documents that the Council reviews—such as the mission needs statement and the acquisition and test plans—justify the program’s existence and define economic and technical risks. In January 1996, the Deputy Assistant Secretary of Defense for Command, Control, Communications, and Intelligence placed the MMSS project under the MAISRC oversight review process for the first time. However, the decision to make such a drastic change to the materiel management strategy was never presented to MAISRC in the first place. As a result, key Defense decisionmakers did not have a chance to evaluate the program in order to decide whether to continue the current program, make minor changes, redirect it, or terminate it before the new strategy began. When JLSC entered the review process in April 1996 (before a working-level team rather than the high-level Council), it requested approval to continue incrementally fielding individual applications, but did not have strategic plans and other required documentation that the Council could use to reach a decision. Based on the working-level team’s review, the Council withheld authorization to proceed until certain documents were submitted and approved.
In its May 13, 1996, decision memorandum, the Council directed the MMSS program manager to prepare the basic documents required for MAISRC review over the next 180 days. These include a mission need statement, which sets the goals of the program and defines projected capabilities and needs in broad operational terms; an acquisition strategy to guide the entire acquisition process throughout the system development life cycle and serve as the framework for planning, directing, and managing the program; an operational requirements document to document user objectives and minimum acceptable requirements for systems and to become the basis for operational performance criteria and testing; and a plan for preparing an economic analysis. Primarily to enable DOD to meet contractual obligations, the Council plans to hold another working-level session to evaluate and approve the acquisition and test and evaluation strategies before the end of fiscal year 1996. According to the Council, if these strategies are approved, JLSC will be authorized to proceed with deployment. However, when this decision is made, the other critical documents, such as the plan for preparing an economic analysis and the operational requirements document, will not be available for the Council’s review. Therefore, JLSC may be authorized to proceed with deployment before key decisionmakers at Defense have reviewed a cost-benefit analysis, funding profile, and other important information that would shed light on the risks and costs associated with the new strategy. We believe that the risks associated with the new strategy and the problems experienced with the old strategy warrant a full, high-level Council review of all MAISRC-related documents before deployments proceed. The new deployment schedule for materiel management systems does not accommodate the time required for testing the new systems. In fact, all systems scheduled to be deployed in fiscal years 1996 and 1997 will only have met a minimal testing level, that is, developmental testing by the contractor. As a result, the risk is greatly increased that Defense will experience problems associated with shifting testing to system users and curtailing the levels of testing normally done. This has already been the case with the one application that was deployed at the MMSS prototype site—the Stock Control System. The four applications scheduled for deployment in fiscal year 1996 are still in the latter stages of development and have yet to complete required developmental and integration testing. In an attempt to meet the revised deployment schedule, JLSC is shifting developmental testing responsibilities from the development contractors to system users where the application is to be initially deployed. In some instances, JLSC will also forgo system qualification and integration testing altogether—tests that are critical to determining whether the applications will work as planned. According to program officials, the intent is to demonstrate interoperability of the database concept in a customer environment and to obtain customer feedback more quickly. However, not successfully completing these tests prior to deployment increases the risk that software problems will go undetected until the later phases of the system lifecycle. According to the Test Director, it is much more expensive and time-consuming to correct errors once the applications are operational.
Program officials also acknowledge that problems detected at the user sites will be more expensive to fix and could offset or exceed up-front investment savings. In the absence of an approved test and evaluation master plan, JLSC is negotiating memorandums of agreement (MOAs) with the respective users to define test conditions, assumptions, and responsibilities. Our review of the one approved MOA, covering the Navy’s deployment of the CMIS application in fiscal year 1996, raises concerns about the test program. For example, the MOA shows that the application will be an “as is” version, which has not been accepted by JLSC and will not be interfaced with any legacy systems during the test period. Although the Navy is required to prepare a lessons-learned report after it completes testing, no formal test plans or test reports are required under the testing process. Without these key documents, JLSC has little assurance that all necessary tests will be completed and that problems encountered with the system are thoroughly documented. The early deployment of the Stock Control System application under the previous migration strategy illustrates the problems associated with shifting testing to users. As discussed in appendix V, the first MMSS prototype site, the Marine Corps Logistics Base, Albany, Georgia, was activated on May 1, 1995. At that time, JLSC deployed an early version of the SCS application, which provided only about 50 percent of the asset management software’s functionality. The Marine Corps expected that this release would resolve its core asset management system deficiencies and demonstrate operational functionality and practical business process improvements. As of May 1996, according to system users, the system had failed to provide substantial improvement over the legacy applications being used at Albany. Because the project contractor delivered the application basically untested, with very limited functionality and inadequate user documentation, the Marine Corps has had to perform extensive and costly rework, debugging, and on-site testing. The Marine Corps has initiated 65 changes to correct major functional deficiencies; however, only 42 have been funded to date. Until the remaining deficiencies are corrected, SCS will not be able to meet all of the Marine Corps’ requirements. Because of continuous problems in defining requirements and schedule slippages, JLSC stopped all development work on SCS in December 1995. At the time, SCS development was about 55 percent complete. In May 1996, the Logistics Management Institute (a contractor hired by JLSC to provide technical support) recommended that JLSC terminate SCS development and maintain legacy asset management systems rather than invest an additional 2 years and as much as $100 million to correct the problems. JLSC still plans to deploy SCS; however, it will limit additional functional enhancements and will deploy the system only to the Marine Corps and the Air Force. To provide service on demand, Defense made a major change in its materiel management migration system policy. In doing so, it is clearly on a course to accelerate system deployments before critical steps are taken that would help ensure that good business decisions are made and that risks are minimized.
As a result, Defense will likely deploy systems that are not significantly better than the hundreds of legacy systems already in place, and it could waste millions of dollars resolving problems that result from the failure to develop and implement a clear and cohesive strategy. Before proceeding with any new strategy, it is imperative that Defense take the necessary steps to fully define its approach, plan for risks, ensure adequate oversight, and complete testing of the new systems. We recommend that the Secretary of Defense stop the materiel management system development and deployment until (1) DUSD(L) completes an economic analysis, a comprehensive implementation plan (including actions to be taken, schedules, milestones, and performance measures), and a technical risk plan and (2) the full MAISRC reviews and approves these plans. The Department of Defense provided written comments on a draft of this report. The Deputy Under Secretary of Defense for Logistics generally agreed with our findings but disagreed with our recommendations. Defense’s specific comments are summarized below and presented, along with our rebuttals, in appendix II. In commenting on a draft of this report, Defense effectively acknowledged that the first materiel management strategy failed, and it agreed on the need to mitigate the risks confronting the new strategy. However, it did not agree with our recommendations to stop materiel management system development and deployment until it takes the necessary steps to define its approach, plan for risks, and ensure adequate oversight. Instead, Defense believes it is addressing the concerns expressed in our report under the logistics business systems strategy it is currently developing. The latest strategy focuses on creating a common operating environment for logistics. According to Defense, under the common operating environment, guidelines and standards specifying how to reuse existing software and build new software will facilitate system interoperability and allow for continually evolving computer capabilities. We agree with the Department’s contention that its materiel management strategy has failed and commend it for pursuing alternative strategies, such as privatization and developing a common operating environment based on commercial off-the-shelf (COTS) systems. Such alternatives may well solve some of the past problems associated with materiel management systems. However, we disagree that the common operating environment strategy being developed will address our recommendations. Without first conducting required economic analyses, Defense has no assurance that the systems it is currently deploying, which were selected under the failed strategy, will fully support the new strategy or by themselves still be good investments. In addition, by not conducting these analyses, decisionmakers will lack the information necessary to make sound, informed decisions for selecting the best among competing alternatives and understanding how upcoming major changes to materiel operations will impact their strategy and alternatives. These shortcomings led to the failure of the first strategy, and we believe that, unless they are addressed, Defense risks failing a second time. Further, conducting economic and risk analyses and providing adequate oversight of system development are required not only by Defense’s own regulations but also by the recently enacted Information Technology Management Reform Act (ITMRA), which took effect August 8, 1996.
The intent of this legislation is to prevent failures similar to that of the materiel management standard system strategy. Under ITMRA, DOD is required to design and implement a process for selecting information technology investments using criteria such as risk-adjusted return-on-investment and specific criteria for comparing and prioritizing alternative information system projects. If Defense implements such a process properly as part of the new strategy, senior management will have a means to obtain timely information regarding progress in terms of costs, the capability of systems to meet performance requirements, timeliness, and quality. Without implementing an effective investment process for the new strategy, Defense will continue to risk encountering unmanaged development risks, low-value or redundant information technology projects, and too much emphasis on maintaining old systems at the expense of using technology to redesign outmoded work processes. We are sending copies of this report to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director of the Defense Logistics Agency; the Director of the Office of Management and Budget; and other interested parties. Copies will be made available to others on request. If you have any questions about this report, please call me at (202) 512-6240 or Carl M. Urie, Assistant Director, at (202) 512-6231. Major contributors to this report are listed in appendix VI. As mandated by the National Defense Authorization Act for Fiscal Year 1996 (Public Law 104-106), we reviewed the Department of Defense’s Materiel Management Standard System (MMSS). The original objectives of our review were to determine (1) the mission and the economic and technical basis for selecting MMSS as the migrating system and (2) the extent to which this strategy has or will improve DOD’s materiel management operations. To accomplish our objectives, we (1) interviewed program officials and contractors responsible for developing, implementing, and managing MMSS projects, (2) reviewed pertinent program and contractor documentation such as cost performance reports, metrics, quarterly reports on major automated information systems, economic analyses, implementation and migration plans, and the test and evaluation plan, (3) examined system design documents, program assessments, and acquisition methodologies and strategies to support the MMSS, and (4) interviewed senior Defense officials responsible for approving and directing the MMSS development and acquisition regarding their efforts to minimize MMSS development risks and improve materiel management operations. However, shortly after we began the review, Defense stopped its strategy to develop a standard materiel management system and started making plans to deploy individual applications separately. Consequently, we refocused our review to include evaluating the risks associated with this new migration strategy and the extent to which it will facilitate improvements in materiel management operations. We interviewed DOD and program officials to determine the rationale behind the strategy change and the potential economic and technical risks threatening the successful implementation of the new strategy. We obtained and examined budgetary and cost data and reviewed project status reports and pertinent program documents such as the revised deployment schedule, test procedures, and program decision papers.
For applications scheduled to be deployed in fiscal year 1996, we compared the test procedures with deployment schedules to evaluate the potential program risks of deploying software applications before successfully completing required testing. We were hampered in our attempt to assess the potential improvements to DOD’s materiel management operations because critical strategic documents such as the revised economic analysis, acquisition strategy, and mission need statement had not been completed. To determine if field locations had experienced problems resulting from insufficient testing, we also interviewed officials at the Marine Corps Logistics Base, Albany, Georgia, who had participated in the early deployment of some MMSS applications. Our work was performed from January 1996 through May 1996 in accordance with generally accepted government auditing standards. We performed our work primarily at the offices of the Deputy Under Secretary of Defense for Logistics, Washington, D.C.; the Joint Logistics Systems Center, Wright-Patterson Air Force Base, Ohio; and the Marine Corps Logistics Base, Albany, Georgia. 1. The wording in the paragraph cited has been modified but addresses the same issue. 2. Defense states that it is documenting a revised logistics business systems strategy at the same time that the first aspects of the strategy are being executed. The revised strategy is to be based on the new common operating environment (COE) approach for building interoperable systems, a collection of reusable software components, a software infrastructure for supporting mission area applications, and a set of guidelines and standards. The standards will specify how to reuse existing software and how to build new software to facilitate system interoperability. We are currently reviewing the COE. While it may address technical infrastructure problems associated with logistics business systems, we believe that Defense must still incorporate into the new strategy the essential ingredients that ensure sound decisionmaking as we recommended in our report: conducting required economic and technical risk analyses and providing adequate oversight for the systems currently being deployed. In doing so, Defense can ensure that the systems being deployed now will be compatible with systems or work processes developed under the common operating environment approach. Further, it can ensure that sound business decisions are being made as it finalizes the new approach. Additionally, although the military services and Defense agencies have requested materiel management deployments, the systems have not been fully tested and may not be an improvement over existing legacy systems. In addition, in the near term, these systems will require the services and agencies to continue to maintain their legacy systems. We believe that before these systems are deployed to the services and the agencies, Defense needs to ensure that testing is sufficient. This should help reduce the risks of software problems surfacing in later phases of the system lifecycle. 3. As stated in our report, the economic analysis completed in 1995 did not reflect the new approach to materiel management systems. An analysis for the new strategy may well have identified the additional risks that have not been addressed, such as those associated with maintaining the legacy systems that will now remain under the new strategy.
In addition, developing an economic analysis after deployments have begun will not give Defense decisionmakers an opportunity to ensure that good business decisions are being made before funds are committed. Both ITMRA and the Office of Management and Budget’s November 1995 guide for evaluating information technology investments call for such analyses before information technology investments are made because they allow senior managers to examine trade-offs among competing proposals and to ensure that each project is cost-effective and beneficial. 4. We support Defense’s efforts to communicate its intentions for materiel management systems with management across the services and within logistics operations. Obtaining the support of decisionmakers is critical to the success of the new strategy. However, because Defense has not complied with its own regulations for ensuring sound decisionmaking, these managers still do not have the information necessary for making informed decisions. Further, the Logistics Information Board, which Defense has established to review execution of the new strategy, is no substitute for a full MAISRC review. For example, while the Logistics Information Board consists of members who participate in logistics business operations, MAISRC comprises high-ranking officials outside the logistics business who have a more independent perspective in reviewing the strategy. Further, while the Logistics Information Board can play an effective role in the development and implementation of the strategy, MAISRC plays a vital role in developing and rigorously verifying the cost-benefit and alternative analyses that are fundamental to making investment decisions. 5. Defense contends that the systems it is developing will be flexible enough to support private contractors assuming responsibility for materiel management operations if privatization is pursued. This contention, however, presumes that contractors will want to use these systems rather than acquire their own systems and that these systems will support new work processes adopted by contractors. Further, because most of the migratory systems being deployed are based on out-of-date system architectures, there is no assurance they will facilitate interoperability between the legacy and COTS environments, as Defense contends. 6. The testing strategy DOD describes is a common practice in the commercial world. However, it should be noted that this practice is intended to mitigate risk and reduce costs prior to full production of a system. In DOD’s case, the strategy is being employed during full production of the system. Additionally, under DOD’s approach, the services will be required in many cases to continue spending operational funds on their legacy systems to make up for the lack of full functionality in the fielded new systems. Further, Defense historically has encountered significant cost increases to software-intensive systems as a result of fielding them before they are adequately tested. As our report discusses, these problems were especially evident with the Stock Control System. In its comments, Defense did not dispute that the Marine Corps has had to make extensive and costly changes to the system chiefly because the application was delivered basically untested and with very limited functionality. Finally, we do not believe it is appropriate to deploy these systems until the testing strategy is approved by the full MAISRC. 7.
We disagree with Defense’s contention that the costs to maintain legacy systems will remain the same. Since a bare-bones approach to legacy maintenance has been sustained for the past several years, we believe that these systems will require more maintenance as they get older. Additionally, we believe the maintenance costs for legacy systems will increase since fewer systems will be terminated under the new strategy than anticipated under the original strategy. As discussed in our report, these remaining systems will also require costly interfaces with new systems. Further, we are not recommending that Defense delay moving forward with a new logistics business systems strategy. Rather, we are recommending that Defense delay continuing to implement pieces of an admittedly failed migration system strategy until it can be assured that the systems it wants to deploy are good investments. 8. We agree that the old strategy is not viable. However, we disagree that there are no alternatives other than the current strategy. Our report, in fact, discusses alternatives to the new strategy currently being considered by Defense, such as privatizing materiel management functions. Also, by conducting the required economic analyses, Defense would be able to fully identify the available alternatives and consider their associated costs, benefits, and risks. 9. Until Defense completes the documentation associated with its own oversight process—which includes a complete definition of the new strategy; an analysis of costs, benefits, and alternatives; and a test plan—its decisionmakers will not have assurance that they are choosing the best system solutions. Further, the contention that a full MAISRC review of major information system investments merely adds “another level of review” goes against Defense’s original intention in implementing this process: ensuring that the essential ingredients of sound business decisions are incorporated into all major technology investment decisions and that senior managers make the final decisions and are held accountable for them. Central Secondary Item Stratification (CSIS): Stratifies the requirements computed in the other systems across financial programs and is the basis for budgeting and funding allocations. Configuration Management Information System (CMIS): Provides configuration identification, configuration status accounting, electronic change control, and configuration audits. Deficiency Reporting System (DRS): Collects, processes, and stores quality deficiency and discrepancy data on weapon systems and equipment. Initial Requirements Determination/Readiness Based Sparing (IRD/RBS): Computes initial spare requirements for new systems and computes requirements based on readiness scenarios. Maintenance Planning & Execution (MP&E): Manages repair requirements and monitors the performance of maintenance facilities. Product Definition Support System (PDSS): Creates and moves a complete requirements package from the requirements determination system to the contracting system. Provisioning Cataloging Technical Support System (PCTSS): Supports the selection of items for new end items/weapon systems and obtains and maintains national stock numbers and associated data. Requirements Computation System (RCS): Provides demand-based requirements computations for recoverable and consumable items. Stock Control System (SCS): Provides asset visibility through requisition processing, receipt processing, and inventory processing.
Tables IV.1 and IV.2 below show the deployment schedule for fiscal years 1996 and 1997. Four system applications—the Configuration Management Information System, Deficiency Reporting System, Initial Requirements Determination/Readiness Based Sparing, and the Product Definition Support System—will be deployed in fiscal year 1996 at an estimated cost of $3.1 million at selected sites across three of the five services, based primarily on need. These systems will be delivered to the user organizations by JLSC “as is,” that is, with limited functionality and system testing. According to the MMSS program manager, the number of actual deployments for both fiscal years 1996 and 1997 will be contingent on available funding. In April 1996, the materiel management program manager reported that the revised fiscal year 1997 schedule is likely too ambitious given the funding projected to be available. Defense hired a contractor to conduct site surveys at each deployment site to determine the physical plant and architectural requirements, that is, the communication, electrical, and computer hardware and software configurations needed to support the applications. As of May 31, 1996, the contractor had completed 11 of the 21 required site surveys. According to JLSC officials, to meet their deployment schedule, some applications will be deployed in fiscal year 1996 even though the site surveys may not be complete. By fielding the Stock Control System (SCS), JLSC expected to achieve immediate benefits and to demonstrate major progress in support of DOD objectives. The benefits to the Marine Corps—the first service to receive the system—included replacing an outdated system and moving from a batch system to on-line processing. The monetary benefits were expected to exceed $56.7 million with an implementation cost of $27.2 million. MCLB-Albany was selected as the SCS production site in August 1993, and the system was declared operational in April 1995. However, SCS was not fielded with full functionality. In addition to the challenges of learning a new system, users continue to experience problems with the system’s operability. They cited the following reasons for the difficulties encountered with the SCS: The majority of users believed that the initial testing of SCS was inadequate. Since the system’s deployment, users have experienced problems that should have been detected during the system’s testing. Most users reported not receiving timely training prior to the system’s deployment. Given that training was provided up to 8 months before system implementation, the majority agreed that the training was too early for them to retain the knowledge they needed to operate the system. An MCLB-Albany official stated that training was conducted so far in advance because the deployment date was officially scheduled to be 6 months earlier. Three of the users stated that training was ineffective because instructors were unfamiliar with Marine Corps processes or too general in their presentation. Recognizing the need for additional training, MCLB-Albany conducted a refresher class just prior to the implementation of SCS, which some of the users thought was beneficial. Most of the users believe that the SCS system documentation is insufficient, and several thought the manual was useless and ever-changing. One inventory manager never even received a user’s manual. All of the users seek answers to their problems by consulting with designated SCS system analysts or other users.
All of the users stated that the SCS no longer allows them to perform certain job-related tasks. In specific instances, some of the users reported going to a separate, manual source to complete these tasks. An MCLB-Albany official anticipates resolving this problem with later versions of the system. Most of the users have experienced problems in accessing SCS, which has been unavailable for periods of time ranging from a few minutes to several days. Given that users spend as much as 90 percent of their day in SCS, this problem could inhibit their ability to do their jobs. MCLB-Albany officials stated that the inaccessibility of SCS is often due to problems with their local area network rather than problems with the SCS. An official with MCLB-Albany’s Defense Accounting Office reported that financial data cannot pass directly from the SCS to the accounting system. All nine inventory managers interviewed liked that SCS gave them immediate, on-line processing of information. The legacy system employed batch processing, making users wait until the day after they input information to receive the results. Needing a success story, JLSC set unrealistic milestone dates and pushed the system through testing and onto the users. One official told JLSC that its scheduling of testing actually extended time frames and resulted in a loss of confidence from the users. This official and a lessons-learned report emphasized that the manner in which the system tests were conducted exacerbated the problems with SCS. They cited that some problems were not identified in testing; the tests only addressed one area of the system at a time, as if in a vacuum; the tests were conducted on a different operating environment from the Marine Corps’; some areas of SCS were not tested; and insufficient testing at the contractor’s facility led to additional on-site testing to correct problems. MCLB-Albany has devoted many resources to implementing interim solutions to resolve interface and operating environment problems. Analysts have resorted to these measures because SCS has on-line processing capabilities, while the legacy systems use batch processing. Additionally, SCS uses a different operating environment from the legacy systems. Even with these interim solutions, MCLB-Albany cannot ensure that data is carried from one operating environment to the next. Additionally, because MCLB-Albany has SCS and legacy systems operating in tandem, it spends more time and money maintaining and reconciling the systems and purifying and converting data. Although MCLB-Albany officials acknowledge that a cost exists for all of these interim solutions, they were unable to determine the cost of addressing interface problems. According to program managers, JLSC ceased to provide feedback to MCLB-Albany on SCS monthly activity reports. At the beginning of the effort, JLSC required status reports from MCLB-Albany, but it later directed MCLB-Albany to discontinue these reports. Considering that SCS still has unresolved problems, JLSC’s instruction to cease the flow of progress reports is puzzling. MCLB-Albany officials later informed us that informal communication with JLSC has subsequently improved with the new direction of MMSS; however, they still receive no feedback on their progress reports. MCLB-Albany continues to generate the status reports for its own benefit. Although the users continue to experience problems using SCS, program management at MCLB-Albany believes that SCS will be its asset management system into the future.
MCLB-Albany is scheduled to receive an update to SCS in 1996, which includes an upgrade to a modernized language because the older version will no longer be supported by the commercial market. The new language is expected to alleviate incompatibility problems in the operating systems. The Marines consider this update a top priority. Although officials at MCLB-Albany continue to request the functions that they did not get when SCS was deployed, JLSC has not responded. These missing functions are now under initiatives of other DOD agencies. Carl L. Higginbotham, Senior Advisor Christopher T. Brannon, Senior Evaluator Valerie A. Paquette, Staff Evaluator
Pursuant to a legislative requirement, GAO reviewed the Department of Defense's (DOD) Joint Logistics Systems Center's (JLSC) development and deployment of standard materiel management systems. GAO found that: (1) DOD development of nine integrated materiel management systems will cost more than the $5.3 billion originally estimated; (2) DOD plans to deploy each system individually at a selected site; (3) DOD is embarking on a new materiel management strategy to ensure that the additional funds spent on the systems are well invested; (4) DOD has not conducted economic or risk assessments of the new systems, or incorporated efforts to improve, consolidate, and privatize logistics operations; (5) DOD has failed to define the objectives, costs, and risks of its new materiel management strategy, thus denying DOD decisionmakers the opportunity to review the systems before deployment; (6) DOD is proceeding with its scheduled deployments without allocating necessary time for systems testing; (7) this action will increase the likelihood of DOD experiencing problems during systems testing; and (8) DOD will incur significant costs in operating and maintaining the legacy systems due to existing deficiencies within those systems.
Congress chartered Fannie Mae and Freddie Mac as for-profit, shareholder-owned corporations in 1968 and 1989, respectively. They share a primary mission, which is to enhance the liquidity, stability, and affordability of mortgage credit. To accomplish this goal, the enterprises purchase conventional mortgages that meet their underwriting standards, known as conforming mortgages, from primary mortgage lenders such as banks or savings associations. Mortgage lenders sell mortgages to transfer risk (especially interest rate risk in the case of fixed-rate mortgages) or increase liquidity. They can use the proceeds from selling mortgages to the enterprises to originate additional mortgages, or they may exchange a pool of mortgages for enterprise-backed MBS, which they can keep or sell. The enterprises package mortgages they purchase into MBS, which are sold to investors in the secondary mortgage market. In exchange for a fee (the guarantee fee), the enterprises guarantee the timely payment of interest and principal on MBS that they issue. These fees are typically incorporated into the interest rates charged to borrowers. The charter requirements for providing assistance to the secondary mortgage markets specify that those markets are to include mortgages on residences for low- and moderate-income families and require the enterprises to support mortgage financing in underserved areas. HERA established FHFA as an independent agency responsible for the safety and soundness and housing mission oversight of the enterprises. FHFA took over the oversight of the enterprises from the Office of Federal Housing Enterprise Oversight, formerly an independent entity within the Department of Housing and Urban Development (HUD). FHFA has a statutory responsibility to ensure that the enterprises operate in a safe and sound manner and that the operations and activities of each regulated entity foster liquid, efficient, competitive, and resilient national housing finance markets. Additionally, the Emergency Economic Stabilization Act of 2008 directed FHFA and other agencies to implement plans seeking to maximize assistance for homeowners and encourage mortgage servicers to take advantage of available programs to minimize foreclosures. HERA authorized the Director of FHFA to appoint FHFA as conservator or receiver for the enterprises for the purpose of reorganizing, rehabilitating, or winding up their affairs. As conservator, FHFA was authorized to take such action as may be necessary to put the regulated entity in a sound and solvent condition, as well as such action as may be appropriate to carry on the business of the regulated entity and to preserve and conserve the assets and property of the regulated entity. Upon placing the enterprises into conservatorships, FHFA succeeded by operation of law to the authority of the enterprises’ management, boards of directors, and shareholders during the period of the conservatorships. However, according to FHFA, it does not manage every aspect of the enterprises’ operations. Instead, FHFA reconstituted the enterprises’ boards of directors in 2008 and charged them with overseeing management’s day-to-day operation of the enterprises, subject to FHFA review and approval on certain matters. Fannie Mae and Freddie Mac retain their government charters and continue to operate legally as business corporations. FHFA initially outlined its understanding of its conservatorship obligations and how it planned to fulfill those obligations in a 2010 letter to Congress. 
In February 2012, FHFA sent Congress a strategic plan that set three strategic goals for conservatorship and elaborated on how FHFA planned to meet its conservatorship obligations. Most recently, under the current Director, whose term began in January 2014, FHFA issued an updated strategic plan in May 2014 that reformulated its three strategic goals. Using its authority provided in HERA, Treasury provides capital support to the enterprises while in conservatorships through senior preferred stock purchase agreements. Under these agreements, Treasury committed to provide up to $445.6 billion in assistance to the enterprises, of which the enterprises have drawn $187.5 billion to date. In exchange, the enterprises pay quarterly dividends to Treasury. Under the current terms of the agreements as amended, upon declaration of dividends, Fannie Mae and Freddie Mac must pay Treasury all of their quarterly positive net worth (if any) above a specified capital reserve amount. However, the agreements reduce this capital reserve amount to zero in January 2018. From 2008 through 2013, the federal government directly or indirectly supported over three-quarters of the value of new mortgage originations in the single-family housing market. Mortgages with federal support include those backed by the enterprises as well as mortgages insured by the Federal Housing Administration (FHA), which has experienced substantial growth in its insurance portfolio and significant financial difficulties. In light of developments concerning the enterprises and FHA, in 2013 we identified the role played by the federal government in the housing finance system as a high-risk area for the government. Subsequently, Congress considered a number of legislative proposals to make significant changes to the housing finance system. Three proposals—the Housing Finance Reform and Taxpayer Protection Act of 2014, S. 1217; the FHA Solvency Act of 2013, S. 1376; and the Protecting American Taxpayers and Homeowners Act, H.R. 2767—were reported out of committee during the 113th Congress (January 2013–January 2015), but no further action was taken. From 2003 to 2006, the enterprises saw the share of total first-lien mortgage originations they securitized into MBS decline from 51 percent to 32 percent (see fig. 1). This decrease coincided with the rapid expansion of nonprime lending and private-label MBS. However, there have been very few new issuances of private-label MBS since 2007. As that segment of the market virtually disappeared, the enterprises’ market share increased to a high of 65 percent of total originations in 2008 (even as the dollar volume of originations they securitized decreased from 2007) and remained nearly at that level for several years. Meanwhile, the share of first-lien mortgage originations that banks held in their portfolios generally decreased, from 34 percent in 2002 to a low of 12 percent in 2009. However, the percentage of loans held in banks’ portfolios increased from less than 20 percent in 2013 to more than 30 percent in 2014, where it remained through the first half of 2016. Simultaneously, the percentage of loan originations that the enterprises packaged into MBS and guaranteed dropped from 62 percent in 2013 to less than 50 percent in 2014, and fell further to 43 percent during the first half of 2016.
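The dividend terms described above amount to a simple quarterly calculation: each enterprise pays Treasury whatever positive net worth it holds above the applicable capital reserve amount. The following minimal Python sketch illustrates that arithmetic; the dollar figures and function name are illustrative assumptions for demonstration, not actual enterprise financials or agreement terms beyond those described in this report.

def quarterly_dividend(net_worth, capital_reserve):
    # Net worth sweep: pay the positive net worth amount (if any)
    # that exceeds the specified capital reserve amount.
    return max(0.0, net_worth - capital_reserve)

# Illustrative figures only, not actual enterprise financials.
print(quarterly_dividend(4.5e9, 0.6e9))  # 3.9e9: net worth above the reserve is swept
print(quarterly_dividend(0.4e9, 0.6e9))  # 0.0: no dividend when net worth is below the reserve
print(quarterly_dividend(4.5e9, 0.0))    # 4.5e9: reserve at zero, as scheduled for January 2018

Under this structure, any quarter with positive net worth above the reserve produces a payment to Treasury, and once the reserve falls to zero the entire positive net worth is swept.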
While the three goals FHFA outlined in its 2014 strategic plan for the conservatorships are similar to those in the previous 2012 plan in a number of ways, there are key differences that reflect a shift in priorities for the conservatorships and changing market conditions. Whereas in the 2012 plan FHFA stated that its goals were aimed at establishing a foundation for a new housing finance system in the future, FHFA stated that the 2014 plan and goals emphasize overseeing and managing the enterprises in their current state in accordance with statutory mandates. This shift in priorities is evident in the relative weight given to each goal, in changes to the wording of the three goals between the 2012 and 2014 strategic plans (see table 1), and in the actions FHFA is taking to further these goals. In addition, the previous strategic plan was produced while the enterprises were generating losses and the outlook for future losses was highly uncertain, according to FHFA, but the 2014 plan was issued after a string of profitable quarters for both enterprises. In the 2014 plan, FHFA indicated it was placing greater emphasis on its goal for maintaining credit availability and foreclosure prevention options (Maintain goal). FHFA increased the weight given to this goal in its annual scorecards for the enterprises, from 20 percent in 2012 and 2013 to 40 percent in 2014 through 2016. Additionally, FHFA changed the wording of the Maintain goal from “maintain foreclosure prevention activities and credit availability for new and refinanced mortgages” to “maintain, in a safe and sound manner, foreclosure prevention activities and credit availability for new and refinanced mortgages to foster liquid, efficient, competitive, and resilient national housing finance markets.” The new wording more closely aligns the goal with FHFA’s responsibilities outlined in statute and Congress’s stated purpose for the enterprises. Many of the activities that were identified in the 2012 plan under the Maintain goal continue to be stressed in the 2014 plan. These activities include the following: Representations and warranties framework. FHFA and the enterprises undertook a multiyear effort to develop a new framework governing representations and warranties—the assurances lenders provide that mortgage loans sold to the enterprises comply with the standards outlined in the enterprises’ selling and servicing guides, including underwriting and documentation. The objective of the framework is to enhance transparency and certainty for lenders by clarifying when a mortgage loan may be subject to repurchase. This clarity may give lenders more confidence to lend, which helps maintain borrowers’ access to credit. For example, currently lenders are eligible for relief from certain representations and warranties when borrowers make 36 consecutive payments with no more than two delinquencies of 30 days or less. The enterprises categorized loan origination and servicing defects and the appropriate remedies available to address them in the framework and established an independent dispute resolution program to resolve contested loan-level disputes about repurchase requests. The final piece of the framework was put in place in February 2016.
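The payment-history criterion for relief described above is essentially a mechanical rule, which the following Python sketch expresses for illustration. The encoding of the payment record and the function name are hypothetical; this is a sketch of the rule as stated in this report, not the enterprises' actual eligibility logic.

def eligible_for_relief(days_late_by_month):
    # Illustrative relief rule: 36 consecutive monthly payments in
    # which no payment was more than 30 days late and at most two
    # payments were 1-30 days late.
    # days_late_by_month: one integer per month (0 = paid on time).
    window = days_late_by_month[:36]
    if len(window) < 36:
        return False          # fewer than 36 payments made so far
    if any(d > 30 for d in window):
        return False          # a delinquency longer than 30 days
    minor_delinquencies = sum(1 for d in window if 0 < d <= 30)
    return minor_delinquencies <= 2

print(eligible_for_relief([0] * 36))                   # True: a clean 36-month record
print(eligible_for_relief([0, 15, 0, 28] + [0] * 32))  # True: two short delinquencies allowed
print(eligible_for_relief([45] + [0] * 35))            # False: one payment over 30 days late

A rule this explicit is what gives lenders the certainty the framework is intended to provide: a lender can determine from the payment record alone whether repurchase exposure on a loan has ended.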
In an effort to help borrowers avoid foreclosure, FHFA worked with the enterprises to align their servicing policies and develop loss mitigation tools that included loan modifications, streamlined refinance options, and foreclosure prevention actions including short sales and deeds-in-lieu of foreclosure. FHFA and the enterprises also made enhancements to requirements related to foreclosure alternatives, unemployment forbearance, and rate-reset notifications. For example, the enterprises announced in June 2014 that mortgage servicers could approve eligible borrowers for extended unemployment forbearance without obtaining prior written authorization from the enterprises. Finally, the enterprises have used sales of nonperforming loans to transfer pools of severely delinquent loans to new buyers and servicers with the goal of providing more favorable outcomes for borrowers while also reducing losses to the enterprises and, therefore, to taxpayers.

In addition, FHFA identified a number of new activities, some of which the enterprises have begun implementing, that serve to expand its efforts to maintain credit availability and foreclosure prevention as market conditions improve, including lowering the minimum down payment from 5 percent to 3 percent; providing for principal forgiveness for an estimated 30,000 borrowers in default; issuing proposed rules outlining the enterprises' duty to serve certain underserved segments of the market; and transferring funds to a statutory housing trust fund that will be distributed through grants made to the states.

FHFA also reframed the enterprises' actions in multifamily production to focus on maintaining credit availability rather than on reducing their market share. Beginning in 2013, FHFA imposed a production cap on the enterprises' multifamily business. However, since 2014 FHFA has allowed the enterprises to exclude from the caps affordable housing loans, loans to small multifamily properties, and loans to manufactured housing rental communities. In addition, FHFA increased the 2016 multifamily lending caps for the enterprises from $31 billion to $36.5 billion. The adjustment for 2016 was based on increased estimates of the overall size of the 2016 multifamily finance market due to continued high levels of property acquisitions and deliveries of newly constructed apartment units, as well as record levels of maturing loans that required refinancing.

FHFA's second goal in the 2014 plan, like the corresponding goal in the 2012 plan, is aimed at transferring risk from the enterprises (and taxpayers) to private investors. However, the two goals are worded differently due to different approaches to decreasing risk. In 2012, the goal was to "gradually contract the enterprises' dominant presence in the marketplace while simplifying and shrinking their operations" (Contract goal). The 2014 plan rephrases the goal as to "reduce taxpayer risk through increasing the role of private capital in the mortgage market" (Reduce goal) and shifts away from decreasing the enterprises' role in the housing market. In addition, FHFA reduced the weight that activities under the second goal were given in FHFA's annual scorecards, from 50 percent in 2013 to 30 percent in 2014 through 2016. As FHFA noted in the 2014 strategic plan, its statutory responsibilities as conservator do not include making policy decisions on the future of housing finance reform.
FHFA officials told us that the current Director did not believe that shrinking the enterprises' dominant presence in the market was FHFA's decision to make because Congress had not yet acted on housing finance reform. Additionally, FHFA noted that it was concerned about the effects of shrinking the enterprises' operations on mortgage market liquidity and the availability of mortgage credit, which it was seeking to support through its Maintain goal. In other words, there may have been tension between the Contract goal and the Maintain goal in the 2012 plan, and shifting risk from taxpayers to private market participants without contracting the enterprises' presence in the market helped eliminate this tension. As a result, one of the actions outlined in the 2012 plan related to efforts to contract the enterprises' role—continued gradual increases in the enterprises' guarantee fee pricing—was eliminated. Another action—the enterprises' multifamily production activities—was reframed; in the 2014 plan these activities focus on maintaining access to credit and are therefore included in the Maintain goal, as discussed previously.

In its efforts to support the new focus of the Reduce goal, FHFA has taken several actions, including the following:

Credit risk transfers. FHFA directed the enterprises to transfer a portion of the credit risk they face on the mortgages they securitize to private investors. These transfers of risk can occur either before ("front-end") or after ("back-end") the enterprises purchase mortgages. The enterprises primarily employed back-end risk transfers in the first 3 years of the initiative (2013–2015), but recently they have been trying various structures, including some front-end transfers. Under the debt issuance structure for credit risk transfers (the structure the enterprises have used most), the enterprises sell debt to investors and receive payment up front at the time of the sale. The enterprises repay the debt based on the performance of a reference pool of mortgages, with the investor earning a higher return if the mortgages perform well and a lower return if they perform poorly. From 2013 through 2015, the enterprises completed 70 transactions that transferred credit risk totaling $30.6 billion on single-family mortgages with an unpaid principal balance of about $838 billion. In June 2016, FHFA issued a request for input on proposals for the enterprises to adopt a number of front-end credit risk transfer structures.

Private mortgage insurance standards. FHFA and the enterprises updated eligibility requirements for private mortgage insurers seeking to insure loans that are eligible for purchase by the enterprises. These requirements help to ensure the stability of mortgage insurance companies that are counterparties of the enterprises, reducing counterparty risk to the enterprises and, by extension, risk to taxpayers. Among other things, the requirements establish financial standards for private mortgage insurers to demonstrate adequate resources to pay claims and operational standards relating to quality control processes and performance metrics. The enterprises began implementing the requirements in the second half of 2015, and all the revised requirements were effective December 31, 2015.

FHFA narrowed the focus of its goal of building a securitization infrastructure (Build goal) from creating a new secondary mortgage market infrastructure to primarily addressing the enterprises' current operational needs.
But FHFA kept the weight assigned to this goal in the annual scorecards it issued for the enterprises at 30 percent (which is the weight it was given each year from 2012 through 2016). The reduction in scope is evident in the changed wording of the goal, from "build a new infrastructure for the secondary mortgage market" to "build a new single-family securitization infrastructure for use by the enterprises and adaptable for use by other participants in the secondary market in the future." One example of refining the goal's scope was discontinuing the effort to develop a model contractual and disclosure framework. Since 2012, the enterprises had been working to develop the framework to align the contracts and data disclosures that supported fully guaranteed MBS issued by the enterprises and craft a set of uniform contractual terms and standards for transparency for MBS that carried no or only a partial federal guarantee and that could be broadly accepted by issuers and investors. To develop this framework, FHFA was incorporating input it received in 2012 on a proposal for a standardized pooling and servicing agreement. According to FHFA, by the end of 2013 the enterprises had made progress toward developing preliminary recommendations for the framework. However, the 2014 strategic plan did not mention the framework. FHFA officials said that the effort to develop the framework was distracting FHFA from focusing on addressing the enterprises' securitization infrastructure needs, particularly those related to the implementation of a Single Security (discussed later). They said discontinuing the work on the framework was part of a strategy to mitigate risk through managing the scope of the infrastructure they were building. Additionally, the officials said that private industry groups commented to FHFA that such an effort should be left to the private sector.

Despite the change in the goal's scope in the 2014 plan, FHFA is continuing efforts begun earlier to create a new securitization infrastructure—most notably, a common securitization platform. The common securitization platform is a technology and operational platform that will perform many of the enterprises' current securitization functions for single-family mortgages on behalf of the enterprises. FHFA directed the enterprises to develop a common platform to replace the information technology platforms at each enterprise that support their securitization activities. While the common platform is designed to meet the current securitization activities of the enterprises, it is being built using open architecture and industry standard software, systems, and data requirements with the goal of being adaptable for use by other market participants in the future. According to FHFA officials, focusing first on meeting the known requirements of the enterprises made the most sense for defining the scope of the work and managing the project. Nonetheless, they said the ultimate goal of building an infrastructure that can be used by other market participants remains an important part of the Build goal and has informed many decisions FHFA has made.

FHFA expects the platform to be implemented in two releases: Implementation of the first release is scheduled to occur before the end of 2016 and should allow Freddie Mac to use the platform to issue its current single-class securities.
In preparation for the first release, Freddie Mac and Common Securitization Solutions, LLC, a joint venture of the enterprises that was formed to operate the platform, successfully completed system-to-system and end-to-end testing of the functionality of the platform in early 2016. The second release is scheduled for 2018, when both enterprises plan to use the platform to issue a Single Security, which FHFA is developing to replace the different MBS they currently offer. Unlike the enterprises' current products, the new securities will have the same features, and the goal is for the market to treat them as fungible irrespective of the enterprise that issued them. In addition, the enterprises will be able to commingle first-level and second-level securities from either enterprise in the second-level securities they issue. As of July 2016, FHFA had finalized the features of the Single Security after soliciting and incorporating input from the public. FHFA also updated loan-level disclosures and announced that the enterprises will begin issuing these securities in 2018.

To support the Build goal, FHFA and the enterprises continue to develop mortgage data standards for the single-family loans they purchase through the Uniform Mortgage Data Program. This program has provided lenders with common and consistent definitions and specifications for various mortgage data, including appraisal, loan delivery, mortgage loan application, and closing disclosure information. The enterprises currently collect standardized appraisal and loan delivery data, and expect to implement a data collection system for the closing disclosure dataset in the third quarter of 2017. As part of a new mortgage loan application, the enterprises released to the industry, in the third quarter of 2016, technical requirements and an associated dataset for electronically capturing loan application information. Implementation of the new loan application and associated dataset is likely to occur in the first quarter of 2019, according to FHFA.

Prior to 2014, FHFA was taking explicit steps to shrink the enterprises' role in the secondary market. These actions included gradual increases in guarantee fees and strict caps on the total amount of multifamily loans that the enterprises could purchase. As discussed earlier, these actions were stopped under the current Director with FHFA's adoption of the 2014 strategic plan. However, other actions that serve to reduce the depth and breadth of the enterprises' activities and that are written into the enterprises' agreements with Treasury continue. These actions include reducing retained mortgage portfolios and reducing the enterprises' capital bases to $0 by January 2018.

FHFA stated that it changed its strategy for the conservatorships and took actions to maintain the enterprises' current state and role in the secondary market in the absence of congressional direction on housing finance system reform. At the same time, FHFA officials told us that the strategy was intended to be neutral in terms of the enterprises' future structure and left all reform options open. Our analysis comparing FHFA's actions with legislative proposals to reform the enterprises' structure found that some proposals continue or build upon actions FHFA has taken, such as the common securitization platform and credit risk transfers. However, proposals that incorporate the same future structure for the enterprises do not consistently build upon the same actions.
For example, one legislative proposal that replaces the enterprises with a single federal agency builds upon FHFA's credit risk transfer initiative, but other similar proposals do not explicitly address credit risk transfers. As a result, FHFA's actions are not necessary for transitioning to the particular structures in any of these proposals, such as a single governmental agency or fully privatized companies. In addition, we found that the same actions were included in multiple proposals that envisioned different future structures for the enterprises.

Industry stakeholders generally said that FHFA's recent actions have not advanced or constrained any of the future structures for the enterprises outlined in legislative proposals. Representatives from two industry associations told us that FHFA's recent actions have been neutral as to the future structure. However, other industry stakeholders who are members of a third industry association noted that FHFA had taken steps prior to 2014 to harmonize the enterprises' policies, procedures, and products, some of which continued after the publication of the 2014 plan, such as the development of a Single Security and the common securitization platform. Taking these steps could facilitate (but would not require) merging the enterprises into a single entity. But whether that single entity would be a government agency, government corporation, or private entity was unclear to these stakeholders. Officials from another industry association said that instead the enterprises could be recapitalized as competitors to one another but that it was unclear how they would be able to balance competing against each other with working together to ensure the common securitization platform works well.

As outlined in the 2012 strategic plan, FHFA set out to address a number of barriers to entry into the secondary mortgage market. By creating a common securitization platform using open architecture, a model pooling and servicing agreement (which evolved into the contractual and disclosure framework discussed previously), and standardized mortgage data, along with contracting the enterprises' presence through increased guarantee fees, FHFA sought to make entry into the secondary market easier for private entities. However, over time most of these actions were scoped down or eliminated, resulting in a reduced emphasis on addressing barriers.

Some industry stakeholders said that FHFA's shift in direction sent mixed messages to market participants and increased uncertainty about the role private entities should be playing in the secondary mortgage market. For example, some industry stakeholders we spoke with perceived a technological barrier to entry into the secondary mortgage market. The shift in the Build goal from 2012 to 2014 related to the common securitization platform does not clearly address this barrier, because it remains unclear whether private entities will be able to use the platform. According to FHFA officials, many decisions remain to be made about whether, when, and for what purposes private entities will be able to use the platform, which adds to uncertainty. As another example, FHFA and the enterprises began developing a contractual and disclosure framework but decided to halt the effort in 2014. The framework had the potential to help address some of the governance issues that some industry stakeholders said were holding back private-label MBS in the secondary mortgage market.
These governance issues include a lack of alignment of interests among parties to a securitization transaction and processes for holding servicers for the underlying mortgages accountable for their performance. Some industry stakeholders we spoke to said that completing the framework, while potentially helpful in addressing some issues, would have been unlikely to fully address the barriers that continue to prevent the private-label MBS market from growing. According to FHFA officials, the private sector has worked on developing its own framework, and this effort is ongoing.

Further, changes to FHFA's Maintain goal from the 2012 plan to the 2014 plan have expanded the enterprises' reach to put them in competition with other market participants. Rather than addressing barriers to entry for private entities, these actions may enhance the enterprises' existing advantages, which themselves serve as barriers to entry, and add to market participants' uncertainty about their role in the market relative to the enterprises. For example, allowing down payments as low as 3 percent expands the market segments the enterprises serve. According to an industry stakeholder, doing so could push other market participants out of these segments because the enterprises have built-in advantages—such as lower cost of funding and a government guarantee—that may make their products more attractive. Another stakeholder said the proposed rule on the enterprises' duty to serve underserved markets could also increase the enterprises' competition with private entities, depending on future decisions.

Increases in guarantee fees that occurred in the first few years of the conservatorship began to address the barrier posed by the enterprises' pricing advantage. According to the Urban Institute, other options for lenders, such as holding certain loans in portfolio, made more financial sense as the cost of selling loans to the enterprises increased. However, not continuing the increases as envisioned in the 2012 plan under the Contract goal could make these options less attractive than selling to the enterprises, according to an industry stakeholder. As a result, lenders may focus on making loans that can be sold to the enterprises and place less emphasis on reaching segments of the population that do not qualify for those loans. On the other hand, lower guarantee fees keep mortgages affordable, which has been the aim of the Maintain goal in both the 2012 and 2014 plans.

In addition, the Reduce goal includes developing new front-end credit risk transfers to increase the amount of risk borne by private entities. Some industry stakeholders expressed concern that possible options for front-end credit risk transfer transactions could increase barriers to entry for mortgage originators, depending on how the transactions are structured. FHFA officials noted that they would be reviewing comments on how to structure these front-end transactions after the close of the comment period on October 13, 2016.

FHFA has taken some actions that could increase the likelihood of drawing on Treasury's capital commitment under the agreements as well as other actions that could have the opposite effect, and the net effect of these actions is uncertain. Some of FHFA's newer actions supporting the 2014 plan could increase credit risk and therefore the likelihood of needing further assistance from Treasury.
For example, one industry stakeholder said that allowing the enterprises to purchase riskier mortgages, such as those with a 3 percent down payment, and expanding the enterprises' service to certain underserved segments of the market could increase the likelihood of a draw on Treasury under the agreements. However, the underwriting requirements for these mortgages and the fees the enterprises collect help offset the increased risk. In addition, FHFA officials noted that these actions currently represent a small portion of the enterprises' business and therefore would have a minimal impact on the likelihood of drawing on Treasury's funding commitment.

Several actions FHFA is taking, such as credit risk transfers and the Single Security, would reduce the likelihood of needing additional Treasury support, according to FHFA. These and other actions, including private mortgage insurer standards, the representations and warranties framework, and loss mitigation and foreclosure prevention activities, reduce credit risk or counterparty risk to the enterprises. While FHFA has directed the enterprises to engage in a growing set of credit risk transfers including front-end transfers, such as deeper mortgage insurance, the enterprises (as of 2016) have mostly conducted back-end risk transfers. With back-end transfers, the enterprises hold the credit risk until they complete the credit risk transfer transactions. In front-end transactions, private entities agree to take on a portion of the credit risk before or at the same time the loans are delivered to the enterprises.

FHFA has stated that the enterprises need to issue a large enough volume of transactions to ensure a liquid market for credit risk transfer products. But FHFA has also stated that the enterprises need to avoid an excess supply of any particular product to the market that, for example, causes investors to abandon the market because the value of their existing holdings is reduced. Some industry stakeholders we spoke with noted that the enterprises have not been particularly attuned to investor demand when determining the timing, volume, and pricing of credit risk transfer transactions. As a result, investors' demand for credit risk transfer transactions may not meet the volume offered, and the enterprises could find themselves retaining more risk than planned.

The enterprises' results from their annual stress tests have improved each year since 2014 (see fig. 2). Although linking the changes in results to specific actions they and FHFA have taken is difficult, these results suggest that draws from Treasury due to negative economic conditions are less likely than they were several years ago. FHFA officials stated that loss mitigation actions, sales of illiquid assets from retained portfolios, and better credit quality in newer loans all contribute to the improved results, along with improved market factors such as house price appreciation. According to FHFA, credit risk transfers would also have an impact on the stress test results, even though the overall effect has been small given that the program is in the relatively early stages. However, the stress tests show that both enterprises would still need capital support from Treasury under a severely adverse economic scenario.
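As a simplified illustration of the back-end, debt-issuance credit risk transfer structure described earlier—in which investors are repaid based on the performance of a reference pool of mortgages—the following sketch shows how allocated pool losses can reduce what note holders receive. The loss-allocation rule, the detachment share, and all figures are illustrative assumptions, not the enterprises' actual deal terms.

```python
# Minimal sketch of a debt-issuance credit risk transfer: investors buy notes
# up front, and repayment of note principal depends on losses in a reference
# pool of mortgages. Real transactions use tranched structures; here investors
# simply absorb pool losses up to an assumed share of the pool.

def note_repayment(note_principal: float,
                   pool_balance: float,
                   pool_losses: float,
                   detachment_share: float = 0.04) -> float:
    """Return remaining note principal after reference-pool losses are allocated."""
    absorbed = min(pool_losses, detachment_share * pool_balance)
    return max(0.0, note_principal - absorbed)

# Hypothetical: $400 million of notes against a $10 billion reference pool.
print(note_repayment(400e6, 10e9, pool_losses=50e6))   # mild losses: investors repaid 350e6
print(note_repayment(400e6, 10e9, pool_losses=600e6))  # severe losses: investors repaid 0.0
```

The second call illustrates the mechanism that reduces the likelihood of a Treasury draw: in a severe loss scenario, part of the credit loss falls on private note holders rather than on the enterprises and, ultimately, taxpayers.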
As noted earlier, in January 2018 the enterprises' capital reserve amount will fall to $0 as required by the agreements with Treasury, meaning any quarterly losses—including those due to market fluctuations and not necessarily to economic conditions—will require additional draws from Treasury under the agreements.

In the 8 years since the enterprises were placed in conservatorships, Congress has not enacted legislation that establishes objectives for concluding the conservatorships and the future structure of the enterprises. One of the long-standing principles we have identified that should serve as a guide for providing government assistance to private market participants is setting clear objectives. In this case, clarity on issues related to comprehensive housing finance system reform is needed in order for the enterprises to exit conservatorship. According to FHFA, setting objectives for the conclusion of the conservatorships should be left to Congress.

In a 2014 report, we outlined a framework consisting of nine elements that Congress could use to assess or craft proposals as it considers changes to the housing finance system. These elements are (1) clearly defined and prioritized housing finance system goals; (2) policies and mechanisms that are aligned with goals and other economic policies; (3) adherence to an appropriate financial regulatory framework; (4) government entities that have capacity to manage risks; (5) protections for mortgage borrowers and reductions in barriers to mortgage market access; (6) protection for mortgage securities investors; (7) consideration of the cyclical nature of housing finance and the impact of housing finance on financial stability; (8) recognition and control of fiscal exposure and mitigation of moral hazard; and (9) emphasis on implications of the transition.

As noted earlier, the 113th Congress considered a number of proposals for reforming the housing finance system, but none were enacted. Other proposals have been introduced in the 114th Congress but have not yet been passed by either the Senate or the House of Representatives (see app. I). These include the Financial Regulatory Improvement Act of 2015, S. 1484; Mortgage Finance Act of 2015, S. 495; Housing Finance Restructuring Act of 2016, H.R. 4913; and Partnership to Strengthen Homeownership Act of 2015, H.R. 1491.

Given the unknown duration of the conservatorships, without Congress providing explicit direction for the future of the enterprises, a change in leadership at FHFA could again shift priorities for the conservatorships and set the enterprises on a new path with another vision for their role and future structure. Such changes in direction could send mixed messages to market participants and add to existing uncertainty.

Eight years after entering conservatorship, the enterprises' futures remain uncertain and billions of taxpayer dollars remain at risk. Although FHFA has established goals for the conservatorships, its goals have been somewhat in tension with each other. In addition, the actions taken by FHFA to implement its goals have lacked a consistent direction over time, and FHFA has not clarified how to balance different priorities. As we have previously found, the federal government should set clear goals and objectives when providing financial assistance to private market participants. However, Congress has yet to establish objectives for the future of the enterprises after conservatorship or the future federal role in housing finance. Without Congress providing explicit direction for the future of the enterprises, the conservatorships will continue.
Prolonged conservatorships and a change in leadership at FHFA could again shift priorities for the conservatorships, which in turn could send mixed messages and create uncertainties for market participants and hinder the development of the broader secondary mortgage market. By setting a clear direction for the future of the housing finance system, Congress would enable FHFA to use the conservatorships of the enterprises to facilitate the transition to a new structure.

To reduce uncertainty and provide FHFA sufficient direction for carrying out its responsibilities as conservator of the enterprises, Congress should consider legislation that establishes objectives for the future federal role in housing finance, including the structure of the enterprises, and a transition plan to a reformed housing finance system that enables the enterprises to exit conservatorship.

We requested comments on a draft of this product from FHFA, Treasury, and the previous FHFA Acting Director. The Acting Deputy Director of the Division of Conservatorship provided us with oral comments, stating that FHFA agreed with our overall findings. He also provided some technical clarifications, which we incorporated as appropriate. The previous FHFA Acting Director also provided us with technical comments in an e-mail, which we incorporated as appropriate. Treasury did not provide comments.

We are sending copies of this report to the appropriate congressional committees, the Director of FHFA, and the Secretary of the Treasury. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

This appendix discusses the legislative proposals that were introduced in the United States Senate and the House of Representatives between March 2013 and July 2016 that addressed the future structure of Fannie Mae and Freddie Mac and the secondary mortgage market.

In addition to the contact named above, Karen Tremba (Assistant Director), Don Brown (Analyst-in-Charge), Giselle Cubillos-Moraga, Rachel DeMarcus, Davis Judson, Risto Laboski, Marc Molino, Jennifer Schwartz, Mathew Scirè (retired), Tyler Spunaugle, and Jessica Walker made significant contributions to this report.
In 2008, FHFA used its authority under the Housing and Economic Recovery Act to place Fannie Mae and Freddie Mac into conservatorships out of concern that their deteriorating financial condition threatened the stability of the financial market. Eight years later, the enterprises remain in conservatorships. However, FHFA says the conservatorships were not intended to be permanent. FHFA has issued two strategic plans for its conservatorship of the enterprises, one in 2012 and another in 2014. GAO was asked to examine FHFA's actions as conservator. This report addresses (1) the extent to which FHFA's goals for the conservatorships have changed and (2) the implications of FHFA's actions for the future of the enterprises and the broader secondary mortgage market. GAO analyzed and reviewed FHFA's actions as conservator and supporting documents; legislative proposals for housing finance reform; the enterprises' senior preferred stock agreements with Treasury; and GAO, Congressional Budget Office, and FHFA inspector general reports. GAO also interviewed FHFA and Treasury officials and industry stakeholders. The Federal Housing Finance Agency (FHFA) issued a new strategic plan for the conservatorships of Fannie Mae and Freddie Mac (the enterprises) in 2014 with reformulated goals and supporting actions that reflect a shift in priorities and changing market conditions. While the three goals in the 2014 strategic plan are broadly similar to those in the previous plan issued in 2012, FHFA changed the weight and wording of the goals (see table) to align the plan more closely with FHFA's statutory responsibilities. Specifically, compared with the 2012 plan FHFA (1) increased its emphasis on maintaining credit availability and foreclosure prevention options; (2) shifted away from shrinking the enterprises as a way to reduce taxpayer risk (focusing instead on transferring credit risk to private investors, for example); and (3) reduced the scope of the securitization infrastructure being built, such as a new technology platform for securitizing mortgages, to focus on meeting the enterprises' current needs. In the absence of congressional direction, FHFA's shift in priorities has altered market participants' perceptions and expectations about the enterprises' ongoing role and added to uncertainty about the future structure of the housing finance system. In particular, FHFA halted several actions aimed at reducing the scope of enterprise activities and is seeking to maintain the enterprises in their current state. However, other actions (such as reducing their capital bases to $0 by January 2018) are written into agreements for capital support with the Department of the Treasury (Treasury) and continue to be implemented. In addition, the change in scope for the technology platform for securitization puts less emphasis on reducing barriers facing private entities than previously envisioned, and new initiatives to expand mortgage availability could crowd out market participants. Furthermore, some actions, such as transferring credit risk to private investors, could decrease the likelihood of drawing on Treasury's funding commitment, but others, such as reducing minimum down payments, could increase it. GAO has identified setting clear objectives as a key principle for providing government assistance to private market participants. 
Because Congress has not established objectives for the future of the enterprises after conservatorships or the federal role in housing finance, FHFA's ability to shift priorities may continue to contribute to market uncertainty. Congress should consider legislation that would establish clear objectives and a transition plan to a reformed housing finance system that enables the enterprises to exit conservatorship. FHFA agreed with our overall findings.
According to the July 2012 National Strategy for Biosurveillance, biosurveillance is the ongoing process of gathering, integrating, interpreting, and communicating essential information related to all-hazards threats or disease activity affecting human, animal, or plant health, for the purpose of (1) achieving early detection and warning, (2) contributing to overall situational awareness of the health aspects of the incident, and (3) enabling better decision making at all levels.

As defined in the NBIC Strategic Plan, biosurveillance integration is combining biosurveillance information from different sources and domains (e.g., human, animal, and plant health; food and environmental safety and security; and homeland security) to provide partners and stakeholders with a synthesized view of the information, and what it could mean. The goal is to create new meaning—that is, to provide insights that cannot be gleaned in isolation, leading to earlier warning of emerging events and shared situational awareness, so that decisions can be made in a shared risk environment that considers all domains.

Example of a biological event monitored by the National Biosurveillance Integration Center: Middle East respiratory syndrome coronavirus (MERS-CoV). Since it was first recognized in September 2012 in Saudi Arabia, MERS-CoV has been detected in nearly 30 countries, including the United States. As of July 2015, there have been more than 1,300 confirmed cases and over 500 deaths, the vast majority of which have been in Saudi Arabia. Two cases have been detected in the United States, both in patients who had recently traveled to Saudi Arabia. MERS-CoV is characterized as a potentially severe respiratory illness, and symptoms may include fever, cough, shortness of breath, congestion of the nose and throat, and diarrhea. Camels are considered the likely source for human infections. As of June 2015, human-to-human transmission has been limited, and the risk of infection to travelers visiting the Arabian Peninsula is considered to be low.

According to the NBIC Strategic Plan, shared situational awareness across the biosurveillance community is achieved cooperatively by entities that integrate mission essential, overlapping portions of their individual situational awareness for a unified purpose, leading to a common picture or understanding of potential and ongoing biological events. Further, the plan notes that shared situational awareness of the broader biological domain may provide insights that cannot be gleaned in isolation, and thus enhance the likelihood of identifying an event earlier and with more certainty.

The importance of biosurveillance integration has also been described by key national planning documents. In July 2012, the White House issued the National Strategy for Biosurveillance, which describes the U.S. government's approach to strengthening biosurveillance. Although the strategy does not specifically identify roles for NBIC, it does emphasize the need for integration across disparate information sources, including data derived from intelligence, law enforcement, environmental, plant, animal, and other relevant areas. In June 2013, the White House's Office of Science and Technology Policy issued the S&T Roadmap. Building upon the National Strategy for Biosurveillance, the Roadmap identifies biosurveillance capability needs and key research and development priorities, including those related to integration.
For example, the roadmap proposes the development of a national, interagency biosurveillance data-sharing framework that integrates data and information from disparate sources, as well as the development of tools that enhance the efficient manipulation of large data sets, including social media.

As shown in table 2, the 9/11 Commission Act outlines certain requirements for NBIC. Drawing upon these requirements as well as the July 2012 NBIC Strategic Plan, we identified three main roles that NBIC, as a federal-level biosurveillance integrator, must carry out to achieve the duties and outcomes described by NBIC's authorizing legislation. Senior NBIC officials agreed that these three roles are consistent with the center's responsibilities. These roles are not mutually exclusive and can reinforce one another. For example, NBIC's efforts as an Innovator might result in the development of data that could enhance its role as an Analyzer by providing the center with another dataset to review.

Example of a biological event monitored by the National Biosurveillance Integration Center: Ebola virus disease (EVD). Since late 2013, the World Health Organization (WHO) has reported a cumulative total of more than 27,000 suspected, probable, and confirmed cases of EVD and over 11,000 related deaths as of June 2015. The vast majority of cases have been in West Africa, but there have also been cases in the United States. Eleven patients with EVD have been treated in the U.S., of whom 9 recovered and 2 died. Of the 11 cases, 9 were presumed to have been contracted in West Africa and 2 were presumed to have been contracted at a Texas hospital by nurses treating an infected patient. EVD symptoms typically develop 2 to 21 days after exposure to Ebola virus. Symptoms include fever, headache, joint and muscle aches, impaired liver and kidney function, and stomach pain, and the disease is frequently fatal. Although the WHO has classified the West African EVD epidemic as a Public Health Emergency of International Concern, the outbreak is considered to be unlikely to significantly affect U.S. public health.

Example of a biological event monitored by the National Biosurveillance Integration Center: Measles in the United States. From December 2014 through February 2015, state and local health departments reported 171 measles cases across 20 states and the District of Columbia. Most of these cases were linked to an ongoing outbreak associated with Disneyland theme parks in California. Measles is a highly contagious viral illness that can spread rapidly in communities without proper vaccination. Symptoms include high fever, cough, runny nose, watery eyes, and rash, and the disease can be fatal. Measles was officially declared eliminated in the U.S. in 2000, and cases in the U.S. have primarily been the result of international travel to countries experiencing outbreaks. Outbreaks in the U.S. have persisted mainly due to the increase in unvaccinated people.

NBIC has established an Advisory Board of senior officials from NBIS member departments and agencies and DHS to assist in achieving NBIC goals. The board is to meet at least twice a year and is to be chaired by DHS's Chief Medical Officer, with a cochair that is to be rotated annually among the federal partners by a majority vote.
The NBIC Advisory Board members are to provide formal recommendations to the Advisory Board Chair and Cochair on: (1) identifying, prioritizing, and addressing NBIC and other appropriate operational and programmatic needs; (2) reviewing draft guidance and other supporting documents related to national biosurveillance strategy and policy, as appropriate; and (3) improving communications and collaboration among local, state, tribal, territorial, and federal interagency partners.

The NBIC Interagency Working Group is to provide support and respond to taskings from the Advisory Board to assist in addressing NBIC operational, programmatic, and scientific issues. The working group is to consist of senior-level federal officials from NBIS member departments and agencies and the Executive Office of the President who are authorized to make recommendations on behalf of their organizations. Each agency is to have at least one working group member, but can have more based on the relevance to their missions of the topics to be covered by the working group or subworking groups. The working group is to meet as needed, but generally more frequently than the Advisory Board, according to NBIC officials.

The NBIS is a consortium of federal partners that was established to rapidly identify and monitor biological events of national concern and to collect, analyze, and share human, animal, plant, food, and environmental biosurveillance information with NBIC. The NBIS community predated the enactment of the 9/11 Commission Act. Beginning in 2004, DHS coordinated the NBIS community and developed an information technology (IT) system to integrate other agencies' biosurveillance information, an effort that was moved among several DHS directorates, including DHS's Science and Technology Directorate. In 2007, DHS created the Office of Health Affairs, headed by the DHS Chief Medical Officer, to lead DHS's biodefense activities and provide timely incident-specific guidance for the medical consequences of disasters. At that time, DHS placed the responsibility for coordinating the NBIS in the Office of Health Affairs. Shortly after that, the 9/11 Commission Act created NBIC and gave it responsibility for coordinating the NBIS. NBIC has remained in the Office of Health Affairs since that time.

Example of a biological event monitored by the National Biosurveillance Integration Center: Porcine epidemic diarrhea virus (PEDv). Between spring 2013 and April 2015, there were 11,364 confirmed samples of PEDv from 35 states. As a result, 5 countries and the European Union issued trade restrictions against U.S. swine imports, though some have since lifted those restrictions. Hog and pig farming is a multi-billion dollar industry; however, the economic decline due to PEDv has not been as drastic as initially predicted. PEDv is a highly infectious virus specific to swine and does not affect humans or other species. Symptoms in pigs include diarrhea, vomiting, and anorexia, and the virus is particularly deadly to young pigs. In June 2014, the U.S. Department of Agriculture issued a Federal Order requiring mandatory reporting of all novel swine enteric coronavirus diseases, including PEDv.

The 9/11 Commission Act outlines a number of responsibilities for member agencies. For example, the member agencies are to use their best efforts to integrate biosurveillance information into NBIC and connect their biosurveillance data systems to the NBIC data system under mutually agreed protocols.
Further, per the act, member agencies are to provide personnel to NBIC under an interagency personnel agreement and consider the qualifications of such personnel necessary to provide human, animal, and environmental data analysis and interpretation support to NBIC.

We surveyed and interviewed officials from 19 federal departments and their component agencies across 13 of the 14 departments and agencies that compose the NBIS. On the basis of their roles and responsibilities related to biosurveillance, we categorized the NBIS partner agencies into three groups:

Primary biosurveillance agencies: Primary biosurveillance agencies have major biosurveillance mission responsibilities that include collecting or analyzing biosurveillance information for the purposes of detecting, monitoring, or responding to biological events. These agencies generate information and develop subject matter expertise in pursuit of their missions that is directly relevant to disease detection and monitoring. In addition, they consume information from multiple sources—including nonfederal sources—to help achieve their missions. Examples of primary biosurveillance agencies include HHS's Centers for Disease Control and Prevention (CDC) and USDA's Animal and Plant Health Inspection Service (APHIS). Eleven of the 19 NBIS partners we interviewed and surveyed are primary biosurveillance agencies.

Support biosurveillance agencies: Support biosurveillance agencies do not have missions that directly involve disease detection and monitoring; however, they collect data and information or have subject matter expertise that may be useful to efforts to detect, monitor, or respond to biological events. For example, the Department of Commerce's National Oceanic and Atmospheric Administration collects meteorological data that may be used by NBIC to help inform officials about the progression of an outbreak based on weather patterns. Five of the 19 NBIS partners we interviewed and surveyed are support biosurveillance agencies.

Biosurveillance information consumers: Biosurveillance information consumers generally do not have missions that directly involve disease detection and monitoring and generally do not produce information that is useful for biosurveillance. However, they consume such information because biological events can affect their main mission and they may have a particular role to play in responding to an event. For example, officials from DOT stated that their department consumes biosurveillance information because biological events can affect the national transportation system and transporting people and items through a contaminated area can further exacerbate a biological event. Three of the 19 NBIS partners we interviewed and surveyed are biosurveillance information consumers.

Figure 1 and appendix I describe the missions and biosurveillance responsibilities of the 19 NBIS partners we interviewed and surveyed.
To fulfill its Analyzer role, NBIC develops a variety of products to enable early warning and enhance situational awareness of biological events, but the center faces challenges related to its ability to develop products that contribute meaningful information and has difficulty obtaining biosurveillance data.

NBIC's efforts to fulfill its Analyzer role include a variety of products and activities designed to enable early warning and shared situational awareness. As part of its daily analytic process, NBIC analysts review two main types of information: (1) open source, such as media reports; foreign, national, state, and local government agency websites; and industry and professional association reports and websites, and (2) partner-provided. First, to identify relevant open-source information, NBIC uses both automated and manual methods. For example, in addition to conducting manual searches of media, NBIC analysts also access commercial open-source data feeds such as HealthMap and DOD's National Center for Medical Intelligence's (NCMI) Arkham data feeds, which provide open-source information in more than 80 languages that is translated automatically into English. Second, NBIC also relies on finished analytical products from NBIS partners, which may be obtained directly from partners or are available publicly on agency websites. These products are usually received or obtained as written reports that represent the agency's analysis and interpretation of the raw data that it collects on a routine basis or for a specific event. NBIC analysts may also submit requests for information to NBIS partners.

24 hours a day: Department of Homeland Security's Office of Health Affairs Watch Desk evaluates open-source biosurveillance information using a variety of tools and sources, which will inform the development of future products. NBIC analysts review watch desk information and reports from federal partners to identify items of potential significance.
Late morning: Analysts conduct a daily internal NBIC discussion to determine items of significance and decide what additional actions are required.
Afternoon: The Daily Biosurveillance Review is distributed to internal recipients, including agency liaisons, and serves as a tool that generates a record of what was known, at what time, and from what source.
Late afternoon: The Monitoring List is distributed via e-mail to federal partners and other domestic and international stakeholders to update them on items being monitored, as well as other reports published by NBIC.

NBIC produces a variety of regular products to enable early warning and enhance situational awareness, including its daily Monitoring List, Biosurveillance Event Reports, and Special Event Reports; it also responds to requests for information from its partners. NBIC's Monitoring List is a daily e-mail that contains brief summaries on acute, ongoing biological events of concern or interest to the NBIS partners. Biosurveillance Event Reports provide additional detail on specific events.
For example, throughout 2015, NBIC produced such reports on the Middle East respiratory syndrome coronavirus, highly pathogenic avian influenza, Ebola virus disease, measles, and porcine epidemic diarrhea virus, among others. NBIC has also produced Special Event Reports at the request of state and local authorities in advance of mass gathering events, such as the Super Bowl and the Little League World Series. NBIC also responds to requests for information from the NBIS partners and other stakeholders.

According to agency officials we spoke with and strategic documents we reviewed, NBIC faces challenges in implementing its Analyzer role, including its limited ability to develop products that contribute meaningful information to its partners and difficulty obtaining biosurveillance data.

Products That Provide New Meaningful Information

Primary biosurveillance agencies generally reported that NBIC's products do not provide them with meaningful information because those products contain information that they either already generate themselves or could obtain directly from other NBIS partners more quickly. As illustrated in figure 2, during our structured interviews, 8 of 11 primary biosurveillance agencies reported that NBIC products and activities help their agency identify potential or ongoing biological events (i.e., perception) to little or no extent. For example, officials from 2 of these agencies stated that much of the information in NBIC's products related to their respective domains does not inform their biosurveillance activities because this information generally originates from reports that their agencies release publicly.

Partners also reported that NBIC's products contain much information of which they are already aware or could access regardless of their participation with NBIC. For example, as illustrated by figure 3, EPA officials reported that their agency obtains information that enhances all three elements of situational awareness from seven agencies, including APHIS, CDC, and NBIC, among others. Further, EPA officials reported that they obtain information that enhances their comprehension and projection of biological events from DOI's Office of Emergency Management and NCMI.

NBIC Monitoring List: a daily e-mail to inform partners of new and ongoing events that NBIC is currently monitoring. These e-mails are sent to federal; state, local, tribal, and territorial; and other domestic and international stakeholders.
Biosurveillance Event Report: a more detailed report focused on a specific event. These reports provide basic event details (e.g., pathogen, location, affected populations, and event progression) and describe interagency actions, among other things. These reports are distributed via e-mail as well as the Homeland Security Information Network, among others.
Special Event Report: a report requested by government partners, such as state and local governments, to provide a public health assessment for a selected event.
Requests for Information: a biosurveillance information collection and gathering technique through analyst-to-analyst communications that can be submitted through the Department of Homeland Security's National Operations Center, e-mail, phone calls, or a biosurveillance information-sharing portal known as Wildfire.
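To make the daily integration step described above concrete, the following is an illustrative sketch—not NBIC's actual tooling—of how items pulled from open-source feeds and partner reports might be grouped and de-duplicated into Monitoring List entries. The field names and sample items are assumptions for illustration only.

```python
# Illustrative sketch of aggregating feed items into monitoring-list entries:
# group items by (disease, location), keep the newest summary, and record
# which sources contributed. Not a representation of NBIC's actual system.

from collections import defaultdict

def build_monitoring_list(items):
    """Group feed items by (disease, location) and draft one entry per event."""
    grouped = defaultdict(list)
    for item in items:
        grouped[(item["disease"], item["location"])].append(item)
    entries = []
    for (disease, location), group in sorted(grouped.items()):
        latest = max(group, key=lambda i: i["date"])  # ISO dates sort correctly
        sources = sorted({i["source"] for i in group})
        entries.append(f"{disease} ({location}) - {latest['summary']} "
                       f"[sources: {', '.join(sources)}]")
    return "\n".join(entries)

items = [  # hypothetical feed items
    {"disease": "MERS-CoV", "location": "Saudi Arabia", "date": "2015-06-01",
     "summary": "New confirmed cases reported.", "source": "open source"},
    {"disease": "MERS-CoV", "location": "Saudi Arabia", "date": "2015-06-02",
     "summary": "Case count updated.", "source": "partner report"},
]
print(build_monitoring_list(items))
```

The de-duplication step in this sketch reflects the partner criticism discussed below: when grouped items come only from partners' own public reports, the resulting entry adds little new meaning for those partners.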
In figure 3, the size of the circle for an NBIS partner reflects the number of times other primary and support biosurveillance agencies identified that agency as an information source that enhances situational awareness of a biological event.

However, agencies with more limited roles in biosurveillance, such as the biosurveillance support agencies and information consumers, had more favorable views on NBIC's products and activities. For example, as also illustrated by figure 2, 5 of the 8 biosurveillance support agencies and information consumers stated that NBIC's products and activities help their agencies identify potential or ongoing biological events (i.e., perception) to a moderate extent. Officials from some of these agencies reported that they leveraged NBIC products because their own agencies lacked the time, capacity, or infrastructure to regularly review disparate sets of information across multiple agencies and domains. For example, officials from a support biosurveillance agency reported that because the agency did not have the capacity to review all of the relevant biosurveillance information that it collected, NBIC's products filled a critical information gap in its intelligence. Some NBIS partners suggested that NBIC's reports might be useful for state and local entities that might not have access to the same breadth of information or the capacity to integrate biosurveillance information themselves.

Further, as illustrated by figure 4, 5 of the 8 biosurveillance support agencies and information consumers stated that NBIC's products and activities help their agencies understand the nature and scope of emerging biological events (i.e., comprehension) to a great or moderate extent. Officials from these agencies generally stated that NBIC's products were easy to understand and provided useful context for events outside their scope of expertise. For example, officials from a support biosurveillance agency praised one of NBIC's Biosurveillance Event Reports on the recent outbreak of highly pathogenic avian influenza, which included information from CDC, APHIS, and USDA's Food Safety and Inspection Service (FSIS) on how the disease would affect the food chain, whether it could cross into the human population, and what information was known locally.

Regardless of their role in biosurveillance, partners noted that NBIC's products and activities do not generally contribute new meaning or analysis—that is, insights that cannot be gleaned in isolation. For example, as shown in figure 5, 10 NBIS partners stated that NBIC's products and activities enhance their agencies' ability to carry out their biosurveillance roles and responsibilities to little or no extent, 4 responded to a moderate extent, and 5 responded that they did not have a basis to judge. Generally, partners that responded to little or no extent noted that NBIC products and activities do not, for example, identify trends and patterns or describe potential impacts of a biological event. For example, officials from a primary biosurveillance agency stated that NBIC's products and activities do not "connect the dots" between dissimilar information, provide novel synthesis of information, or recommend possible courses of action.

Further, as shown in figure 6, 11 of the 19 NBIS partners stated that NBIC's products and activities help their agencies understand how emerging and ongoing biological events are likely to progress into the near future (i.e., projection) to little or no extent.
Officials noted that forecasting and projection are inherently difficult, but suggested that NBIC could develop other kinds of analysis that would be useful for the projection element of situational awareness. For example, officials from a primary biosurveillance agency suggested that NBIC could integrate more data and information from other DHS components into its reports, which would help to provide a homeland security perspective on biological events. Officials from another agency stated that NBIC could combine information across multiple domains on a local disease outbreak with known travel and weather patterns to predict how a disease might spread.

NBIC officials stated that the center is working to improve its products and its ability to contextualize the information it collects from open sources, and has sought partner input to do so. For example, beginning in late June 2015, partly on the basis of feedback the center received from its November 2014 Federal Stakeholder Survey, NBIC modified its daily Monitoring List to include an up-front summary that identifies the status of ongoing biological events as worsening, improving, unchanged, or undetermined. During our interviews with the NBIS partners, several agency officials had suggested that the center make such a change to this product because it would help them more quickly scan the report to determine which events might be worth further examination. Although we are not able to analyze the effect this change had on partner views because the change took place after our interviews, it appears to be a positive step in response to one issue that partners raised. Further, NBIC officials noted that the center is also working to better integrate forecasts and projections into its products and activities. Specifically, NBIC is participating in a working group led by the Office of Science and Technology Policy to support the priorities articulated in the S&T Roadmap by developing a common interagency vision for specific federal capabilities and practical next steps leading to the application of reliable infectious disease forecasting models in decision-making processes.

Data that NBIC could use to identify and characterize a biological event of national concern using statistical and analytical tools, as called for in the 9/11 Commission Act, are limited. Apart from searches of global news reports and other publicly available reports generated by NBIS partners, NBIC has been unable to secure streams of raw data from multiple domains across the biosurveillance enterprise that would lend themselves to near-real-time quantitative analysis that could reveal unusual patterns and trends. NBIC acknowledged in its strategic plan that the data required to carry out its mission as envisioned in the 9/11 Commission Act either do not exist or are subject to a variety of information sharing challenges that make a large information technology-centered solution less feasible than originally imagined. NBIC and NBIS partners noted that there were several kinds of data that could be useful for this kind of biosurveillance integration, but these data may not exist or may not be in a usable form. For example, EPA officials stated that under the existing statutory framework, the federal government does not collect real-time data on water quality and contamination from drinking water utilities. Instead, water systems report violations of drinking water standards to EPA on a quarterly basis. In addition, officials from U.S. Customs and Border Protection (CBP) and DOI's U.S.
Geological Survey (USGS) reported that there is a significant gap in the availability of animal health data, particularly data on wildlife disease, which makes it difficult to fully understand the dynamics of zoonotic diseases. NBIC officials also noted that other kinds of data are maintained in formats that make them difficult to analyze, such as paper health records. Further, the S&T Roadmap noted that many livestock health records are held by private industry and are not broadly accessible or standardized in a manner that would make such data usable.

In our survey, few—5 of 19—NBIS partners reported that they shared raw data with NBIC, and during structured interviews NBIS partners discussed a variety of challenges they faced in sharing certain data with NBIC. Some agencies are reluctant to share their data with NBIC because they are unsure how the information will be used. For example, officials from a primary biosurveillance agency stated that the agency does not share some data with NBIC because sharing such information too broadly might have substantial implications for agricultural trade or public perception of safety. Further, officials from another primary biosurveillance agency noted that there is sometimes reticence to share information and data with components of DHS because, given the department's roles in law enforcement and national security, the information might be shared outside of the health security community in a way that lacks appropriate context and perspective. Other agencies stated that they are unable to share data for regulatory or legal reasons, or because appropriately protecting the data would take too long. For example, officials from HHS's Food and Drug Administration (FDA) stated that their agency is unable to share some of its data on food and drug contamination because this information is confidential commercial information that FDA is restricted from sharing outside the agency. According to CDC officials, their agency receives electronic data from state, territorial, local, and tribal sources for a variety of programs and purposes that are covered by data use agreements that do not allow CDC to share the data outside the terms of those agreements and as allowed or required by applicable federal laws, such as the Privacy Act of 1974 and the Freedom of Information Act. Pursuant to federal law and the terms of these agreements, CDC may share aggregated information as long as it protects an individual's privacy. However, according to CDC officials, some of these data cannot be shared without extensive, time-consuming work to appropriately redact the data to ensure that individuals may not be identified and that privacy is protected, which results in the release of the data being postponed to the point that the data are no longer actionable. Further, officials from VA noted that the Health Insurance Portability and Accountability Act of 1996 and its implementing regulations also restrict their ability to share some data because they require appropriate safeguards to protect the privacy of personal health information and set limits and conditions on the uses and disclosures that may be made of such information without patient authorization.

Concerns over data are a long-standing issue with NBIC and the federal biosurveillance integration mission.
We have previously reported that scant availability of data throughout the federal government, a lack of trust, and partners' concerns over sharing sensitive information with NBIC were major barriers to NBIC's ability to obtain the data and other information that it needed to support data integration. NBIC officials recognize that these barriers inhibit the ability of their partners to share some data with the center, but noted that they are trying to work with some of their partners to address these issues. For example, NBIC is currently developing a project with VA to determine how the center can use VA's data for biosurveillance purposes while ensuring that sensitive data are properly managed.

To fulfill its Coordinator role, NBIC has established procedures that occur daily, weekly, and as emerging or significant biological events occur, but the center faces challenges related to the participation of NBIS partners in the center's activities, the provision of partner personnel to NBIC, and competing structures for convening NBIS partners. NBIC's coordination mechanisms include the following:

Daily Analysts' Call: a daily teleconference in which NBIC analysts and other participants discuss newly identified potential and active hazards.

Biweekly Reporting Call: Until recently, NBIC hosted a Weekly Reporting Call to present and discuss the most significant biosurveillance events from the previous week. In response to feedback the center received from its November 2014 Federal Stakeholder Survey, NBIC changed the format in January 2015 to a biweekly call—the Interagency Biosurveillance Presentation Series—with a rotating responsibility among the NBIS partners to provide a featured speaker on a relevant issue, as well as an opportunity to pose questions to NBIC and the other partners on ongoing or potential biological events.

National Biosurveillance Integration System (NBIS) Protocol: a mechanism that brings federal partners together on a short-notice teleconference to provide information sharing on an emerging or significant biological event.

Wildfire: an interagency information-sharing portal—housed within the National Center for Medical Intelligence—through which participating entities can request information among a trusted subset of interagency subject matter experts within the federal biosurveillance community.

Jointly Developed Products: NBIC brings together its multidisciplinary partners to develop joint products that enhance understanding of new or potential biological events. For example, NBIC coordinated with APHIS to facilitate a study by DHS's Homeland Infrastructure Threat and Analysis Center that modeled the potential biological and economic impacts of the kudzu bug, a pest that presents a potential risk to the U.S. soybean crop if pesticide applications were to fail (a simplified illustration of this kind of calculation appears below). NBIC has also contributed to joint classified intelligence products developed by DHS's Office of Intelligence and Analysis by, for example, describing the scope and context of a biological agent in these products.
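To make concrete what such an impact-modeling study involves, the following is a minimal sketch, in Python, of an expected-loss calculation under uncertainty. Every value and range below (acreage, yield, price, and the infestation and yield-loss ranges) is a purely hypothetical assumption chosen for illustration; none is drawn from the Homeland Infrastructure Threat and Analysis Center study, which we did not evaluate in detail.

import random

SIMULATIONS = 10_000
ACRES_AT_RISK = 80_000_000   # hypothetical U.S. soybean acreage
YIELD_BU_PER_ACRE = 45       # hypothetical average yield (bushels/acre)
PRICE_PER_BU = 10.0          # hypothetical price (dollars/bushel)

losses = []
for _ in range(SIMULATIONS):
    # Uncertain share of acreage infested if pesticide applications fail.
    infested_share = random.uniform(0.05, 0.30)
    # Uncertain per-acre yield loss on infested acreage.
    yield_loss_share = random.uniform(0.10, 0.50)
    lost_bushels = (ACRES_AT_RISK * infested_share
                    * YIELD_BU_PER_ACRE * yield_loss_share)
    losses.append(lost_bushels * PRICE_PER_BU)

losses.sort()
print(f"Median loss: ${losses[len(losses) // 2] / 1e9:.1f} billion")
print(f"90th percentile loss: ${losses[int(len(losses) * 0.9)] / 1e9:.1f} billion")

An actual study would replace these placeholder ranges with empirically grounded estimates and would typically model the pest's spread over time as well, but the basic structure—propagating uncertain inputs through a loss calculation to a distribution of outcomes—is the same.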
According to agency officials we spoke with and strategic documents we reviewed, NBIC faces challenges that affect its ability to implement its Coordinator role, including the limited participation of NBIS partners in NBIC activities, limited provision of partner personnel to NBIC, and competing structures for convening NBIS partners.

Although NBIC has implemented its Coordinator role through a variety of interactions and procedures, partner participation in key NBIC activities has generally been limited. For example, as shown in figure 7, about half of the NBIS partners (9 of 19) reported in our survey that they regularly participated in NBIC's Weekly Reporting Calls between August 2012 and December 2014, and even fewer (2 of 19) reported regularly participating in the Daily Analysts' Calls. Some of the agencies that reported not regularly participating in the daily and weekly calls are primary biosurveillance agencies that are generally considered to be among the lead generators of biosurveillance information in their respective domains. Officials from one of these agencies noted that much of the information presented during the daily and weekly calls was already provided in the daily Monitoring List e-mail, and therefore the calls provided relatively little new information.

The National Strategy for Biosurveillance notes that in an environment of reduced resources, it is important to pursue activities that add value for all participants, and officials across the NBIS noted that the modification to the weekly call was a positive step. For example, officials from a primary biosurveillance agency stated that the change provided them with an opportunity to advertise the services their agency provides. Officials from another primary biosurveillance agency noted that the new presentation-focused format is more likely to benefit all partners across the NBIS. NBIC officials stated that they plan to request feedback from the partners in the future on the new format of these calls to determine what, if any, additional changes are needed.

Limited Provision of Partner Personnel

Another challenge faced by NBIC concerns its ability to obtain personnel from its partners as originally envisioned in the 9/11 Commission Act. NBIC officials told us that effective biosurveillance depends on subject matter experts to interpret events and place them in context. Although all of the NBIS partners provide key points of contact for NBIC, few (3 of 19) partners provided a dedicated liaison as of July 2015. Officials across the NBIS partners provided various reasons why their agencies did not provide a liaison. For example, officials from one primary biosurveillance agency stated that for their agency, and likely other agencies as well, it is difficult to provide personnel to NBIC on a full- or part-time basis because of their own resource constraints. Further, officials from a support biosurveillance agency noted that the lack of clarity about NBIC's value to its partners is a barrier to providing the center with detailees. In order to obtain more personnel from its partners, NBIC has agreed to partially fund some of the liaisons. For example, according to NBIC officials, the center already funds liaisons from VA, DOI, and USDA's APHIS and is working to establish a liaison with CDC.
According to NBIC, liaisons have provided great benefit to the center, such as by providing special knowledge of their agency's roles and areas of responsibility and by providing NBIC with the critical ability to reach back into their respective agency or department. According to the officials, NBIC would like to more fully leverage the capabilities of its partners and obtain a liaison from each NBIS partner; however, budget constraints currently prohibit NBIC from obtaining fully funded liaisons from each partner.

Competing Structures for Convening Partners

Federal partners noted that they were unclear about the differences between two of the major structures used for convening federal stakeholders to discuss emerging biological events. The NBIS Protocol, as previously identified, is managed by NBIC, while the other, the Biological Assessment Threat Response (BATR) Protocol, is managed by the White House's National Security Council Staff. According to the NBIC Strategic Plan, the BATR Protocol is a national-level interagency consultation process with mid-to-high-level decision makers that is designed to achieve coordinated action and desired outcomes to prevent, protect from, and respond to high-consequence bioterrorism and biosecurity threats. According to NBIC, each of the protocols is designed to serve a different purpose for a different set of participants according to their respective roles in the recognition of, and response to, a biological event. The NBIS Protocol is a mechanism to bring together federal analysts and operators for information sharing early in a biological event's discovery and development phase, whereas the BATR Protocol is designed to enable the most senior level of federal leadership to achieve situational awareness to effectively coordinate available resources for incident response. Although we did not ask a specific question about the two protocols in our structured interviews, about a quarter of the NBIS partners (5 of 19) we interviewed were unclear about the differences between the two protocols. For example, in structured interviews, officials from two of the agencies noted that the protocols appeared to serve the same purpose or were attended by the same officials.

To fulfill its Innovator role, NBIC has funded several pilot projects, sought new data sources, and made efforts to enhance its IT system, but faces challenges related to its limited resources and the varying needs of its partners. NBIC's pilot projects include the following:

A pilot project on uncertainty methods: this project supported the rapid characterization and mitigation of disease outbreaks. The resulting product, which was completed in October 2014, provides biosurveillance analysts with procedures for selecting and applying uncertainty methods as well as a standardized format for reporting information.

National Collaborative for Bio-Preparedness (NCB-Prepared): a pilot project sponsored by NBIC and the University of North Carolina at Chapel Hill, among others. According to NBIC, a September 2014 prototype was capable of real-time analysis of health data in a geographic format, enabling users to search data, for example, on clinical symptoms and text within health records, using data from Emergency Medical Services, 911 phone calls, and Poison Control Centers. According to NBIC, this pilot program is intended to be offered to state and local governments and the private sector.

Social media pilot projects: NBIC has conducted several pilot projects to examine the extent to which social media can augment existing biosurveillance detection and analysis.
The pilot projects assessed the feasibility of using commercial and government off-the-shelf systems to aggregate social media information for biosurveillance. The most recent pilot, initiated in fiscal year 2012, funds the Department of Defense's Naval Surface Warfare Center to develop analytical techniques to improve the use of social media data for biosurveillance. NBIC plans to conclude the project at the end of fiscal year 2015 and transition the project's algorithms for operational use.

The 2012 NBIC Strategic Plan also identified a number of pilot projects designed to assess the extent to which such projects could be adopted full-scale. According to the plan, each pilot project is intended to improve collaboration or information sharing. According to NBIC, these pilots are routinely assessed and evaluated to determine what is most helpful and effective; those that prove successful will be integrated into normal operations, while those that are not will be discontinued. For example, NBIC has jointly funded the National Collaborative for Bio-Preparedness pilot project to develop a comprehensive, state-level system to analyze public health trends and detect emerging biological incidents by using data analytics and anomaly algorithms. Further, NBIC has also funded three pilot projects examining the feasibility of using open-source data from various social media applications in order to identify possible health trends. NBIC has completed two of these pilots, and one is ongoing.

NBIC has been seeking new sources of data and information in order to fulfill its mission for early warning and shared situational awareness of acute biological events, including data and information from other DHS components and NBIS partners, as well as classified information. First, in September 2013, NBIC analyzed the usefulness of department-wide absenteeism data from DHS's Office of the Chief Human Capital Officer, which could be an indicator of an emerging epidemic. Based on an analysis of 20 months of DHS workforce data from 2012 and 2013, the study concluded that absenteeism data could be a useful component in biosurveillance, as understanding the differences between normal leave behavior and expected rises in leave behavior during peak flu seasons would help in establishing baseline values for comparison (a simple illustration of this baseline-comparison approach appears below). In August 2014, NBIC, working with DHS's Science and Technology Directorate and Lawrence Livermore National Laboratory, evaluated the usefulness of DHS components' data systems as potential biosurveillance data sources. The assessment identified two CBP databases as the most useful to NBIC's mission, and in June 2014, NBIC funded a part-time liaison to CBP's Office of Intelligence to determine the extent to which NBIC can use CBP databases for biosurveillance purposes. Second, NBIC has also sought to obtain new sources of data from NBIS partners and other stakeholders. For example, since July 2011, VA has provided NBIC with a liaison to, among other responsibilities, identify ways NBIC can use VA's patient healthcare information to support its early detection and situational awareness mission. Finally, according to NBIC officials, the center has enhanced its process for analyzing intelligence information and reviews various intelligence sources to supplement, corroborate, or provide additional context to the biosurveillance items identified through other sources.
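The baseline-comparison approach described in the absenteeism study above—contrasting observed leave behavior with expected seasonal levels—can be illustrated with a minimal sketch in Python. The counts and threshold below are hypothetical assumptions for demonstration only; we did not review the study's actual methods, data, or code.

from statistics import mean, stdev

# Hypothetical absentee counts for the same calendar week in prior
# years (e.g., a peak flu-season week), forming the baseline.
baseline = [310, 295, 330, 305, 320]

def is_anomalous(observed: int, history: list[int], z_threshold: float = 2.0) -> bool:
    """Return True if the observed count exceeds the historical mean
    by more than z_threshold standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    return (observed - mu) / sigma > z_threshold

print(is_anomalous(330, baseline))  # within normal variation -> False
print(is_anomalous(450, baseline))  # large spike -> True

An operational system would also need to account for benign drivers of leave behavior, such as holidays and pay periods, which is consistent with the study's emphasis on first establishing reliable baseline values for comparison.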
As an example of this intelligence review process, an NBIC intelligence analyst reviews all-source intelligence information to identify potential topics of interest, such as indications of novel infectious disease or terrorism, and, if necessary, reaches back to partners in the intelligence community for further information.

NBIC officials noted that the center's recent focus is on building its internal IT infrastructure rather than on pilot projects. For example, through its current Biofeeds project with the Pacific Northwest National Laboratory, NBIC is seeking to build a visual and text analytics capability to enable the center to more effectively and efficiently identify relevant information in open-source data. Officials also noted that NBIC is partnering with the Defense Threat Reduction Agency on the Biosurveillance Ecosystem project to build a collaborative analytic workbench for the center. Further, officials stated that NBIC has obtained an IT program manager as a detailee from the Transportation Security Administration to help build the center's internal IT program.

According to agency officials we spoke with and strategic documents we reviewed, NBIC faces challenges that affect its ability to implement its Innovator role, including its limited resources and the varying needs of partners. Although we did not ask a specific question about resource limitations, officials from 9 of the 19 NBIS partners identified limited resources as a challenge NBIC faces in developing new biosurveillance tools and technology. From fiscal year 2012 through 2015, NBIC's budget ranged from $10 million to $13 million annually. Officials from 2 primary biosurveillance agencies noted that NBIC's budget limits its ability to enhance its existing technology systems or to invest in innovations such as disease event modeling. Further, officials from 3 primary biosurveillance agencies more generally expressed concerns regarding the imbalance between the size and nature of NBIC's mission, including its role as an innovator, and the resources that it had available to achieve it. NBIC officials stated they have never requested a budget increase because their larger DHS office, the Office of Health Affairs, has experienced budget reductions, and an increase for NBIC would require a decrease for another program. However, NBIC officials noted that they would likely use any increase in the center's budget to help develop more analytical tools for itself and its partners.

Related to its limited resources, NBIC also faces challenges prioritizing its innovation efforts because its partners have diverse, and sometimes conflicting, needs. The S&T Roadmap noted that active collaboration for biosurveillance presents challenges because stakeholders have varying missions and roles. As previously noted, NBIC asked its partners to identify existing capability gaps. The 13 submissions covered a wide variety of biosurveillance issues and domains, such as wildlife disease surveillance, integration of pharmacy data, and analysis of Medicare claims data. However, although NBIC asked partners to prioritize the 13 submissions identified as existing capability gaps, the proposal that was selected had been ranked third by the partners, and officials from a primary biosurveillance agency stated that it was unclear why the higher-ranking proposals were not selected. Further, officials from a primary biosurveillance agency suggested that NBIC conduct its own needs analysis to determine what tools and technology NBIC could invest in.
NBIC officials noted that the third-ranked proposal was selected because it was the highest-ranked proposal that was "shovel ready," thereby allowing funds to be applied when funding became available, whereas the top two proposals were not. According to NBIC officials, future investments will be informed by the S&T Roadmap.

Although 13 of the 19 NBIS partners stated in our structured interviews that the concept of having a federal entity whose mission is to serve as the integrator of national biosurveillance information across agencies and disease domains is very or moderately important, some also expressed doubts about the feasibility and practicality of this mission. Although we did not specifically ask a question about the practicality of NBIC's mission, about a third of the NBIS partners (7 of 19) expressed skepticism and doubts about the feasibility of NBIC's mission, including whether federal integration of biosurveillance information could actually achieve early warning and situational awareness of biological events. Among the specific reasons officials cited for the skepticism was uncertainty that the current model of biosurveillance integration was the most effective investment for strengthening the national biosurveillance capability. For example, officials from one agency noted that while the concept makes sense intuitively, there is no reliable evidence, such as a peer-reviewed study, that has confirmed the viability of the concept, nor has there been a large-scale biological threat that has been detected through integration; moreover, such a system—by virtue of its being federally based—would lack timely detection and response capabilities because events occur at the local level. Officials from another agency questioned the feasibility of NBIC's mission because the data and technology that are currently available do not provide for the accurate projection of biological events or facilitate the provision of early warning. Additionally, an NBIC official told us that the ability to achieve early detection of emerging events—especially unexpected or novel events—is dubious because most of the tools and techniques used in surveillance rely on contrasting current conditions with known baseline trends and patterns. As an event emerges, however, surveillance practitioners are not necessarily going to be focused on those patterns and trends until something prompts their attention. Moreover, when a biological event is novel, its patterns and trends are not yet known.

We have previously reported on skepticism on the part of some of the NBIS partners regarding the value of the federal biosurveillance mission as well as NBIC's role in that mission. In our 2009 report, most of the NBIS partners we interviewed at that time expressed uncertainty about the value of participating in the NBIS or confusion about the purpose of NBIC's mission. For example, officials from 1 of the partners stated that their agency was unsure whether NBIC contributed anything to the federal biosurveillance community that other agencies were not already accomplishing in the course of carrying out their biosurveillance-relevant missions.

We, the NBIS partners, and other major stakeholders in the biosurveillance community acknowledge that no single problem limits NBIC's mission to integrate biosurveillance data. Rather, over the years, several long-standing problems have combined to inhibit the achievement of this mission as envisioned in the 9/11 Commission Act.
Most notably, operationalizing the federal biosurveillance integration concept requires the simultaneous sharing and consideration of information from vastly disparate domains, including health, law enforcement, intelligence, and international partners. However, as noted in the S&T Roadmap, the sharing of this information is limited and is often not possible.

The challenges previously described illustrate that NBIC faces significant obstacles in implementing its roles as a biosurveillance integrator as originally described in the 9/11 Commission Act. Below, we discuss options for policy or structural changes that could help better fulfill the biosurveillance integration mission. We identified these options and their benefits and limitations on the basis of the roles of a federal-level biosurveillance integrator we identified in the 9/11 Commission Act, NBIC's strategic plan, and the perspectives of the NBIS partners obtained during our structured interviews. These options are not exhaustive, and some options could be implemented together or in part. In developing these options, we did not evaluate the financial implications of implementing each option, to the extent they are knowable, but we acknowledge that the options are likely to result in an increase, decrease, or shifting of funding based on the changes described.

Developing meaningful information not otherwise available: This option would address some of the challenges NBIC faces in implementing its Analyzer role, such as access to data from the NBIS partners, and would better position the center to develop meaningful information that could not be gleaned in isolation, potentially leading to earlier warning of emerging events and shared situational awareness.

Capitalizing on new data sources and analysis techniques: Focusing on providing the resources, infrastructure, and frameworks for data sharing may provide the foundation to capitalize on future advancements in data analytics, including big data analysis and electronic health records, to mine data for emerging patterns.

Uncertain need: The probability that a disease event with significant national consequences would occur in such a way that it would be detected more quickly by overlaying various data streams and applying statistical and analytical tools to them is not known.

Uncertain data availability: There may not be a significant amount of meaningful data available that is not already being provided to facilitate advanced analytical techniques. For example, although partners identified other potential data sources that could contribute to a more robust integration tool, such as water contamination and wildlife disease data, it is unknown whether such data could be collected and managed to make a meaningful contribution, and if they could, at what cost.

Unproven concept: Even with access to more data, it is unclear whether a federal biosurveillance integrator would be able to identify patterns or connections that would lead to earlier warning of emerging events or reduce the time it takes to discover, prevent, or respond to a potentially catastrophic event, or that doing so would merit the associated costs. Finding patterns and trends without knowing specifically what to look for is challenging, and about a third of the NBIS partners (7 of 19) expressed skepticism and doubts about the feasibility of NBIC's mission, including whether federal integration of biosurveillance information could actually achieve early warning and situational awareness of biological events.
Unknown impact of earlier detection: If NBIC were able to discern signals that gave warning of an emerging event, there is no guarantee that this would significantly decrease the amount of time it would take federal partners to confirm the warning and implement response actions.

Increased costs: Creating the enterprise architecture, both within NBIC and across the NBIS, that would facilitate transfer and computer-aided analysis of data would likely require a significant investment in technology, as well as skilled personnel with data analytic, legal, and regulatory expertise. Although we have not specifically assessed the costs of these options, such costs, at least in the near term, would likely exceed NBIC's current annual budget.

Clear leadership: This option would create clear leadership across the interagency for developing and implementing biosurveillance policy in general and in response to specific biological events, which may also encourage partners to more fully participate in NBIC activities, such as regularly attending NBIC's Daily Analysts' and Biweekly Reporting Calls.

Better institutional connection: NBIC officials have stated that the current liaisons have provided great benefit to the center. Ongoing interaction among more dedicated liaisons from various agencies may strengthen biosurveillance subject matter expertise and could enhance communication across all the agencies.

Routine, institutionalized channels to monitor for emerging trends and patterns: Clarifying the federal integrator's role in routinely convening and drawing on the analytical capacity of the various pockets of federal expertise across the NBIS could enhance the ability of NBIC to go beyond daily surveillance and monitoring activities to recognize connections and generate meaningful insights that may not be gleaned in isolation.

Enhanced accountability for implementing the National Strategy for Biosurveillance: Formally vesting a federal entity with responsibility for leadership of the national biosurveillance enterprise would fill a long-standing need to institutionalize and create accountability for common goals and deliberate, results-driven, risk-based investment across the enterprise. Because the mission responsibilities and resources needed to develop a national biosurveillance capability are dispersed across a number of federal agencies, efforts could benefit from a focal point to provide sustained leadership that helps direct interagency efforts to invest in and implement new and existing programs in a way that ensures generation of meaningful data with the potential to discover emerging biological events with potentially catastrophic consequences.

Role conflict: Some of these responsibilities overlap with responsibilities that have historically been the purview of the National Security Council Staff, and legislative direction to assume these responsibilities could create more role conflict and confusion unless authority, roles, and responsibilities were very clearly designated.

Authority and legitimacy: It may be difficult for an agency at NBIC's level to successfully influence decision making across the interagency. For example, discussions we had with some NBIS partners demonstrated that both DHS and NBIC have encountered and may continue to encounter issues with perceived legitimacy in the health security arena.
New tools and technology: NBIC could foster the development of tools and technology that benefit multiple federal partners and other members of the NBIS (e.g., state and local health agencies), thus enhancing the overall national biosurveillance capability. For example, the 2013 S&T Roadmap identified the need to strengthen detection by developing new modeling and ecological forecasting approaches that could enhance current ways of predicting disease outbreaks and determining likely impacts when a threat is detected. Specifically, the Roadmap calls for developing methods that integrate traditional monitoring (i.e., pathogen, environmental, and health) with background data (i.e., meteorological and population dynamics).

Coordination of research and development efforts: The S&T Roadmap notes that there are dozens, and possibly hundreds, of biosurveillance initiatives and pilot projects that have been implemented at local, state, regional, and national levels; NBIC would be well positioned to help coordinate and deconflict biosurveillance research and development across the interagency, which would help to avoid unnecessary duplication, overlap, and fragmentation of effort. Further, the S&T Roadmap identifies 14 research priorities, many of which would benefit from coordination across the federal government, as well as with state, local, and private entities. For example, one of the research priorities is to develop multilateral communication mechanisms among the various levels of government and the private sector to enable timely decision making. Effectively addressing such a research priority would likely require the collaboration of multiple federal and nonfederal partners, including HHS, USDA, and DHS, as well as healthcare providers and international partners, among others.

Increased costs: Although we have not specifically assessed the costs associated with the options, the costs of supporting the development of new tools and technology would likely exceed NBIC's current annual budget.

More research and development expertise: Although NBIC has engaged in some pilot projects that develop tools and technology, a national integrator that focuses on innovation would likely need to acquire more expertise in research and development.

Significant restructuring: In comparison with its other roles, NBIC's role as an Innovator is the least well defined in the 9/11 Commission Act, and NBIS partners noted that the center's current budget limits its ability to fulfill this role. Focusing attention on this role may represent a significant mission shift from the status quo and may require very different sets of resources and procedures.

NBIC has made progress and may continue to do so: Although most (10 of 19) federal partners stated that NBIC has limited impact on their ability to carry out their biosurveillance roles and responsibilities, 12 of 19 NBIS partners interviewed noted that NBIC has made improvements in its products, outreach, coordination, and other activities. Further, in recent years, NBIC has been able to obtain or partially fund liaisons from other agencies. Establishing itself as a trusted and effective federal integrator with limited direct authority is a difficult task, and the center and its NBIS partners may merely need more time to evolve their roles and relationships to realize the full potential of the current NBIC as the federal biosurveillance integrator.
Some agencies currently find value in NBIC's products: Agencies with more limited roles in biosurveillance, such as biosurveillance support agencies and information consumers, generally stated that they value NBIC's products because their own agencies do not have enough resources to review biosurveillance information across multiple agencies and domains. Further, NBIC officials noted that the center's products benefit some of their nonfederal stakeholders that have limited resources for biosurveillance, such as state, local, tribal, and territorial agencies. For example, as of July 2015, NBIC's daily Monitoring List e-mail was distributed to 338 individuals representing state, local, tribal, and territorial entities, including state departments of health and agriculture, fusion centers, and police departments.

Data challenges: NBIC will likely continue to face challenges in obtaining all the biosurveillance data it needs to effectively apply statistical and analytical tools to identify and characterize biological events of national concern in as close to real time as practicable, per requirements in the 9/11 Commission Act.

Partners remain skeptical of NBIC's value: NBIC has implemented our recommendation to create a strategy, in partnership with the NBIS agencies, that better defines its mission and to focus on other collaborative practices. Nevertheless, NBIS partners remain skeptical of NBIC's value. As previously shown in figure 5, few of the NBIS partners (4 of 19) we interviewed stated that NBIC's products and activities enhanced their agency's ability to carry out its biosurveillance roles and responsibilities. Further, as illustrated in figure 8, 8 of 19 NBIS partners we interviewed stated that NBIC is achieving its mission to little or no extent. It is unclear whether additional time, or what additional actions, would improve partners' views of NBIC's overall value to the national biosurveillance capability.

Cost savings: Given that most federal partners stated that they integrate some biosurveillance information themselves and that NBIC has limited impact on their ability to carry out their biosurveillance roles and responsibilities, the cost of operating NBIC may not be worth its benefits.

Officials report that a federal integrator is important: Although federal partners generally thought that NBIC's products and activities did not provide meaningful new information, they largely thought that the concept of having a federal entity to integrate biosurveillance information across the federal government was important. Specifically, in our structured interviews, 13 of the 19 NBIS partners stated that the concept of having a federal entity whose mission is to serve as the integrator of national biosurveillance information across agencies and disease domains is very or moderately important.

Potential loss of investment: As previously noted, 12 of 19 NBIS partners stated that NBIC has made improvements in its products, outreach, coordination, and other activities. Defunding NBIC could create a loss of investment, institutional learning, and progress made toward developing a federal biosurveillance integrator, which may need more time to evolve to become effective.
Another integrator may experience similar challenges: Even if one of the other primary biosurveillance agencies were designated as the federal biosurveillance integrator, that entity may still find it difficult to overcome organizational boundaries and engender agency cooperation, given that multiple agencies have key biosurveillance responsibilities.

We provided a draft of this report for review and comment to DHS and the 13 other departments and agencies that compose the NBIS—the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, the Interior, Justice, State, Transportation, and Veterans Affairs, as well as EPA, ODNI, and USPS. DHS provided written comments on September 16, 2015, which are summarized below and presented in their entirety in appendix III of this report. DHS, EPA, USPS, and the Departments of Agriculture, the Interior, Health and Human Services, and Veterans Affairs provided technical comments, which we considered and incorporated, where appropriate. ODNI and the Departments of Commerce, Defense, Energy, Justice, State, and Transportation did not comment.

DHS expressed appreciation for our recognition of its progress fulfilling our prior recommendations, which were designed to enhance interagency collaboration. DHS also acknowledged the array of challenges detailed in this report and noted some actions it is undertaking to try to address them. DHS noted that the report does not include nonfederal biosurveillance stakeholders in its scope and posited that these stakeholders may find value in NBIC's current products. Although we cannot comment on the extent to which these nonfederal stakeholders value NBIC's current products, we have previously reported on the important role that nonfederal partners play in the biosurveillance enterprise, particularly because most of the resources necessary to generate biosurveillance information are outside of the federal government. The federal departments and agencies with primary biosurveillance roles, as outlined in this report, have a variety of relationships and agreements with nonfederal partners to facilitate partnership and information sharing. We note that NBIC's authorizing legislation calls for NBIC to work with state and local entities in coordination with, and through when possible, its federal partners and these existing relationships.

We are sending copies of this report to the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, the Interior, Justice, State, Transportation, and Veterans Affairs; the Environmental Protection Agency; the United States Postal Service; and the Office of the Director of National Intelligence. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this product are listed in appendix IV.

We surveyed and interviewed officials from 19 federal departments and component agencies, representing 13 of the 14 departments and agencies that compose the National Biosurveillance Integration System (NBIS).
We refer to these 19 agencies as NBIS-partner agencies, and we categorized them into three groups:

Primary biosurveillance agencies: Have major biosurveillance mission responsibilities that include collecting or analyzing biosurveillance information for the purposes of detecting, monitoring, or responding to biological events.

Support biosurveillance agencies: Do not have missions that directly involve disease detection and monitoring; however, they collect data and information or have subject matter expertise that may be useful to efforts to detect, monitor, or respond to biological events.

Biosurveillance information consumers: Generally do not produce information that is useful for biosurveillance, but consume such information because biological events can affect their main mission and they may have a particular role to play in responding to an event.

We developed these categories based on each partner's roles and responsibilities related to biosurveillance. Table 3 includes brief summaries of the NBIS partners, including agency type, mission, domains, and biosurveillance responsibilities.

We conducted a Web-based survey of the 19 NBIS partners to identify the federal agencies from which they obtain information that contributes to their agency's situational awareness of biological events. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey can introduce errors, commonly referred to as nonsampling errors. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such nonsampling errors. We conducted pretests with 3 agencies to help ensure that the questions were clear and unbiased and that the questionnaire did not place an undue burden on respondents. An independent reviewer within GAO also reviewed a draft of the questionnaire prior to its administration. We made appropriate revisions to the content and format of the survey questionnaire based on the pretests and independent review.

The survey was administered on the Internet from March 25, 2013, to May 15, 2013. To increase the response rate, we followed up with e-mails and personal phone calls to agency officials to encourage participation in our survey. We received responses from all 19 agencies in our population (100 percent response rate). Based on comments we received from two agencies, we also conducted two follow-up phone calls with officials at these agencies who responded to our survey to verify their answers to survey questions about the federal agencies from which their agency obtains information that contributes to its situational awareness of biological events. We made appropriate changes to the responses recorded on these officials' questionnaires to reflect the clarifications made during these phone calls. When we analyzed the data, an independent analyst verified all analysis programs. Because this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database and minimizing data entry error.

In the survey, we asked each agency whether it obtains information from each of the other agencies in our population, as well as which types of information it obtains from them (perception-, comprehension-, or projection-related information). For the purposes of this report, we use the definition of situational awareness that the NBIC Strategic Plan uses in the articulation of its mission.
The definition has its basis in the work of Mica Endsley, who described situational awareness as having three elements: (1) perception that a situation has occurred, (2) comprehension of the situation's meaning, and (3) projection of the event's likely course in the near future.

We performed a network analysis of these survey data; network analysis is a quantitative and graphical technique for identifying underlying patterns in a complex system of relationships among entities of interest. Figure 9 illustrates the agency sources from which the primary and support biosurveillance agencies in our survey obtain data that enhance their situational awareness of biological events. For example, officials from the Environmental Protection Agency (EPA) reported that their agency obtains information that enhances all three elements of situational awareness from seven agencies, including the Animal and Plant Health Inspection Service, the Centers for Disease Control and Prevention, and the National Biosurveillance Integration Center, among others. Further, EPA officials reported that they obtain information that enhances their comprehension and projection of biological events from the Department of the Interior's Office of Emergency Management and the National Center for Medical Intelligence.

In addition to the contact named above, Kathryn Godfrey (Assistant Director), Andrew Brown, David Dornisch, Lorraine Ettaro, Eric Hauswirth, R. Denton Herring, Tracey King, Erin O'Brien, Lerone Reid, John Vocino, Brian Wanlass, and Christopher Yun made key contributions to this report.
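As a brief illustration of the tabulation underlying the network figures above (figure 3 and figure 9), the following sketch in Python computes each agency's "in-degree"—the number of other agencies that cited it as an information source—which is the quantity reflected in the size of each circle. The responses shown are hypothetical placeholders, not the actual survey data.

from collections import Counter

# Hypothetical survey responses: each agency maps to the agencies
# it cited as sources of biosurveillance information.
cited_sources = {
    "EPA": ["APHIS", "CDC", "NBIC", "NCMI"],
    "VA": ["CDC", "NBIC"],
    "USGS": ["APHIS", "CDC"],
}

# Count how many respondents cited each agency as a source.
in_degree = Counter(
    source for sources in cited_sources.values() for source in sources
)

# Agencies cited most often would be drawn as the largest circles.
for agency, count in in_degree.most_common():
    print(f"{agency}: cited by {count} agency(ies)")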
A biological event, such as a naturally occurring pandemic or a terrorist attack with a weapon of mass destruction, could have catastrophic consequences for the nation. This potential threat underscores the importance of a national biosurveillance capability—that is, the ability to detect biological events of national significance to provide early warning and information to guide public health and emergency response. The 9/11 Commission Act of 2007 addresses this capability, in part, by creating NBIC within the Department of Homeland Security (DHS); the center was tasked with integrating information from human health, animal, plant, food, and environmental monitoring systems across the federal government to improve the likelihood of identifying a biological event at an earlier stage. In recent years, NBIC's budget has ranged from $10 million to $13 million annually.

GAO was asked to evaluate NBIC. This report discusses (1) the extent to which NBIC is implementing its roles as a biosurveillance integrator and (2) options for improving such integration. To conduct this work, GAO reviewed NBIC products and activities; conducted interviews and surveyed 19 federal partners, 11 of which have key roles in biosurveillance; interviewed NBIC officials; and analyzed the 9/11 Commission Act, the NBIC Strategic Plan, and the National Strategy for Biosurveillance.

The National Biosurveillance Integration Center (NBIC) has activities that support its integration mission, but faces challenges that limit its ability to enhance the national biosurveillance capability. In the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act) and the NBIC Strategic Plan, GAO identified three roles that NBIC must fulfill to meet its biosurveillance integration mission. The following describes actions and challenges in each role:

Analyzer: NBIC is to use technology and subject matter expertise, including analytical tools, to meaningfully connect disparate datasets and information for earlier warning and better situational awareness of biological events. GAO found that NBIC produces reports on biological events using open-source data, but faces challenges obtaining data and creating meaningful new information. For example, most of the federal partners with key roles in biosurveillance (8 of 11) stated that NBIC's products help their agencies identify biological events to little or no extent, generally because they already obtain such information directly from other federal partners more quickly. In addition, data that could help to identify and characterize a biological event may not exist or are not in a usable form. Further, few federal partners (5 of 19) reported that they share the data they do have with NBIC, citing legal and regulatory restrictions, among other reasons.

Coordinator: NBIC is to bring together partners across the federal biosurveillance community to enhance understanding of biological events. NBIC has developed procedures and activities to coordinate with partners, such as daily and biweekly calls, but faces challenges related to limited partner participation in the center's activities, a lack of partner personnel detailed to NBIC, and competing structures for convening federal partners. For example, although NBIC would like to obtain liaisons from each of its federal partners, only 3 of 19 partners provided NBIC with dedicated liaisons.

Innovator: NBIC is to facilitate the development of new tools to address gaps in biosurveillance integration.
GAO found that NBIC has efforts underway to develop some tools, such as pilot projects examining the use of social media data to identify health trends, but faces challenges prioritizing developmental efforts. For example, partners noted limitations in NBIC's ability to address gaps, such as limited resources and the difficulty of prioritizing the center's innovation efforts given its partners' diverse needs.

GAO identified various options that could address these challenges, ranging from strengthening the center's ability to implement its current roles to repealing NBIC's statute. GAO also identified potential benefits and limitations of each option. For example, one option would be to provide NBIC with additional authorities to obtain data to better develop meaningful information; however, this may also require additional investment. Another option is to not pursue national biosurveillance integration through NBIC and to consider designating one of the other federal partners with key roles in biosurveillance as the federal integrator. The options identified are not exhaustive, and some could be implemented together or in part. GAO did not evaluate the financial implications of each option, but acknowledges that some options may require additional investment or a shifting of resources or priorities to result in significant, long-lasting change.

GAO is not making recommendations. GAO provided a draft of this report to DHS and its federal partners, which provided technical comments that were incorporated as applicable.
We sent a questionnaire to all 57 IGs to obtain information on IG organization structure, staffing, and workload. The following series of 11 questions and answers provides an overall profile of the 56 IGs who responded. More specifically, information is provided on the IGs' budget obligations, the number and classification of staff by occupational job series, the career background and years of experience of the current IGs, and the techniques used by IGs to help ensure the quality of their work.

1. What were the IG budget obligations for fiscal year 1997?

For fiscal year 1997, the IGs reported obligations totaling $957 million. Of the reported amount, about $912 million (95 percent) was for the presidential IGs and $45 million (5 percent) was for the DFE IGs. Figure 1 shows how these funds were distributed among the various types of work performed by the IGs. The largest single use of the funds for the presidential and DFE IGs was for auditing—financial and performance. For the presidential IGs, about 46 percent of the funds was devoted to auditing; similarly, the DFE IGs spent about 47 percent of their funding on auditing.

2. How many and what types of staff were employed by the IGs as of September 30, 1997?

In fiscal year 1997, the IGs stated that they had a total of 9,348 staff, of which 8,818 (94 percent) worked in presidential IG offices and 530 (6 percent) worked in DFE IG offices. As shown in figure 2, these staff are classified in various types of occupational job series. The largest group of the IGs' staff (45 percent) is auditors, with the next largest group—26 percent—being investigators. Figure 3 provides a breakdown of the occupational job series held by the presidential and DFE IG staff. The reported distribution of the staff among the various occupations is proportionately about the same for both the presidential and DFE IGs.

As requested, we sent a questionnaire to all 57 IGs to obtain their views on current policy issues affecting them. These issues include law enforcement authority, the semiannual report, the 7-day letter, independence, and the effectiveness of PCIE, ECIE, and the Integrity Committee. These issues were identified through discussions with congressional staff and a review of congressional testimonies and other publications, such as those issued by the Congressional Research Service. Since the IGs' responses were anonymous, we were unable to follow up and obtain additional information or clarification. The following nine questions and answers summarize the views of the 56 IGs who responded to this questionnaire.

1. What type of law enforcement authority do the IGs think they should be granted on a permanent basis?

Currently, law enforcement authorities have not been granted to IGs across the board in public law. The IGs that have law enforcement authorities have acquired them through transfers from preexisting offices, specific statutory grants, delegation by the agency head, or special deputation by the Department of Justice. The special deputation can be granted on a case-by-case basis that is limited to the scope and duration of a case or by a blanket authority that covers a broader scope and is renewed after a period of time, as specified in the memorandum of understanding with Justice. In general, the law enforcement authorities provided to the IGs in performing investigations include serving arrest warrants, making arrests without warrants, carrying firearms, and serving search warrants.
Figure 17 identifies the various types of law enforcement actions that IGs can currently take. Most presidential IGs (81 percent) indicated that they had blanket authority granted by the Department of Justice. Most DFE IGs (60 percent) indicated that they did not have law enforcement authority but could obtain assistance from other IGs or Justice, if needed. Only three presidential IGs and one DFE IG had statutory law enforcement authority. A few IGs (five) had case-by-case authority granted by Justice.

Figure 18 identifies the type of law enforcement authority the IGs currently desire. Most IGs—81 percent of the presidential and 57 percent of the DFEs—responded that they should have statutory law enforcement authority. In addition, 77 percent of the presidential and 53 percent of the DFEs stated that blanket authority should be indefinite, unless a problem occurs. On a related issue, a majority—58 percent of the presidential and 60 percent of the DFEs—indicated that they should have testimonial subpoena authority.

2. What are the IGs’ views on the usefulness of the 7-day letter?

Section 5(d) of the IG Act, as amended, requires IGs to report immediately to the agency head whenever the IG becomes aware of “particularly serious or flagrant problems, abuses, or deficiencies relating to the administration of programs or operations.” The agency head, in turn, is to transmit the IG report, with the agency head’s comments, to the appropriate committees or subcommittees of the Congress within 7 calendar days. This is referred to as the 7-day letter. The survey responses indicated that none of the IGs had used the 7-day letter during the period January 1, 1990, to April 30, 1998. Earlier surveys have shown that the 7-day letter had been used on occasion by some IGs. For example, a survey conducted by the Inspections and Special Reviews Committee of PCIE in June 1986, and updated in 1989, showed that the 7-day letter had been used on 10 occasions by seven IGs.

Although a 7-day letter has not been issued in recent years, the IGs noted that it is a useful mechanism for encouraging agencies to comply with the IGs’ requests. A 10-year review of the IG Act by the House Committee on Government Operations found that the IGs viewed the use of the 7-day letter as a last resort to attempt to force appropriate action by the agency. Our survey responses indicated that the IGs continue to view the 7-day letter as a useful tool. Twelve of the 22 presidential IGs and 20 of the 24 DFE IGs that responded to the question found it useful to a great or very great extent. Three IGs specifically stated that the threat of a 7-day letter gets immediate results. Another IG responded that it had threatened the use of the letter twice and in each instance the agency responded to the IG’s request. Figure 19 provides the IGs’ views on the usefulness of the 7-day letter.

3. In the opinion of the IGs, should the current requirement for the preparation of a semiannual report be modified, replaced, or eliminated?

Section 5(a) of the IG Act requires that each IG issue a semiannual report to the Congress and agency management. The IG Act outlines 12 specific areas that are to be covered by each semiannual report, which are listed in appendix V. Overall, the IGs responded that most of the reporting requirements should remain. However, the following three requirements were the most frequently cited as needing change. The percent of IGs favoring each change is shown parenthetically.
Audits identifying questioned costs and funds to be put to better use (39 percent)

Statistical tables of questioned costs (55 percent)

Statistical tables of funds to be put to better use (57 percent)

While there was support for some changes to the reporting requirements, only one IG supported elimination of the report, and eight IGs suggested substituting another document for the semiannual report, such as the agency’s accountability report or an annual performance report. Thirty-five IGs stated that the reporting requirement should be changed from semiannual to annual. Further, the presidential IGs indicated that the semiannual report is at least moderately useful to the Congress, OMB, agency heads, and program managers. The DFE IGs noted that the report is of great use to the Congress, OMB, and agency heads and of moderate use to program managers.

4. Do the IGs generally believe that they have sufficient independence in the performance of their work?

The independence of the IGs is central to the success of the IG concept. Each IG reports to and is under the general supervision of the agency head or, in the case of a presidential IG, the official next in rank, to the extent delegated. However, these individuals, with a few specified exceptions, cannot prevent or prohibit the IG from initiating, carrying out, or completing any audit or investigation or from issuing any subpoena during the course of any audit or investigation. Under the IG Act, as amended, the heads of three departments—Defense, Justice, and Treasury—may prevent the IG in his or her department from initiating or proceeding with an audit or investigation in order to prevent the disclosure of information relating to national security, ongoing criminal investigations, and other matters outlined in the IG Act. If the department heads take this action, the IG Act provides for written notification to the House Committee on Government Reform and Oversight, the Senate Committee on Governmental Affairs, and other specified committees and subcommittees. Similar exceptions are provided for the Central Intelligence Agency and the Federal Reserve Board. In addition, the IGs are required to follow GAGAS, which require IGs and individual auditors to be free from personal and external impairments to independence.

As shown in figure 20, the survey results indicated that 81 percent of the presidential IGs and 73 percent of the DFE IGs responded that they have the level of independence needed to accomplish their mission. On the other hand, 15 percent of the presidential and 27 percent of the DFE IGs indicated that they do not have sufficient independence, but they did not identify specific areas in which they felt their independence was being compromised. The IGs did suggest some options for enhancing independence. The most frequently cited option was to allow IGs to submit their budget requests directly to OMB and the Congress rather than going through the agency review process. Other suggestions included ensuring that the removal of an IG is only for cause, clarifying the general supervision clause of the IG Act, and establishing term limits. Since these responses were part of the anonymous questionnaire, we were unable to obtain more specific information.

5. What are the IGs’ views regarding term limits and the process for identifying potential IG candidates?

In recent hearings, questions have been raised as to whether the IGs should have term limits.
Term limits would mandate a fixed period of time that IGs could serve at their respective agencies. The IGs’ survey responses indicate that while the IGs had varying views on this issue, neither the presidential IGs nor the DFE IGs favored term limits for their particular group. Their perceptions of the impact of term limits also varied: some viewed term limits as enhancing independence, while others viewed them as inhibiting independence. The following provides a breakdown of the responses.

Of the presidential IGs, 39 percent favored term limits for presidential IGs, 35 percent favored term limits for DFE IGs, 58 percent did not favor term limits for presidential IGs, and 31 percent did not favor term limits for DFE IGs.

Of the DFE IGs, 40 percent favored term limits for presidential IGs, 37 percent favored term limits for DFE IGs, 47 percent did not favor term limits for presidential IGs, and 57 percent did not favor term limits for DFE IGs.

Another key IG issue is the process for identifying potential IG candidates. We asked the IGs for their views on establishing a list of potential candidates that could be used as a source when vacancies occur. The survey responses showed that the presidential and DFE IGs have different views on establishing such a list. Fifty percent of the presidential IGs were of the opinion that a list should be established for all IGs, while 35 percent were not in favor of establishing any type of list, and 8 percent believed a list should be developed for presidential IGs only. In regard to the DFE IGs, 50 percent did not want any list established, while 33 percent favored the establishment of a list for all IGs, and 10 percent indicated other ways of identifying candidates, such as open competition, establishment of an IG candidate development program, and the creation of a board. However, when asked who should develop the list, there was general agreement that, if such a list were established, it should be developed by PCIE, ECIE, and OMB.

6. In the opinion of the IGs, are changes needed in the organizational structure of the DFE IGs?

The Congress enacted the Inspector General Act Amendments of 1988 to establish statutory IGs at 33 DFEs. The amendments provided for the entity heads to appoint their inspectors general. The powers and duties extended to the DFE IGs are essentially the same as those provided to presidentially appointed IGs. As shown in figure 21, the majority of the DFE IGs—approximately 53 percent—expressed satisfaction with the current organizational structure and operating environment and, therefore, did not favor any change. Fifteen percent of the presidential IGs were of the same opinion. However, some respondents—both presidential and DFE—expressed the opinion that some change was warranted. For example, 30 percent of the DFE IGs and 23 percent of the presidential IGs believed that consideration should be given to cross-servicing among the presidential and DFE IGs. Similarly, 23 percent of the DFE IGs and 15 percent of the presidential IGs favored cross-servicing among the DFE IGs. Additionally, the topic of reorganizing some of the DFE IG offices has been discussed within the IG community and at congressional hearings. We therefore asked the IGs their opinion on this matter. From an overall perspective, the presidential IGs were more in favor of combining the DFEs.
Specifically, 27 percent of the presidential IGs favored combining the smaller DFE IGs under several DFE IGs, whereas only 7 percent of the DFE IGs supported this alternative. Similarly, 15 percent of the presidential IGs supported the idea of combining all of the DFEs under a new presidentially appointed IG; 10 percent of the DFEs supported this approach.

7. Do the presidential IGs view PCIE as an effective organization?

PCIE is an interagency council established in March 1981 by Executive Order No. 12301. PCIE is primarily comprised of the presidentially appointed and Senate-confirmed IGs and is chaired by OMB’s Deputy Director for Management. An IG member serves as the Vice Chair. Members of PCIE are identified in appendix VI. The purpose of PCIE is to identify, review, and discuss areas of weakness in federal programs and to develop plans for coordinated governmentwide activities to address problems and promote economy and efficiency in federal programs. According to PCIE’s annual report for fiscal year 1996, A Progress Report to the President, PCIE works to address integrity, economy, and effectiveness issues that transcend individual agencies and to increase the professionalism and effectiveness of IG personnel throughout government.

As shown in figure 22, 46 percent of the presidential IGs responded that PCIE was a moderately effective organization. However, about 35 percent responded that PCIE was moderately to very ineffective. Both sides presented pros and cons of PCIE’s effectiveness. The presidential IGs noted that PCIE is a good forum for the exchange of information and discussion of common issues among IGs. It is also viewed as useful in providing professional training to its personnel—both audit and investigative. However, some presidential IGs stated that it is difficult to reach agreement or consensus in PCIE meetings because of the diversity of its membership, with representatives bringing to meetings different agendas based on their respective agencies’ missions. Further, according to some IGs, PCIE needs to better address governmentwide issues and projects.

In commenting on a draft of this report, the presidential IGs noted that PCIE has provided valuable service to the IG community. They noted that PCIE has developed governmentwide standards for audits, investigations, and inspections. In addition, PCIE maintains two training centers that are used for the benefit of all IGs. Additionally, the IGs commented that PCIE has worked collectively on various projects and governmentwide issues. For example, representatives from nine IGs conducted an assessment of the Internal Revenue Service’s inspection service. Further, the IGs noted that PCIE has developed forums for the discussion and exchange of information related to the Year 2000 problem and the Government Performance and Results Act of 1993.

8. Do the DFE IGs view ECIE as an effective organization?

ECIE was established in 1992 by Executive Order No. 12805. ECIE membership consists primarily of the DFE IGs, and the council is chaired by OMB’s Deputy Director for Management. An IG member serves as the Vice Chair. The entire ECIE membership is identified in appendix VII. The purpose of ECIE is the same as that of PCIE: to identify, review, and discuss areas of weakness in federal programs and to develop plans for coordinated governmentwide activities to address problems and promote economy and efficiency in federal programs.
As shown in figure 23, the vast majority of DFE IGs—80 percent—indicated that ECIE is an effective organization, with 10 percent responding very effective and 70 percent responding moderately effective. The DFE IGs noted that ECIE facilitates communication and the sharing of common community issues among IGs and provides a forum to discuss and keep abreast of current issues. A few DFE IGs stated that ECIE facilitates coordination and information sharing between ECIE and OMB. In addition, some DFE IGs stated that their size often precludes them from having the resources necessary to undertake common projects. Since all of these responses were part of the anonymous questionnaire, we were unable to follow up with the DFE IGs to obtain more details on the specific concerns they raised.

9. What are the presidential IGs’ views on the effectiveness of the PCIE Integrity Committee?

The Integrity Committee was established by PCIE in January 1995 for the purpose of receiving, reviewing, and referring for investigation allegations of wrongdoing against certain staff members of the IG offices. The committee replaced a working group that reviewed allegations against IGs and their principal deputies. In March 1996, Executive Order No. 12993 formalized the process and established the membership of the Integrity Committee. The Integrity Committee is chaired by the Federal Bureau of Investigation (FBI) representative to PCIE. Other members include the Special Counsel, Office of Special Counsel; the Director, Office of Government Ethics; and three or more IGs, representing both PCIE and ECIE. In addition, the Chief of the Public Integrity Section, Department of Justice Criminal Division, serves as an advisor.

Allegations received by PCIE are assigned to the Integrity Committee for processing and review. Allegations that the Integrity Committee deems worthy of further review are referred to the Department of Justice, Public Integrity Section, to determine whether the allegation, if proved, would constitute a violation of federal criminal law. If it is determined that a criminal investigation is warranted, the Public Integrity Section can investigate the allegation or refer it to the FBI. If it is determined that a criminal investigation is not warranted, or if the investigation substantiated misconduct but prosecution is declined, the allegation is returned to the Integrity Committee for an administrative review. If a noncriminal allegation is determined to warrant referral, the Integrity Committee can (1) refer the allegation to the head of the affected agency for a response, (2) request an uninvolved IG, or its staff on detail, to conduct an investigation, or (3) refer the allegation to an appropriate governmentwide agency for review. For example, the Special Counsel, Office of Special Counsel, who as a member of the Integrity Committee participates in the review of all allegations, can ask that particular allegations be referred to that office for investigation. Upon completion of the agency’s follow-up, investigation, or review, the agency head notifies the Integrity Committee of the results and what action, if any, should be taken against the subject of the allegation. If the committee concurs with the agency’s findings, the matter is closed with a letter to the PCIE Chair and others, as appropriate. If the Integrity Committee does not concur with the investigation, the matter is to be referred back to the agency head, or to another agency, for appropriate action.
The matter is not to be closed until the committee concurs with the agency’s investigative findings. The majority of the presidential IGs—58 percent—stated that the Integrity Committee was effective in handling allegations of wrongdoing against an IG or an IG staff member. However, about 31 percent of the presidential IGs responded that the Integrity Committee was ineffective. Some IGs raised concerns that the process of handling allegations took too long and did not adequately address noncriminal or administrative allegations. In commenting on a draft of this report, the Chairman of the Integrity Committee noted that although the number of cases has doubled since 1990, the time required to process cases has declined. According to the Chairman, it took approximately 28 months to process cases in 1990, but in 1998 it took only about 4 months. Additionally, the Chairman stated that about 93 percent of the allegations the Integrity Committee receives are determined to be unsubstantiated, insufficiently supported, or frivolous, or to fall outside of the Committee’s purview.

The IGs and OMB generally agreed with the contents of the report, and OMB generally agreed with the comments provided by PCIE and ECIE. Additionally, ECIE noted that the report presents the survey results in a clear and objective manner. ECIE provided technical comments that we have incorporated where appropriate. The Chairman of the Integrity Committee commented only on the IGs’ responses related to the Committee. We have incorporated these comments as appropriate.

We are sending copies of this report to the Vice Chairs and Ranking Minority Members of the House Committee on Government Reform and Oversight and its Subcommittee on Government Management, Information and Technology; the Ranking Minority Member of the Senate Special Committee on Aging; the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs, the House and Senate Committees on Appropriations, and the House and Senate Committees on the Budget; the Director, Office of Management and Budget; and the 57 IGs. Copies will also be made available to others upon request. The major contributors to this letter are listed in appendix X. If you have any questions concerning this report, please contact me at (202) 512-6240.

The importance of legislative underpinnings for auditing in the federal government dates back almost half a century to the Accounting and Auditing Act of 1950, which held federal agency heads responsible for internal controls, including appropriate internal audit. The need to strengthen this requirement became evident when, in 1976, GAO began to issue a series of reports on reviews at 157 fiscal offices in 11 major federal organizations. These reports indicated widespread and serious internal control weaknesses that resulted in the waste of government money through fraud and mismanagement. We reported that federal agencies did not use their internal auditors to examine their financial operations, and when they did, no action was taken on the auditors’ recommendations. We also found that internal audit groups were not independent, that they were underfunded and understaffed, that audit efforts were fragmented among several offices, and that problems found by the audits were not communicated to the agency heads. With rare exceptions, the executive agencies had not adequately monitored, assessed, or reviewed their own operations and programs.
As a result, the Congress passed the Inspector General Act of 1978 (IG Act), Public Law 95-452, as amended. The IG Act established Inspector General offices in federal departments and agencies to create independent and objective units responsible for (1) conducting and supervising audits and investigations, (2) providing leadership and coordination and recommending policies to promote economy, efficiency, and effectiveness, and (3) detecting and preventing fraud and abuse in their agencies’ programs and operations.

Subsequently, two interagency councils were established to provide a coordinating mechanism for the IGs. Through these councils, IGs are to identify, review, and discuss areas of weakness in federal programs and develop plans for coordinated governmentwide activities to address problems and promote economy and efficiency in federal programs. The President’s Council on Integrity and Efficiency (PCIE) was established in March 1981 by Executive Order No. 12301 and is primarily comprised of presidentially appointed and Senate-confirmed IGs. The Executive Council on Integrity and Efficiency (ECIE) was established in May 1992 by Executive Order No. 12805 and consists primarily of the DFE IGs. The Office of Management and Budget’s (OMB) Deputy Director for Management chairs both of these councils. An IG member of each council serves as the vice chair. Appendixes IV and V contain complete lists of PCIE and ECIE membership.

To carry out their mandate, IGs perform various types of work, including financial and performance audits, investigations, and inspections. The types of work performed by the IGs are highlighted below.

Financial statement audits provide reasonable assurance about whether the financial statements of an audited entity present fairly the financial position, results of operations, and cash flows in conformity with generally accepted accounting principles, based on audits conducted in accordance with GAGAS.

Financial-related audits include determining whether (1) financial information is presented in accordance with established or stated criteria, (2) the entity has adhered to specific financial compliance requirements, or (3) the entity’s internal control structure over financial reporting and/or safeguarding assets is suitably designed and implemented to achieve the control objectives.

Investigation is a planned, systematic search for relevant, objective, and sufficient facts and evidence derived through interviews, record examinations, and the application of other approved professional investigative techniques. Investigations may be administrative, civil, or criminal in nature.

Inspection is a process, other than an audit or an investigation, that is aimed at evaluating, reviewing, studying, and analyzing the programs and activities of a department or agency for the purposes of providing information to managers for decision-making, for making recommendations for improvements to programs, policies, or procedures, and for administrative action. Inspections include providing factual and analytical information, monitoring compliance, measuring performance, assessing the efficiency and effectiveness of operations, and conducting inquiries into allegations of fraud, waste, abuse, and mismanagement.
Performance audit is an objective and systematic examination of evidence for the purpose of providing an independent assessment of the performance of a government organization, program, activity, or function in order to provide information to improve public accountability and facilitate decision-making by parties with responsibility to oversee or initiate corrective action. Performance audits include economy and efficiency audits, program audits, and program evaluations, which are highlighted below.

Economy and efficiency audits include determining (1) whether the entity is acquiring, protecting, and using its resources economically and efficiently, (2) the causes of inefficiencies or uneconomical practices, and (3) whether the entity has complied with laws and regulations on matters of economy and efficiency.

Program audits include determining (1) the extent to which the desired results or benefits established by the legislature or other authorizing body are being achieved, (2) the effectiveness of organizations, programs, activities, or functions, and (3) whether the entity has complied with significant laws and regulations applicable to the program.

Program evaluations are systematic studies conducted periodically to assess how well a program is working. Program evaluations include (1) assessing the extent to which a program is operating as it was intended, (2) assessing the extent to which a program achieves its outcome-oriented objectives, (3) assessing the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program, and (4) comparing a program’s outputs or outcomes with the costs to produce them.

Clinger-Cohen Act implementation encourages federal agencies to evaluate and adopt best management and acquisition practices used by both private and public sector organizations. It requires agencies to base decisions about information technology investments on quantitative and qualitative factors, such as the costs, benefits, and risks of those investments, and to use performance data to demonstrate how well information technology expenditures support improvements to agency programs.

The Year 2000 computer system problem results from the inability of computer programs to interpret the correct century from a recorded or calculated date having only two digits to indicate the year. Unless corrected, computer systems could malfunction or produce incorrect information when the year 2000 is encountered during automated data processing.

Oversight of Nonfederal Audits ensures that audit work performed by nonfederal auditors complies with applicable federal standards.

Operation of Hotlines involves receiving and analyzing allegations of waste, fraud, or abuse in connection with the programs and operations of the IGs’ respective agencies. In commenting on a draft of this report, one IG responded that the office performs administrative enforcement activities authorized by the Program Fraud Civil Remedies Act and similar legislation.

Our objectives were to obtain (1) information on IG organization, staffing, workload, and operational issues and (2) the views of the IGs on policy issues affecting them, such as term limits, law enforcement authority, semiannual reporting, and the IG selection process. To accomplish our objectives, we developed and administered two questionnaires to obtain information from the IGs.
One questionnaire was for attribution and requested information regarding the IGs’ organizational structure, staffing, and workload. At your request, the other questionnaire was anonymous and requested the IGs’ views on current policy issues. Because this questionnaire was anonymous, we were unable to contact the IGs to obtain clarification, additional details, or missing responses. Prior to sending out the questionnaires, we pretested them with the IGs from the Smithsonian Institution and the Office of Personnel Management and revised them as necessary. The questionnaires were sent to all 57 IGs—27 presidentially appointed and 30 DFE IGs. We received responses to each questionnaire from 56 IGs. We did not independently verify the information provided by the IGs. In addition, we reviewed the testimony presented at the April 21, 1998, hearing held by the Subcommittee on Government Management, Information and Technology, House Committee on Government Reform and Oversight, and the September 9, 1998, hearing held by the Senate Committee on Governmental Affairs to ascertain the current IG policy issues. We performed our review between April 1998 and December 1998 in accordance with generally accepted government auditing standards.

We requested comments on a draft of this report from the Office of Management and Budget’s (OMB) Acting Deputy Director for Management, the Chairman of the Integrity Committee, and all 57 IGs. On December 10, 1998, and December 14, 1998, respectively, we received oral comments from the Integrity Committee and OMB that are discussed in the “Agency Comments” section. The Vice Chair of the President’s Council on Integrity and Efficiency (PCIE) and the Vice Chair of the Executive Council on Integrity and Efficiency (ECIE) provided written comments consolidating the comments of the presidential and DFE IGs, respectively. These comments are discussed in the “Agency Comments” section and are reprinted in appendixes VIII and IX, respectively.

The following are GAO’s comments regarding the presidential IG views contained in the December 10, 1998, letter from PCIE.

1. The report has been revised accordingly.

2. In responding to the anonymous questionnaire, the presidential IGs were provided an opportunity to give their views and opinions on the effectiveness of PCIE. Of the 26 presidential IGs, 21 (or approximately 81 percent) provided written comments, which are summarized in the report. The written comments from 16 of the 21 IGs did not fully reflect the information discussed in the December 10, 1998, letter. However, we have revised the report to include the additional views presented in the letter.

Jacquelyn Hamilton, Attorney
Pursuant to a congressional request, GAO surveyed inspectors general (IG) to obtain: (1) information on their organizational structure, staffing, and workload; and (2) their views on current policy issues affecting them. GAO noted that: (1) IGs' work covers a broad spectrum of agency programs and operations; (2) in general, the IGs responded that they have the expertise and resources necessary to assemble the teams of staff needed to perform the major types of work for which they are responsible; (3) additionally, while they generally anticipate the level of work to remain the same or slightly increase across the range of areas they review, IGs anticipated the greatest increase to be in information technology reviews; and (4) IGs also indicated that they were generally satisfied with their role and the overall legislation governing them, but did identify certain potential areas for modification.
As the central human resources agency for the federal government, OPM is tasked with ensuring that the government has an effective civilian workforce. To carry out this mission, OPM delivers human resources products and services, including policies and procedures for recruiting and hiring; provides health and training benefit programs; and administers the retirement program for federal employees. According to the agency, approximately 2.7 million active federal employees and nearly 2.5 million retired federal employees rely on its services. The agency’s March 2008 analysis of federal employment retirement data estimates that nearly 1 million active federal employees will be eligible to retire and almost 600,000 will most likely retire by 2016.

According to OPM, the retirement program serves current and former federal employees by providing (1) tools and options for retirement planning and (2) retirement compensation. The agency administers two defined-benefit retirement plans that provide retirement, disability, and survivor benefits to federal employees. The first plan, the Civil Service Retirement System (CSRS), provides retirement benefits for most federal employees hired before 1984. The second plan, the Federal Employees Retirement System (FERS), covers most employees hired in or after 1984 and provides benefits that include Social Security and a defined contribution system.

OPM and employing agencies’ human resources and payroll offices are responsible for processing federal employees’ retirement applications. The process begins when an employee submits a paper retirement application to his or her employer’s human resources office and is completed when the individual begins receiving regular monthly benefit payments (as illustrated in fig. 1). Once an employee submits an application, the human resources office provides retirement counseling services to the employee and augments the retirement application with additional paperwork, such as a separation form that finalizes the date the employee will retire. The agency then provides the retirement package to the employee’s payroll office. After the employee separates for retirement, the payroll office is responsible for reviewing the documents for correct signatures and information, making sure that all required forms have been submitted, and adding any additional paperwork that will be necessary for processing the retirement package. Once the payroll office has finalized the paperwork, the retirement package is mailed to OPM to continue the retirement process. Payroll offices are required to submit the package to OPM within 30 days of the retiree’s separation date.

Upon receipt of the retirement package, OPM calculates an interim payment based on information provided by the employing agency. The interim payments are partial payments that typically provide retirees with 80 percent of the total monthly benefit they will eventually receive. OPM then starts the process of analyzing the retirement application and associated paperwork to determine the total monthly benefit amount to which the retiree is entitled. This process includes collecting additional information from the employing agency’s human resources and payroll offices or from the retiree to ensure that all necessary data are available before calculating benefits. After OPM completes its review and authorizes payment, the retiree begins receiving 100 percent of the monthly retirement benefit payments.
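To make the interim-payment arrangement concrete, the short Python sketch below computes a partial payment at the 80 percent rate described above. It is a minimal illustration only; the function name, parameter names, and dollar figures are hypothetical and do not reflect OPM's actual systems or benefit formulas.

```python
# Hypothetical illustration of OPM's interim retirement payments: retirees
# typically receive about 80 percent of their eventual monthly benefit
# until the final amount is adjudicated.

INTERIM_RATE = 0.80  # typical share of the final monthly benefit paid in the interim

def interim_payment(estimated_monthly_benefit: float,
                    rate: float = INTERIM_RATE) -> float:
    """Return the partial monthly payment made while the claim is processed."""
    return round(estimated_monthly_benefit * rate, 2)

# Example: an employee whose eventual benefit is estimated at $2,500 per month
# would receive about $2,000 per month in the interim.
print(interim_payment(2500.00))  # prints 2000.0
```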
OPM then stores the paper retirement folder at the Retirement Operations Center in Boyers, Pennsylvania. The agency recently reported that the average time to process a new retirement claim is 156 days. According to the Deputy Associate Director for the Center of Retirement and Insurance Services, about 200 employees are directly involved in processing the approximately 100,000 retirement applications OPM receives annually. Retirement processing includes functions such as determining retirement eligibility, inputting data into benefit calculators, and providing customer service. The agency uses over 500 different procedures, laws, and regulations, which are documented on the agency’s internal website, to process retirement applications. For example, the site contains memorandums that outline new procedures for handling special retirement applications, such as those for disability or court orders. Further, OPM’s retirement processing involves the use of over 80 information systems that have approximately 400 interfaces with other internal and external systems. For instance, 26 internal systems interface with the Department of the Treasury to provide, among other things, information regarding the total amount of benefit payments to which an employee is entitled.

OPM has reported that a greater retirement processing workload is expected due to an anticipated increase in the number of retirement applications over the next decade, although current retirement processing operations are at full capacity. Further, the agency has identified several factors that limit its ability to process retirement benefits in an efficient and timely manner. Specifically, OPM noted that:

current processes are paper-based and manually intensive, resulting in a higher number of errors and delays in providing benefit payments;

the high costs, limited capabilities, and other problems with the existing information systems and processes pose increasing risks to the accuracy of benefit payments;

current manual capabilities restrict customer service;

federal employees have limited access to retirement records, making planning for retirement difficult; and

attracting qualified personnel to operate and maintain the antiquated retirement systems, which have about 3 million lines of custom programming, is challenging.

Recognizing the need to modernize its retirement processing, in the late 1980s OPM began initiatives that were aimed at automating its antiquated paper-based processes. Initial modernization visions called for developing an integrated system and automated processes to provide prompt and complete benefit payments. However, following attempts over more than two decades, the agency has not yet been successful in achieving the modernized retirement system that it envisioned. In early 1987, OPM began a program called the FERS Automated Processing System. However, after 8 years of planning, the agency decided to reevaluate the program, and the Office of Management and Budget requested an independent review of the program that identified various management weaknesses. The independent review suggested areas for improvement and recommended terminating the program if immediate action was not taken. In mid-1996, OPM terminated the program. In 1997, OPM began planning a second modernization initiative, called the Retirement Systems Modernization (RSM) program.
The agency originally intended to structure the program as an acquisition of commercially available hardware and software that would be modified in-house to meet its needs. From 1997 to 2001, OPM developed plans and analyses and began developing business and security requirements for the program. However, in June 2001, it decided to change the direction of the retirement modernization initiative. In late 2001, retaining the name RSM, the agency embarked upon its third initiative to modernize the retirement process and examined the possibility of privately sourced technologies and tools. Toward this end, the agency determined that contracting was a viable alternative and, in 2006, awarded three contracts for the automation of the retirement process, including the conversion of paper records to electronic files and consulting services to redesign its retirement operations.

In February 2008, OPM renamed the program RetireEZ and deployed an automated retirement processing system. However, by May 2008 the agency determined that the system was not working as expected and suspended system operation. In October 2008, after 5 months of attempting to address quality issues, the agency terminated the contract for the system. In November 2008, OPM began restructuring the program and reported that its efforts to modernize retirement processing would continue. However, after several years of trying to revitalize the program, the agency terminated the retirement system modernization in February 2011.

OPM’s efforts to modernize its retirement system have been hindered by weaknesses in several key IT management disciplines. Our experience with major modernization initiatives has shown that having sound management capabilities is essential to achieving successful outcomes. Among others, these capabilities include project management, risk management, organizational change management, system testing, cost estimating, progress reporting, planning, and oversight. However, we found that many of the capabilities in these areas were not sufficiently developed. For example, in reporting on RSM in February 2005, we noted weaknesses in key management capabilities, such as project management, risk management, and organizational change management.

Project management is the process for planning and managing all project-related activities, including defining how project components are interrelated. Effective project management allows the performance, cost, and schedule of the overall project to be measured and controlled in comparison to planned objectives. Although OPM had defined major retirement modernization project components, it had not defined the dependencies among them. Specifically, the agency had not identified critical tasks and their impact on the completion of other tasks. By not identifying critical dependencies among retirement modernization components, OPM increased the risk that unforeseen delays in one activity could hinder progress in other activities.

Risk management entails identifying potential problems before they occur. Risks should be identified as early as possible, analyzed, mitigated, and tracked to closure. OPM officials acknowledged that they did not have a process for identifying and tracking retirement modernization project risks and mitigation strategies on a regular basis but stated that the agency’s project management consultant would assist it in implementing a risk management process.
Without such a process, OPM did not have a mechanism to address potential problems that could adversely impact the cost, schedule, and quality of the retirement modernization project.

Organizational change management includes preparing users for the changes to how their work will be performed as a result of a new system implementation. Effective organizational change management includes plans to prepare users for the impacts the new system might have on their roles and responsibilities and a process to manage those changes. Although OPM officials stated that change management posed a substantial challenge to the success of retirement modernization, they had not developed a detailed plan to help users transition to different job responsibilities. Without having and implementing such a plan, confusion about user roles and responsibilities could have hindered effective implementation of new retirement systems. We recommended that the Director of OPM ensure that the retirement modernization program office expeditiously establish processes for effective project management, risk management, and organizational change management. In response, the agency initiated steps toward establishing management processes for retirement modernization and demonstrated activities to address our recommendations.

We reported on OPM’s retirement modernization again in January 2008, as the agency was on the verge of deploying a new automated retirement processing system. We noted weaknesses in additional key management capabilities, including system testing, cost estimating, and progress reporting.

Effective testing is an essential activity of any project that includes system development. Generally, the purpose of testing is to identify defects or problems in meeting defined system requirements or satisfying system user needs. At the time of our review, 1 month before OPM planned to deploy a major system component, test results showed that the component had not performed as intended. We warned that until actual test results indicated improvement in the system, OPM risked deploying technology that would not accurately calculate retirement benefits. Although the agency planned to perform additional tests to verify that the system would work as intended, the schedule for conducting these tests was compressed from 5 months to 2-1/2 months, with several tests to be performed concurrently rather than in sequence. The agency identified a lack of testing resources, including the availability of subject matter experts, and the need for further system development as contributing to the delay of planned tests and the need for concurrent testing. The high degree of concurrent testing that OPM planned in order to meet its February 2008 deployment schedule increased the risk that the agency would not have the resources or time to verify that the planned system worked as expected.

Cost estimating represents the identification of individual project cost elements, using established methods and valid data to estimate future costs. The establishment of a reliable cost estimate is important for developing a project budget and having a sound basis for measuring performance, including comparing the actual and planned costs of project activities. Although OPM developed a retirement modernization cost estimate, the estimate was not supported by the documentation that is fundamental to a reliable cost estimate.
Without a reliable cost estimate, OPM did not have a sound basis for formulating retirement modernization budgets or for developing the cost baseline that is necessary for measuring and predicting project performance.

Earned value management (EVM) is a tool for measuring program progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Fundamental to reliable EVM is the development of a baseline against which variances are calculated. OPM used EVM to measure and report monthly performance of the retirement modernization system. The reported results provided a favorable view of project performance over time because the variances indicated the project was progressing almost exactly as planned. However, this view of project performance was not reliable because the baseline on which it was based did not reflect the full scope of the project, had not been validated, and was unstable (i.e., subject to frequent changes). This EVM approach in effect ensured that material variances from planned project performance would not be identified and that the state of the project would not be reliably reported.

We recommended that the Director of OPM address these deficiencies by, among other things, conducting effective system tests prior to system deployment and improving program cost estimation and progress reporting. OPM concurred with our recommendations and stated that it would take steps to address the weaknesses we identified. Nevertheless, OPM deployed a limited initial version of the modernized retirement system in February 2008. After unsuccessful efforts to address system quality issues, the agency suspended system operation, terminated the system contract, and began restructuring the modernization effort.

In April 2009, we again reported on OPM’s retirement modernization, noting that the agency still remained far from achieving the modernized retirement processing capabilities that it had planned. Specifically, we noted that significant weaknesses continued to exist in three key management areas that we had previously identified—cost estimating, progress reporting, and testing—while also noting two additional weaknesses related to planning and oversight. Despite agreeing with our January 2008 recommendation that OPM develop a revised retirement modernization cost estimate, the agency had not completed initial steps for developing a new cost estimate by the time we reported again in April 2009. At that time, we reported that the agency had not yet fully defined the estimate’s purpose, developed an estimating plan, or defined the project’s characteristics. By not completing these steps, OPM increased the risk that it would produce an unreliable estimate and not have a sound basis for measuring project performance and formulating retirement modernization budgets.

Although it agreed with our January 2008 recommendation to establish a basis for effective EVM, OPM had not completed key steps as of the time of our April 2009 report. Specifically, despite planning to begin reporting on the retirement project’s progress using EVM, the agency was not prepared to do so because initial steps, including the development of a reliable cost estimate and the validation of a baseline, had not been completed. Engaging in EVM reporting without first performing these fundamental steps could have again rendered the agency’s assessments unreliable.
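The mechanics behind the variances GAO describes are straightforward once a validated baseline exists. The Python sketch below is a minimal illustration of the standard EVM arithmetic, not OPM's or GAO's actual tooling; the function name and the dollar figures are hypothetical.

```python
# Minimal earned value management (EVM) sketch using the standard formulas.
# PV (planned value): budgeted cost of the work scheduled to date (the baseline).
# EV (earned value):  budgeted cost of the work actually completed to date.
# AC (actual cost):   cost actually incurred for that completed work.

def evm_metrics(pv: float, ev: float, ac: float) -> dict:
    """Return the standard EVM variances and performance indices."""
    return {
        "schedule_variance": ev - pv,           # negative means behind schedule
        "cost_variance": ev - ac,               # negative means over budget
        "schedule_performance_index": ev / pv,  # below 1.0 means behind schedule
        "cost_performance_index": ev / ac,      # below 1.0 means over budget
    }

# Hypothetical month: $10 million of work was scheduled, $8 million worth
# was completed, and $9 million was actually spent.
for name, value in evm_metrics(pv=10.0, ev=8.0, ac=9.0).items():
    print(f"{name}: {value:.2f}")
```

Note that every metric is computed against the planned-value baseline: if the baseline omits part of the project's scope or is revised whenever work slips, the variances stay near zero regardless of how the project is actually performing, which is the weakness GAO identified in OPM's reporting.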
As previously discussed, effective testing is an essential component of any project that includes developing systems. To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion. Beginning the test planning process in the early stages of a project life cycle can reduce rework later. Early test planning in coordination with requirements development can provide major benefits. For example, planning for test activities during the development of requirements may reduce the number of defects identified later and the costs related to requirements rework or change requests. OPM’s need to compress its testing schedule and conduct tests concurrently, as we reported in January 2008, illustrates the importance of planning test activities early in a project’s life cycle. However, at the time of our April 2009 report, the agency had not begun to plan test activities in coordination with developing its requirements for the system it was planning at that time. Consequently, OPM increased the risk that it would again deploy a system that did not satisfy user expectations and meet requirements.

Project management principles and effective practices emphasize the importance of having a plan that, among other things, incorporates all the critical areas of system development and is to be used as a means of determining what needs to be done, by whom, and when. Although OPM had developed a variety of informal documents and briefing slides that described retirement modernization activities, the agency did not have a complete plan that described how the program would proceed in the wake of its decision to terminate the system contract. As a result, we concluded that until the agency completed and used a plan that could guide its efforts, it would not be properly positioned to move forward with its restructured retirement modernization initiative.

Office of Management and Budget and GAO guidance call for agencies to ensure effective oversight of IT projects throughout all life-cycle phases. Critical to effective oversight are investment management boards made up of key executives who regularly track the progress of IT projects such as system acquisitions or modernizations. OPM’s Investment Review Board was established to ensure that major investments are on track by reviewing their progress and determining appropriate actions when investments encounter challenges. Despite meeting regularly and being provided with information that indicated problems with the retirement modernization, the board did not ensure that retirement modernization investments were on track, nor did it determine appropriate actions for course correction when needed. For example, from January 2007 to August 2008, the board met and was presented with reports that described problems the retirement modernization program was facing, such as the lack of an integrated master schedule and earned value data that did not reflect the “reality or current status” of the program. However, meeting minutes indicated that no discussion or action was taken to address these problems. According to a board member, OPM had not established guidance on how the board was to communicate recommendations and needed corrective actions for the investments it is responsible for overseeing. Without a fully functioning oversight body, OPM could not monitor the retirement modernization and make the course corrections that effective boards are intended to provide.
Our April 2009 report made new recommendations that OPM address the weaknesses in the retirement modernization project that we identified. Although the agency began taking steps to address them, the recommendations were overtaken by the agency’s decision in February 2011 to terminate the retirement modernization project.

In mid-January 2012, OPM released a new plan that describes the agency’s intention to improve retirement processing through actions that include hiring and training 56 new staff to adjudicate retirement claims and 20 additional staff to support the claims process; establishing higher production standards and identifying potential retirement process improvements; and working with other agencies to improve the accuracy and completeness of the data they provide to OPM for use in retirement processing. Additionally, OPM’s new plan identifies existing and planned IT improvements to support the retirement process. Recognizing that its previous large-scale efforts to automate the retirement process have failed, the agency characterizes its new plan as representing partial, progressive IT improvements. These efforts include providing retirees with the capability to access and update their accounts to change addresses, banking information, and tax exemptions, and planning to automate retirement applications and to automatically collect retirement data from agencies’ payroll processing centers. Under this approach, OPM expects to eliminate the agency’s retirement processing backlog within 18 months and to accurately process 90 percent of its cases within 60 days. However, this goal represents a substantial reduction from the agency’s fiscal year 2009 retirement modernization goal of accurately processing 99 percent of cases within 30 days. Moreover, the plan does not describe whether or how the agency intends to modify or decommission the over 80 legacy systems that support retirement processing.

In summary, despite OPM’s recognition of the need to improve the timeliness and accuracy of retirement processing, the agency has thus far been unsuccessful in several attempts to develop the capabilities it has long sought. For over two decades, the agency’s retirement modernization efforts have been plagued by weaknesses in management capabilities that are critical to the success of such endeavors. Among the management disciplines the agency has struggled with are project management, risk management, organizational change management, cost estimating, system testing, progress reporting, planning, and oversight. Although the agency is now considering modest, incremental efforts to improve retirement processing, the development and institutionalization of the aforementioned management capabilities remain important to OPM’s success in improving the delivery of retirement services.

Mr. Chairman, this concludes my statement today. I would be pleased to answer any questions that you or other members of the Subcommittee may have. If you have any questions concerning this statement, please contact Valerie C. Melvin, Director, Information Management and Technology Resources Issues, at (202) 512-6304 or [email protected]. Other individuals who made key contributions include Mark T. Bird, Assistant Director; Barbarol J. James; Lee A. McCracken; Teresa M. Neven; and Robert L. Williams, Jr.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Office of Personnel Management (OPM) is the central human resources agency for the federal government and, as such, is responsible for ensuring that the government has an effective civilian workforce. As part of its mission, OPM defines recruiting and hiring processes and procedures; provides federal employees with various benefits, such as health benefits; and administers the retirement program for federal employees. OPM’s use of information technology (IT) is critical in carrying out its responsibilities; in fiscal year 2011 the agency invested $79 million in IT systems and services. For over two decades, OPM has been attempting to modernize its federal employee retirement process by automating paper-based processes and replacing antiquated information systems. However, these efforts have been unsuccessful, and OPM canceled its most recent large-scale retirement modernization effort in February 2011. GAO was asked to summarize its work on challenges OPM has faced in attempting to modernize the federal employee retirement process. To do this, GAO relied on previously published work in addition to reviewing OPM’s recent plan for retirement services.

In a series of reviews, GAO found that OPM’s retirement modernization efforts were hindered by weaknesses in key management practices that are essential to successful IT modernization projects. For example, in 2005, GAO made recommendations to address weaknesses in the following areas:

Project management: While OPM had defined major components of its retirement modernization effort, it had not identified the dependencies among them, increasing the risk that delays in one activity could have unforeseen impacts on the progress of others.

Risk management: OPM did not have a process for identifying and tracking project risks and mitigation strategies on a regular basis. Thus, OPM lacked a mechanism to address potential problems that could adversely impact the cost, schedule, and quality of the modernization effort.

Organizational change management: OPM had not adequately prepared its staff for changes to job responsibilities resulting from the modernization by developing a detailed transition plan. This could lead to confusion about roles and responsibilities and hinder effective system implementation.

In 2008, as OPM was on the verge of deploying an automated retirement processing system, GAO reported deficiencies in and made recommendations to address additional management capabilities:

Testing: The results of tests 1 month prior to the deployment of a major system component revealed that it had not performed as intended. These defects, along with a compressed testing schedule, increased the risk that the system would not work as intended upon deployment.

Cost estimating: The cost estimate OPM developed was not fully reliable. This meant that the agency did not have a sound basis for formulating budgets or developing a program baseline.

Progress reporting: The baseline against which OPM was measuring the progress of the program did not reflect the full scope of the project; this increased the risk that variances from planned performance would not be detected.

In 2009, GAO reported that OPM continued to have deficiencies in its cost estimating, progress reporting, and testing practices and made recommendations to address these deficiencies, as well as additional weaknesses in the planning and oversight of the modernization effort.
OPM agreed with these recommendations and began to address them, but the agency terminated the modernization effort in February 2011. More recently, in January 2012, OPM released a new plan to improve retirement processing that aims at targeted, incremental improvements rather than a large-scale modernization. Specifically, OPM plans to hire new claims-processing staff, take steps to identify potential process improvements, and work with other agencies to improve data quality. Further, OPM intends to make IT improvements that allow retirees to access and update their accounts and automate the retirement application process. However, the plan reflects a less ambitious retirement processing timeliness goal and does not address improving or eliminating the legacy systems that support retirement processing.

GAO is not making new recommendations at this time. GAO has previously made numerous recommendations to address IT management challenges OPM has faced in carrying out its retirement modernization efforts. Fully addressing these challenges remains key to the success of OPM's efforts.
DOD's efforts to improve its financial management and achieve audit readiness have evolved over a number of years into its current FIAR effort. In 1995, we first designated DOD financial management as one of the federal government's programs at high risk of waste, fraud, abuse, or mismanagement because of long-standing and pervasive weaknesses in DOD's internal control, and it still carries that designation today. Over the past two decades, DOD has initiated efforts to strengthen its internal control, become auditable, and improve its financial management. However, as we stated in our most recent high-risk report, DOD has emphasized asserting audit readiness by set dates over assuring that processes, systems, and controls are effective, reliable, and sustainable.

DOD submitted its biennial strategic plan for the improvement of financial management (Biennial Plan) to Congress on October 26, 1998, as required by section 1008 of the National Defense Authorization Act (NDAA) for Fiscal Year 1998. This Biennial Plan was an important first step toward improving DOD's financial management operations. The Biennial Plan included, for the first time, a discussion of the importance of the programmatic functions of personnel, acquisition, property management, and inventory management to the department's ability to support consistent, accurate information flows to all information users. Although the Biennial Plan included a number of initiatives that would help improve DOD's financial management plans, we reported that it lacked critical elements necessary for producing sustainable financial management improvement over the long term.

Another initiative that preceded the FIAR effort was DOD's 2003 Financial Improvement Initiative, which was intended to fundamentally transform DOD's financial management operations and achieve clean financial statement audit opinions. While DOD's former Comptroller started the 2003 Financial Improvement Initiative with the goal of obtaining an unqualified audit opinion for fiscal year 2007 on DOD's department-wide financial statements, the initiative lacked a clearly defined, well-documented, and realistic plan to make the goal a reality. As we previously reported, although most of the DOD components, including the Army, Navy, and Air Force, had submitted improvement plans to the DOD Comptroller, DOD had not yet developed an integrated departmental strategy, key milestones, accountability mechanisms, or departmental cost estimates for achieving its fiscal year 2007 audit opinion goal.

In 2005, the DOD Comptroller established the DOD FIAR Directorate to develop, manage, and implement a strategic approach for addressing the department's financial management weaknesses and achieving auditability and to integrate those efforts with other improvement activities, such as the department's business system modernization efforts. DOD's first FIAR Plan was issued in 2005 and is updated semiannually through the FIAR Plan Status Reports, which also summarize the current status of DOD and its components in executing the FIAR Plan. DOD's FIAR strategy and approach have evolved since the issuance of the first FIAR Plan in 2005. The DOD Comptroller announced in August 2009 that in DOD's effort to improve its financial management information, priority would be given to improving those processes and controls that produce information on which DOD managers rely most heavily to run the agency.
Because budgetary information is widely and regularly used for management, the DOD Comptroller designated as one of DOD's highest priorities the improvement of its budgetary information and the processes underlying the Statement of Budgetary Resources (SBR). The United States Marine Corps was selected as the pilot military service for an audit of the SBR. The Secretary of Defense underscored the department's SBR priority with an October 2011 memorandum directing the Comptroller to provide a revised plan for achieving audit readiness of the SBR by September 30, 2014, with the aim of providing DOD managers with auditable General Fund information to track spending, identify waste, and improve DOD's business processes. In response to component difficulties in preparing for an audit of the SBR, the November 2013 revised FIAR Guidance and the November 2014 FIAR Plan Status Report included a revision to narrow the scope of initial audits to only current year budget activity and expenditures on a General Fund Schedule of Budgetary Activity.

The NDAA for Fiscal Year 2002 required DOD to minimize the resources spent to develop, compile, report, and audit financial statements that the Secretary of Defense assesses as expected to be unreliable. Additionally, the NDAA for Fiscal Year 2008 designated the Deputy Secretary of Defense as the department's Chief Management Officer (CMO), created a DCMO position, and designated the under secretary of each military department as the CMO for the respective department. The act also required the Secretary of Defense, acting through the CMO, to develop a strategic management plan that, among other things, would provide a detailed description of performance goals and measures for improving and evaluating the overall efficiency and effectiveness of DOD's business operations and actions under way to improve operations.

To establish statutory objectives for DOD to achieve financial statements that are validated as ready for audit by a certain date, the NDAA for Fiscal Year 2010 mandated that DOD develop and maintain a FIAR Plan that includes, among other things, the specific actions to be taken and costs associated with (1) correcting the financial management deficiencies that impair the department's ability to prepare timely, reliable, and complete financial management information and (2) ensuring that DOD's financial statements are validated as ready for audit by not later than September 30, 2017. In addition, the 2010 NDAA required that DOD (1) provide semiannual reports by no later than May 15 and November 15 on the status of the department's implementation of the FIAR Plan (which DOD provides as FIAR Plan Status Reports) to congressional defense committees, (2) develop standardized guidance for DOD components' financial improvement plans (FIP), and (3) define oversight roles and assign accountability for carrying out the FIAR Plan to appropriate officials and organizations. The NDAA for Fiscal Year 2013 amended the legal requirement to additionally require that the FIAR Plan Status Reports include (1) a description of the actions military departments have taken to achieve an auditable SBR for DOD by September 30, 2014, and (2) a determination by each military department's CMO on whether the military department is able to achieve an auditable SBR by September 30, 2014, without an unaffordable or unsustainable level of onetime fixes and manual work-arounds and without delaying the auditability of the financial statements.
In the event that the CMO of a military department determines that the military department would not be able to achieve an auditable SBR by that date, the CMO is required to explain why the military department cannot meet that date and provide an alternative target date. In the November 2014 FIAR Plan Status Report, DOD acknowledged that it did not meet the September 30, 2014, target date for achieving audit readiness of the SBR but stated that the three military departments asserted Schedule of Budgetary Activity audit readiness in the last quarter of fiscal year 2014. In January 2015, independent public accountant (IPA) firms began auditing the military departments' General Fund Schedules of Budgetary Activity for fiscal year 2015. For the first-year Schedule of Budgetary Activity audits, the scope is the schedules containing only current year appropriations and all related activity, such as obligations and outlays, against those appropriated funds approved on or after October 1, 2014. As a result, these first-year Schedule of Budgetary Activity audits exclude unexpended amounts, whether obligated or unobligated, carried over from prior years' funding as well as information on the status and use of such funding in subsequent years (e.g., obligations incurred and outlays). These amounts will remain unaudited. Over the ensuing years, as the unaudited portion of SBR balances and activity related to this funding declines, the audited portion is expected to increase. The NDAA for Fiscal Year 2014 mandates that upon the conclusion of fiscal year 2018, the Secretary of Defense shall ensure that a full audit is performed on DOD's fiscal year 2018 financial statements and submit the results of that audit to Congress not later than March 31, 2019.

DOD's FIAR Plan is DOD's strategic plan and management tool for guiding, monitoring, and reporting on the department's financial management improvement efforts. As such, the plan communicates incremental progress in addressing the department's financial management weaknesses and achieving financial statement auditability. The plan focuses on several goals:

1. achieve and sustain unqualified assurance on the effectiveness of internal controls through the implementation of sustained improvements in business processes and controls addressing the material weaknesses in internal control,

2. develop and implement financial management systems that support effective financial management, and

3. achieve and sustain financial statement audit readiness.

The department has envisioned achieving financial statement auditability in four waves of concerted improvement activities described in its FIAR Plan. The activities of these four waves are within groups of end-to-end business processes, which are further broken down into discrete units, called assessable units. DOD defines an assessable unit as any part of the financial statements, such as a line item or a class of assets, a class of transactions, or a process or a system, that helps produce the financial statements. The four waves are as follows:

Wave 1: Appropriations Received Audit

Wave 2: Schedule of Budgetary Activity/SBR Audit

Wave 3: Mission Critical Asset Existence and Completeness Audit

Wave 4: Full Financial Statements Audit

Waves 1 through 3 started concurrently, and an IPA firm validated Wave 1 as audit ready in August 2011.
In the November 2013 FIAR Plan Status Report, DOD defined the term audit ready as meaning that the department has strengthened internal controls and improved financial practices, processes, and systems so that there is reasonable confidence that the information can withstand an audit by an independent auditor. The department organized these audit readiness waves to leverage the interdependencies between budgetary and accounting information. FIAR priorities have required reporting entities to devote their resources and efforts toward completing audit readiness activities for Waves 1 through 3 before beginning work for Wave 4. DOD officials have stated that much of the audit readiness work required to complete Waves 1 through 3 affects Wave 4 requirements and objectives. For example, DOD has identified interdependencies between accounts included in Wave 2 (budgetary information) and Wave 4 (accounting information). DOD officials stated that the department has expanded its audit readiness priorities from budgetary data reported on the SBR to all financial transactions reported on the Balance Sheet and the Statements of Net Cost and Changes in Net Position. As stated in the May 2015 FIAR Plan Status Report, the focus of current FIAR activity includes (1) valuing and accurately reporting over $2.2 trillion in assets, (2) reporting over $2.4 trillion in liabilities, and (3) preparing full financial statements for audit.

The FIAR Guidance, first issued in May 2010 and periodically updated, provides the standard methodology by which the components are to implement the FIAR Plan. DOD components are required to establish assessable units for all processes, systems, or classes of assets that result in material transactions and balances in their financial statements to focus their improvement efforts. Components are required to prepare FIPs for each assessable unit under the FIAR Guidance. According to the FIAR Guidance, component audit readiness assertions for assessable units are to specify that (1) control activities are suitably designed and implemented, operating effectively, and sufficiently documented to provide reasonable assurance that applicable financial reporting objectives are achieved; (2) key supporting documents are readily available for review; and (3) account balances and transactions are accurately recorded. DOD has established a mandatory set of five standardized phases for achieving audit readiness that its components are required to apply to each assessable unit. As of the April 2015 FIAR Guidance, these five phases were Discovery, Corrective Action, Assertion/Examination, Validation, and Audit.

In the FIAR Guidance, service providers, such as DFAS and DLA, are defined as components that are responsible for their systems and data, processes and internal controls, and supporting documentation that affect a reporting entity's audit readiness. Service providers are to prepare documentation illustrating the financial reporting aspects of their operations through end-to-end business processes. Based on that documentation, the service providers are to identify and evaluate control activities and supporting documentation over those processes that affect the reporting entities' financial reporting objectives (i.e., the outcomes needed to achieve proper financial reporting, which serve as a point against which the effectiveness of internal controls over financial reporting can be evaluated).
In accordance with the FIAR Guidance, service providers' control activities and supporting documentation undergo examinations conducted in accordance with the Statement on Standards for Attestation Engagements (SSAE) No. 16, Reporting on Controls at a Service Organization. Service providers, such as DFAS, with three or more customers working to become audit ready must obtain SSAE No. 16 examinations on their internal controls over financial reporting. According to DOD officials, the results of these examinations can then be relied upon by all of the customers, reducing audit time and therefore saving money.

The updated FIAR Guidance, issued in April 2015, provides specific tasks, work products, and deliverables for achieving and validating full financial statement auditability. As described in this FIAR Guidance, DOD has expanded its FIAR priorities from budgetary information and mission-critical asset information to include two other priorities—proprietary accounting data/information for the Balance Sheet and Statements of Net Cost and Changes in Net Position and valuation of assets and liabilities. It also includes newly established milestones that the components must meet to give DOD the best opportunity to succeed in achieving auditable financial statements by fiscal year-end 2017.

The panel reviewed the following four areas and provided a total of 29 recommendations to DOD to resolve the issues it found in each area.

FIAR strategy and methodology (6 recommendations). While acknowledging that the department had a reasonable strategy and methodology for its FIAR effort, the panel stated that DOD's strategy needed to be more detailed and refined. For example, according to the panel, DOD had not yet fully defined all of the elements of the strategy necessary to achieve audit readiness on all financial statements in 2017. Moreover, the panel was concerned that certain DOD components may not be effectively implementing the FIAR strategy and methodology.

Challenges to achieving financial management reform and auditability (9 recommendations). The panel recognized that DOD's size and complexity contribute to the complicated and pervasive challenges to its financial management processes and related business operations and systems. In its report, the panel expressed concern about DOD's progress in addressing long-standing weaknesses in internal controls. Specifically, the panel referred to the numerous material weaknesses in internal control over financial reporting—cited by the DOD IG in its reports since the mid-1990s—that have affected DOD's ability to achieve a clean audit opinion. In addition, the panel noted that weaknesses in controls over the recording, accounting, and reporting of financial information jeopardize DOD's ability to safeguard taxpayer dollars. For example, these weaknesses can result in improper payments, Antideficiency Act (ADA) violations, and problem disbursements. The panel also noted organizational challenges faced by the logistics community, military components, and DFAS. Specifically, the majority of transactions recorded in accounting systems are initiated by military components, including military commands, installations, and bases, and within nonfinancial functional communities, such as acquisition and logistics. The panel stated that continued emphasis must be placed on fully engaging both the military components and functional communities in audit readiness efforts.

Financial management workforce (5 recommendations).
According to the panel, ensuring that the financial management workforce is adequately staffed, skilled, and well-trained is crucial to DOD's ability to improve financial management. For example, the panel was concerned that DOD had not yet performed a complete department-wide systematic competency assessment, which would include analysis of the types and ranges of abilities, knowledge bases, and skills of the present financial management workforce and those that will be needed in the future.

ERP system implementation efforts (9 recommendations). While acknowledging that DOD has taken positive steps, the panel expressed five concerns about the department's ERP system implementation efforts: (1) reported schedule delays and cost overruns as well as the reliability of ERP schedule and cost estimates; (2) issues with the requirements process, in that in some cases not enough requirements are identified and in some cases too many requirements are included in ERP systems; (3) ERP systems that may not provide the capabilities needed for achieving FIAR objectives; (4) poor execution of data conversion that could cause delays in the full implementation of the ERP systems; and (5) the numerous interfaces that exist between the legacy systems and the ERP systems and the problems associated with these interfaces that could compromise ERP functionality. In its report, the panel also recognized that because most financial information is maintained in computer systems, the reliability of the financial data in these systems depends on the effectiveness of information system controls over how those systems operate.

We determined that DOD's actions have met 6 of the panel's recommendations and partially met 23. In its May 2015 FIAR Plan Status Report, DOD stated that 9 recommendations were met and the remaining 20 were partially met. With regard to the 9 recommendations that DOD reported as met, we determined 3 to be partially met. Table 1 includes a list of the 29 panel recommendations, shown by the four areas reviewed by the panel, and our determination on the status of DOD's implementation of each recommendation, along with the status DOD reported in May 2015. See appendix I for detailed information on each of the 29 recommendations.

While DOD is making progress, it is important to note that implementation of the panel's recommendations may not include all of the actions that the department must take to achieve auditable financial statements. For example, as the DOD IG and the IPA firms perform examinations and audits, they may identify deficiencies in internal controls that were not previously known and therefore were not addressed by the panel's recommendations.

We agree with DOD's status determinations that its actions have met 6 of the panel's recommendations (see table 1, recommendations 2.5, 2.8, 3.2, 3.3, 3.4, and 3.5). Two of these relate to the area on challenges to achieving financial management reform and auditability. DOD has taken the actions recommended by the panel to reduce ADA violations by analyzing the causes of these violations; developing and implementing procedures to address them; and ensuring that key funds control personnel are adequately trained to prevent, detect, and report ADA violations (2.5). In addition, as recommended by the panel, DOD has developed forums for sharing lessons learned within commands of the military departments as well as external to these departments (2.8).
These information-sharing forums include newsletters, quarterly in-process reviews, and stakeholder meetings as well as the FIAR Governance Board, FIAR Committee, and FIAR Subcommittee. DOD has also met the requirements of four financial management workforce-related recommendations. Actions taken include using the expertise of certified public accountants with financial statement audit experience in its audit readiness efforts (3.2); developing and making financial management web-based courses available to both personnel in the financial management functional community and those outside of the financial management community, such as personnel in the logistics and acquisition functional communities (3.3); providing training programs for the department's current ERP systems that users are to complete prior to obtaining access to these systems (3.4); and submitting a proposal to both the Senate and House Armed Services Committees for a financial management exchange program between DOD and the private sector (3.5). Further details on the specific recommendations made and DOD's actions taken are discussed in appendix I.

We determined that DOD's actions partially met three of the panel's recommendations for which DOD had reported the status as met (see table 1, recommendations 1.6, 2.1, and 2.2). Descriptions of the panel's recommendations, the department's actions, our reasons for disagreeing with DOD on the status of these recommendations, and the additional actions needed to fully address the panel recommendations are discussed below.

The panel recommended that the FIAR Governance Board attest in each FIAR Plan Status Report to whether DOD is on track to achieve audit readiness in 2017 (1.6). DOD reported this recommendation as met because each FIAR Plan Status Report is coordinated among FIAR Governance Board members, providing them with the opportunity to (1) formally attest to its accuracy and completeness and (2) determine if their components are on track to achieve audit readiness in 2017. DOD officials also stated that each military department's CMO reports on audit readiness progress and challenges in signed statements in the FIAR Plan Status Report and indicates whether the military department is on track to achieve audit readiness by September 30, 2017. However, we consider this recommendation partially met because, to fully meet it, all FIAR Governance Board members need to attest for their individual components, and the DOD Comptroller for the department as a whole, whether they are on track to achieve audit readiness in 2017 in each FIAR Plan Status Report. In making these attestations, it is critical that the FIAR Governance Board consider the effect of known factors (e.g., the lack of integrated systems) and other risks, such as weaknesses and deficiencies that may be identified in examinations and audits, on the status of their respective components' financial statement audit readiness.

The panel recommended that DOD (1) include objective and measurable criteria regarding FIAR-related goals in its senior personnel performance plans and evaluations, (2) appropriately reward or hold officials accountable based on evaluated performance against the FIAR-related goals, and (3) document and track evaluated performances to measure progress over time (2.1).
On April 9, 2013, the Deputy Secretary of Defense issued a memorandum stating that most, if not all, of DOD's executives have a role in its effort to achieve audit readiness and that Senior Executive Service (SES) performance plans were to be updated with FIAR-related goals by April 30, 2013. We consider this recommendation partially met because, although DOD components have inserted FIAR-related requirements in SES performance plans, they have not yet determined how to reward executives based on evaluated performance for FIAR-related goals or assessed the effect on accomplishing FIAR activities by tracking evaluated performances over time.

To improve oversight of the FIAR effort, the panel recommended that DOD require each component senior executive committee to review its corresponding component's audit readiness assertion packages for compliance with the FIAR Guidance before submitting those packages to the Office of the Under Secretary of Defense (Comptroller) for validation (2.2). DOD considers the recommendation to be met because the FIAR Guidance states that management's audit readiness assertions must be signed by the individual representing the organization responsible for the subject matter (assessable unit) and that this level of review and approval is appropriate. DOD requires review of the assertion package in accordance with the FIAR Guidance, but we consider this recommendation partially met because the FIAR Guidance does not require senior executive committee reviews of audit readiness assertion packages as the panel recommended.

We agree with DOD that the department's actions partially met the remaining 20 panel recommendations, described below by the four areas of the panel's review, and continued actions are needed. However, in some cases, additional actions beyond those DOD has indicated are planned are needed to fully address the panel recommendations. See appendix I for details on all partially met recommendations, including actions under way or additional actions needed to meet these recommendations.

The panel made six recommendations in this area, including recommendation 1.6, discussed above, which DOD considers met and we consider partially met, and the remaining five, for which we agree with DOD's determination that they are partially met. Two of the five remaining partially met recommendations in this area specifically relate to DOD's implementation of its FIAR strategy and methodology (1.1 and 1.4). In addition, one recommendation relates to the department's process for consolidating the components' financial statements to prepare the department-wide consolidated financial statements (1.2). Another recommendation relates to the valuation of mission-critical assets (1.3). The other remaining partially met recommendation relates to risk management activities associated with implementing the FIAR Plan (1.5).

Although some actions have been taken, continued actions are needed to address all recommendations in this area, and additional actions are needed for some. For example, the panel recommended that the Comptroller's office, in consultation with DOD's DCMO, the secretaries of the military departments, and the heads of the defense agencies and field activities, incorporate risk mitigation plans to support meeting future interim milestones in the FIAR Plan (1.5). As we reported in August 2013, DOD officials acknowledged that there is not a department-wide written risk management policy for the FIAR effort.
DOD officials had stated earlier that a department-wide risk management plan would aid in assessing and integrating risk strategies across the department. However, instead of a department-wide risk management plan, DOD is using a three-pronged approach to address aspects of risk management, as stated in its May 2015 FIAR Plan Status Report. While DOD's three-pronged approach includes the use of "deal-breakers," DOD's critical path, and its centralized notices of findings and recommendations tracking database, a department-wide risk management plan, which DOD had previously planned to implement, is needed for developing and implementing consistent risk mitigation strategies across DOD and thus to support meeting future interim milestones as called for in the recommendation.

The panel made nine recommendations in this area, including two recommendations previously discussed that DOD considers met and we consider partially met (2.1 and 2.2) and two recommendations that we agree have been met (2.5 and 2.8). For the remaining five recommendations, we agree with DOD's determination that they are partially met. These five relate to addressing existing material weaknesses and those identified during the FIAR effort (2.3), reducing improper payments and problem disbursements (2.4 and 2.6), identifying and institutionalizing best practices (2.7), and ensuring that components and the service providers working with them have consistent milestones (2.9). Although DOD has taken some actions to address these recommendations, continued or additional actions are needed, as detailed in appendix I.

For example, the panel recommended that to reduce problem disbursements, the department address the underlying causes of problem disbursements in its efforts to develop and implement ERP systems (2.6). According to a DOD official, the implementation of modern financial systems, including ERP systems, has increased the level of problem disbursements because of data quality issues and interfaces with legacy systems that are still in use. The DOD official also stated that when the ERP systems are fully implemented and operating in their stable end states, these systems should provide an automated, integrated environment that will significantly reduce the number of problem disbursements. In its May 2015 FIAR Plan Status Report, DOD stated that analyses to assist in identifying root causes and implementing corrective actions, as called for in the recommendation, will be performed on a recurring basis until the department can retire all legacy systems and fully implement the ERP systems' capabilities.

The panel made five recommendations in this area, four of which have been met, as previously discussed, and one for which we agree with DOD's partially met assessment. This recommendation calls for DOD to perform a department-wide systematic skills assessment of its financial management workforce and that of all other functional areas performing financial management-related functions (3.1). The panel recognized the importance of having personnel within DOD's other functional communities who are skilled in performing financial management-related tasks, because these communities initiate and maintain much of the financial information critical to the results of DOD's operations. DOD completed its systematic competency assessment of certain occupations in its financial management workforce and has plans to assess the remaining financial management workforce in phases.
However, at the time of our review, DOD had not yet assessed other functional communities as called for by the panel recommendation.

The panel made nine recommendations in this area, and we agree with DOD that all of these recommendations have been partially met and none have been fully met. The first six recommendations relate to ERP schedule delays and cost overruns, issues with the requirements process, and the capabilities of the ERP systems to achieve FIAR objectives. For example, the panel recommended that DOD develop ERP-related schedule and cost estimates based on best practices for future ERP deployments (4.3). While the Army had made some improvements to the schedule and cost estimates for the Army's Global Combat Support System (GCSS-Army), we reported in 2014 that DOD had not fully implemented best practices in its schedule estimates, cost estimates, or both for the Defense Enterprise Accounting and Management System and GCSS-Army. The panel also made two recommendations to address its concerns about DOD's data conversion efforts and about the numerous interfaces between legacy and ERP systems that could compromise ERP functionality. The panel's final recommendation related to DOD's assessment of information systems controls testing needs for all ERP systems being developed and determination of whether appropriate workforce levels and corresponding skill sets exist within DOD's developmental and operational test communities.

As detailed in appendix I, the department needs to continue taking actions to address all of these recommendations and take additional actions for some. For example, as mentioned above, the panel recommended that, among other things, DOD evaluate lessons learned from previous data conversion efforts and incorporate these lessons into its ERP conversion plans as well as assess the merits of designating a senior official to be responsible for the coordination and managerial oversight of data conversion (4.7). Although DOD assessed the merits of designating a senior official responsible for data conversion and identified lessons learned from previous data conversion efforts, as DOD implements ERP systems, the department needs to continue to incorporate lessons learned into its current and future ERP data conversion plans, as recommended by the panel. Given that DOD officials have stated that these ERP systems are critical to DOD's ability to achieve audit readiness, fully implementing these recommendations will be necessary for achieving this goal.

The panel's report and its recommendations touch on some of the most critical challenges the department faces in achieving lasting financial management improvements and financial statement audit readiness. As previously stated, DOD has defined audit readiness to mean that the department has strengthened internal controls and improved financial practices, processes, and systems so that there is reasonable confidence that the information can withstand an audit by an independent auditor. DOD's actions to meet the panel's 29 recommendations, if effectively designed and implemented, will bring the department closer to achieving these important goals. However, it is important to note that implementation of the panel's recommendations may not include all of the actions that the department must take to achieve auditable financial statements.
As the DOD IG and the IPA firms perform examinations and audits, they may identify deficiencies in controls that were not previously known and as such were not addressed by the panel's recommendations. Nonetheless, without taking the actions needed to fully implement the panel recommendations, DOD is at increased risk of not achieving its financial management improvement and audit readiness goals.

DOD is monitoring its progress implementing the FIAR Plan against interim milestones. However, as the target date for validating DOD's audit readiness approaches, as we have previously stated, DOD has emphasized asserting audit readiness by set dates over assuring that processes, systems, and controls are effective, reliable, and sustainable. While time frames are important for measuring progress, DOD should not lose sight of the ultimate goal of implementing lasting financial management reform to ensure that it can routinely generate reliable, auditable financial information and other information critical to decision making and effective operations.

To help meet its financial management improvement and audit readiness goals, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to reconsider the status of the three panel recommendations that DOD classified as met but that we determined were partially met and take the necessary actions to reasonably assure that these recommendations have been met.

We provided a draft of this report to DOD for comment. In its written comments, reprinted in appendix II, DOD concurred with our recommendation to reconsider whether further actions are needed to meet panel recommendations 1.6, 2.1, and 2.2. DOD also described planned actions that it will take to address the recommendations. For panel recommendation 2.2, DOD stated that it will update the FIAR Guidance to require dual signatures from the senior executives in charge of both the financial and the relevant functional areas for the subject assertions. As DOD updates its FIAR Guidance, it should require each component senior executive committee to review its corresponding component's audit readiness assertion package, as called for by the panel recommendation, rather than focus on dual signatures. DOD stated that it will continue to provide status updates on actions planned and completed for the remaining 23 panel recommendations in its semiannual FIAR Plan Status Reports.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Under Secretary of Defense (Comptroller). In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.

This appendix contains our assessment of the Department of Defense's (DOD) progress in addressing the House Armed Services Committee (HASC) Panel on Defense Financial Management and Auditability Reform's (the panel) recommendations, organized by the four areas the panel reviewed.
Each section of the appendix covers the recommendations made for the four areas and includes (1) the panel recommendations, (2) the statuses reported by GAO and DOD, (3) key background information, (4) DOD actions taken to address the recommendations, and (5) our assessment of the status of the recommendations. The status of each recommendation is based on our assessment of information received prior to or as of May 2015 and DOD's reported status as of the May 2015 Financial Improvement and Audit Readiness (FIAR) Plan Status Report.

Background: The panel reported that DOD had a reasonable strategy and methodology for its FIAR effort, although its strategy needed to be more detailed and refined. While the panel noted that DOD's December 2011 FIAR Guidance detailed the strategy and methodology for completing Wave 4, the components were instructed to continue their focus on implementing the requirements of Waves 1, 2, and 3. Additionally, the panel reported that the department, in its FIAR Guidance, instructed its components not to address Wave 4 requirements in their financial improvement plans (FIP) until the FIAR activities associated with Waves 1, 2, and 3 were complete. The panel stated its concern that this approach may affect the department's ability to achieve financial statement auditability in 2017 because of DOD's lack of analysis of the interdependencies among Waves 1, 2, 3, and 4. For example, the panel stated that by testing certain information on the Statement of Budgetary Resources (SBR) in Wave 2, assurance can be obtained on the reliability of certain data in the Balance Sheet in Wave 4.

DOD Actions: DOD first identified interdependencies in its December 2011 FIAR Guidance that can be leveraged to accelerate progress in Wave 4 and revised them in subsequent versions of the FIAR Guidance as follows:

Delivered Orders, reported on the SBR (covered in Wave 2), equates to a portion of Accounts Payable reported on the Balance Sheet (Wave 4).

Spending Authority from Offsetting Collections, reported on the SBR (covered in Wave 2), includes some of the amounts reported in Accounts Receivable—Intragovernmental on the Balance Sheet (Wave 4).

Unobligated Balances and Unpaid Obligations, reported on the SBR (covered in Wave 2), correlate to Fund Balance with Treasury, reported on the Balance Sheet (Wave 4).

Obligations Incurred, reported on the SBR (covered in Wave 2), equates to a substantial portion of Gross Costs reported on the Statement of Net Cost (Wave 4).

In its May 2015 FIAR Plan Status Report, DOD considered this recommendation partially met and stated that its analysis of interdependencies will continue until the department achieves financial statement auditability. Also, in its April 2015 FIAR Guidance, DOD stated that partly because of interrelationships between financial statements, the department can leverage audit readiness efforts from previous waves in succeeding waves, much like the above interdependencies. For example, DOD officials stated that through Schedule of Budgetary Activity/SBR (Wave 2) audit readiness efforts, some portions of line items on other financial statements have been addressed.

GAO Status: We consider this recommendation partially met because as DOD continues its audit readiness activities, the department will likely identify additional interdependencies between accounts on its SBR, Balance Sheet, and Statement of Net Cost.
For example, Wave 4 audit readiness activities include valuation of assets, liabilities, revenues, and costs and therefore will require DOD to determine the dollar values to be assigned to line items such as environmental liabilities; general property, plant, and equipment; inventory; and operating materials and supplies. These audit readiness activities will involve interdependencies between the Balance Sheet and the Statement of Net Cost.

Background: The panel stated that the FIAR Plan did not address the process for ensuring that the DOD components' financial information will be properly consolidated into DOD's department-wide financial statements. The panel's concern was that the lack of an articulated process for addressing financial statement consolidation may affect DOD's ability to achieve financial statement auditability in 2017. In its recommendation, the panel suggested, as one option, that DOD create a DOD financial reporting element or wave for consolidation of DOD's financial statements and report on this element's audit readiness progress in the FIAR Plan Status Report.

DOD Actions: DOD reports on the status of Defense Finance and Accounting Service (DFAS) financial reporting, defined as the process by which DFAS organizes financial data and produces financial statements, in the FIAR Plan Status Reports. DFAS, one of DOD's largest service providers, performs accounting and finance services for the department, including preparing the financial statements for its components as well as the consolidated department-wide financial statements. According to DOD officials, the department will continue to use the current DFAS reporting system and processes to produce the department-wide consolidated financial statements. In accordance with the FIAR Guidance, DFAS's financial reporting system and processes have undergone a Statement on Standards for Attestation Engagements (SSAE) No. 16 examination by an independent auditor. As DOD officials stated in the May 2015 FIAR Plan Status Report, the department is addressing the SSAE No. 16 reported findings and implementing corrective actions. However, DOD officials stated in the report that the department will not know whether outstanding issues related to financial statement compilation have been resolved until an audit of DOD's department-wide financial statements has been conducted.

GAO Status: DOD has established financial reporting assessable units for the military departments and service providers. However, we consider this recommendation partially met because DOD needs to take additional actions to fully develop a financial reporting wave or strategy for consolidating individual component financial statements into department-wide financial statements. In addition, DOD's reporting system and processes do not allow for the elimination of intragovernmental transactions, which occur between federal entities and must be eliminated to prevent duplicate reporting of information. DFAS, in its SSAE No. 16 financial reporting assertion package, stated that DOD's current accounting systems generally do not capture trading partner information at the level needed to match and eliminate these intragovernmental transactions between trading partners. DOD officials stated that they have actions under way to address the issues affecting the proper elimination of intragovernmental transactions.
Background: When the panel made its recommendation, DOD had decided to wait to report the historical costs of real property, general equipment, inventory, and operating materials and supplies on its Balance Sheet until (1) initial FIAR priorities had been met and (2) the enterprise resource planning (ERP) systems under development were capable of recording and reporting transaction data. DOD's decision was based on its business case, in which DOD officials stated that, with the exception of operating materials and supplies, historical acquisition cost information was used exclusively for financial reporting and not for managerial decision making. DOD officials, in the business case, also recognized that the ongoing implementation of ERP systems would enhance the recording of auditable historical costs for future acquisitions. Based on its business case, DOD also decided that it would expense the costs of military equipment and request that the Federal Accounting Standards Advisory Board change the federal accounting standards to allow this treatment, because DOD viewed capturing, recording, and reporting auditable costs for military equipment as extremely challenging and costly. In its report, the panel noted that federal accounting standards allow for the use of other methods to provide reasonable estimates for the costs of these assets, which led to its recommendation that the department reevaluate its position on accepting historical costs.

DOD Actions: DOD has reevaluated its position from that described in its business case, deciding to include historical cost valuations of general property, plant, and equipment, including general equipment as well as real property, and internal use software on its Balance Sheet without waiting until initial FIAR priorities are met and the implementation of the ERP systems is complete. In September 2013, DOD described its overall strategy for valuing assets (1) acquired prior to October 1, 2013, and (2) accepted by DOD and placed into service on or after October 1, 2013. In February 2014, DOD issued its general equipment valuation estimation methodologies. According to its May 2015 FIAR Plan Status Report, DOD has established five working groups that are addressing impediments to audit readiness, including the valuation of historical assets. DOD reported the status of this recommendation as partially met and stated that the valuation methods for various asset categories are being developed and that the department is working closely with the Federal Accounting Standards Advisory Board to leverage asset valuation methodologies allowed under Statement of Federal Financial Accounting Standards No. 35 to establish a historical cost baseline.

GAO Status: We consider this recommendation partially met because DOD has developed a strategy and is basing its asset valuation for the Balance Sheet on historical cost, but the department faces significant challenges in continuing to implement its strategy and develop auditable valuations for these assets. For example, DOD's components must first determine the existence of these assets.
In addition, DOD has cited the following challenges to asset valuation:

numerous configurations of highly complex military equipment and weapons systems that have been modified, upgraded, or overhauled;

renovated or improved real property assets that have changed in value since originally placed in service;

large inventories of missiles, ammunition and munitions, spare engines, equipment parts, supplies, and so forth;

enormous quantities of deployed general equipment, inventory, and operating materials and supplies located in Afghanistan and worldwide; and

identification and valuation of internal use software and DOD property held by contractors.

Background: The panel's concern was that without properly implementing the FIAR methodology, DOD components may be prematurely asserting audit readiness. In its report, the panel refers to our September 2011 report in which we stated that the Navy and Air Force did not adequately develop and implement their respective FIPs for civilian pay and military equipment in accordance with the FIAR Guidance. Our review of the FIPs found that the Navy and Air Force did not conduct sufficient control and substantive testing and reached conclusions that were not supported by the testing results, did not complete reconciliations of the population of transactions, and did not fully test information systems controls. Also, neither the Navy nor the Air Force had fully developed and implemented corrective action plans (CAPs) to address deficiencies identified during implementation of the FIPs. As a result of these deficiencies, we reported that these FIPs did not provide sufficient support for the components' conclusions that the assessable units were ready to be audited.

DOD Actions: With regard to part 1 of the recommendation, DOD's FIAR Directorate, according to DOD officials, worked with DOD's components initially to overcome FIAR Plan implementation difficulties by reviewing and verifying that the FIPs were consistent with the FIAR strategy and methodology. Based on this work, the FIAR Directorate identified causes of implementation difficulties, or what DOD refers to as deal-breakers—weaknesses that have prevented DOD components from demonstrating audit readiness or succeeding in audits. With regard to part 2 of the recommendation, by combining the Assertion and Evaluation phases into one phase, DOD's March 2013 FIAR Guidance accelerated examination of audit readiness assertion packages by the DOD Inspector General (IG) or independent public accountant (IPA) firms to identify needed corrective actions based on auditor-identified notices of findings and recommendations. In addition to examinations of audit readiness assertion packages, audits of the military departments' General Fund Schedules of Budgetary Activity for fiscal year 2015 began in January 2015. The FIAR Directorate has established a tracking database to monitor the implementation of corrective actions for findings and recommendations resulting from examinations and audits. Finally, with regard to part 3 of the recommendation, DOD does not have a formal communications plan for sharing information on lessons learned, but the FIAR Directorate promotes this type of sharing. Lessons learned are shared at FIAR Committee, FIAR Subcommittee, and working group meetings, as well as in other forums such as in-process review meetings and newsletters.
According to the May 2015 FIAR Plan Status Report, the tracking database for notices of findings and recommendations will help in sharing lessons learned, as deficiencies at one component are likely to exist at another component.

GAO Status: We consider this recommendation partially met because DOD components have not yet implemented all of the FIAR audit readiness activities needed to achieve the department's goal of department-wide financial statement auditability. Furthermore, the expansion of the FIAR priorities to include proprietary accounting data/information as well as valuation will likely require additional corrective actions to address these areas. As recommended by the panel, the components will need to continue developing CAPs to address any identified weaknesses or deficiencies. Finally, DOD needs to develop a formal communications plan, and components will need to communicate any lessons learned, as the panel recommended.

Background: The panel raised concerns about DOD missing some of the interim milestones included in the May 2011 FIAR Plan Status Report. While recognizing that slippages of interim milestones will not necessarily compromise meeting audit readiness objectives, the panel stated that DOD must incorporate risk mitigation plans to remediate missed interim milestones and apply lessons learned toward achieving later milestones on schedule.

DOD Actions: The department, in its May 2015 FIAR Plan Status Report, stated that it has a three-pronged approach for addressing risk management in which DOD has (1) identified audit readiness deal-breakers by reviewing past audits, (2) defined the critical path to achieving financial statement auditability and included related tasks and milestones in the April 2015 FIAR Guidance, and (3) reinforced the importance of internal controls over areas of significant risk by including a new chapter on internal controls in the FIAR Guidance and implementing a centralized tracking database to monitor corrective actions developed in response to notices of findings and recommendations generated from examinations and audits.

GAO Status: We consider this recommendation partially met because DOD needs to take additional actions to develop a department-wide risk management plan. In August 2013, we recommended that DOD design and implement department-level policies and detailed procedures for FIAR Plan risk management, including the five guiding principles for effective risk management. DOD officials acknowledged that DOD did not have a written risk management policy for its FIAR effort and originally planned to develop one but have recently stated that DOD's three-pronged approach will address risk management. According to the May 2015 FIAR Plan Status Report, DOD's three-pronged approach addresses aspects of risk management, including the use of deal-breakers, DOD's critical path, and its notices of findings and recommendations tracking database. However, the lack of a department-wide risk management plan to assess risks specifically related to meeting future interim milestones will hinder developing and implementing consistent risk mitigation strategies across DOD.

Background: The FIAR Governance Board is charged with providing vision, leadership, oversight, and accountability for DOD's FIAR effort.
Members of the FIAR Governance Board include the Comptroller/Chief Financial Officer; the Deputy Chief Financial Officer; the Deputy Chief Management Officer; the military department assistant secretaries for financial management/comptroller and deputy chief management officers (DCMO); the DFAS Director; and the Defense Logistics Agency (DLA) Deputy Director.

DOD Actions: DOD officials stated that each FIAR Plan Status Report is coordinated among FIAR Governance Board members, providing them with the opportunity to (1) formally attest to its accuracy and completeness and (2) determine if their components are on track to achieve audit readiness in 2017. In addition, DOD officials stated that each military department's chief management officer reports on audit readiness progress and challenges in signed statements in the FIAR Plan Status Report and also indicates whether his or her military department is on track to achieve audit readiness by September 30, 2017.

GAO Status: The FIAR Plan Status Reports have included signed statements by the military department chief management officers as well as a signed statement from DOD's Under Secretary of Defense (Comptroller)/Chief Financial Officer. However, we consider this recommendation partially met because not all FIAR Governance Board members (e.g., the DFAS Director and the DLA Deputy Director) provide signed statements and attest to audit readiness in the FIAR Plan Status Reports, and thus additional actions are needed. In addition, the signed statements provided in the May 2015 FIAR Plan Status Report did not explicitly state whether DOD was on track to achieve audit readiness by September 30, 2017, as the panel recommended. Instead, the signed statements expressed commitment to continue making progress toward financial statement auditability by September 30, 2017, but did not clearly state whether DOD or the military departments are on target to meet that date. Moreover, because the panel recommendation is addressed to the department, the FIAR Governance Board members need to attest for their individual components, and the DOD Comptroller for the department as a whole, whether they are on track to achieve audit readiness in 2017 in each FIAR Plan Status Report to fully meet this recommendation. In making these attestations, it is critical that the FIAR Governance Board consider the effect of known factors (e.g., the lack of integrated systems) and other risks, such as weaknesses and deficiencies identified in examinations and audits, on the status of its components' financial statement audit readiness.

Background: The panel stated in its report that effective leadership and oversight are instrumental to bringing forth and sustaining substantial financial management improvement. The panel also stated that financial management reform is a long-term undertaking that requires the involvement of DOD leadership within, and outside of, financial management operations. In addition, according to the panel, leadership should extend from the top officials, including the offices that make up the Office of the Secretary of Defense, the military departments' chief management officers, and the military departments' assistant secretaries (financial management and comptroller), to senior officials in other functional areas, such as logistics and acquisition.
The panel recognized that DOD requires that senior executive performance appraisals include financial audit goals among their evaluation criteria and that this requirement extends to appraisals of senior executives in functional areas having a financial impact. The panel considered this requirement a positive step but added that the effectiveness of this requirement depends on whether the evaluation criteria can be objectively measured, evaluated performances are appropriately rewarded, and senior officials are held accountable. DOD Actions: On April 9, 2013, the Deputy Secretary of Defense issued a memorandum stating that most, if not all, of DOD's executives have a role in its effort to achieve audit readiness. The memorandum described three categories of Senior Executive Service (SES) members whose performance plans were to be updated with objective and measurable criteria by April 30, 2013. SES members in category 1 are responsible for managing resources or DOD business processes. The performance plans for these SES members are to include an agency-specific performance requirement for business acumen. The second category consists of SES members who have a direct role in their organizations' FIAR efforts. This category includes headquarters-level financial managers with overall FIAR responsibility as well as SES members in financial management and other functional communities (such as acquisition, logistics, and personnel) who directly affect their organizations' financial records and FIAR efforts. These SES members should include in their performance plans specific audit-related goals, tied to their organizations' FIPs, as an agency-specific performance requirement for being results driven. Moreover, to the extent that these SES members manage resources or DOD business processes, their performance plans should also include the business acumen element described in the first category. In assessing performance for this element, rating officials are instructed to consider both the individual's and the organization's results. The third category consists of SES members who do not manage resources or DOD processes, who do not have a direct role in their organizations' audit readiness efforts, or both. According to the April 9, 2013, memorandum, these SES members are to request a waiver from having FIAR goals included in their executive performance plans. However, according to an April 19, 2013, memorandum, instances of SES members falling into the third category should be rare. With regard to the second part of this recommendation, DOD officials told us in December 2014 that they were considering using bonuses to reward performance on the FIAR-related goals but had not determined how to reward executives with bonuses based on evaluated performance against DOD-wide goals, such as the FIAR-related goals. With regard to the third part of the panel's recommendation, DOD officials stated that when the FIAR-related goals were first included in the SES performance plans, there was a perceived change in that leaders focused more attention on FIAR goals and efforts. However, DOD officials stated that it is difficult to determine the causality between the inclusion of the FIAR-related goals and the achievement of these goals.
GAO Status: We consider this recommendation partially met because, although DOD components have inserted FIAR-related requirements in SES performance plans, additional actions are needed to determine how to reward executives based on evaluated performance for FIAR-related goals and to assess the effect of these requirements on accomplishing FIAR activities by tracking evaluated performances over several fiscal years. Without evaluating SES member performance relating to the FIAR goals, DOD has only anecdotal evidence on the effect of including these goals. Background: The panel based this recommendation on the fact that while DOD components had senior executive committees to oversee financial improvement efforts, their oversight responsibilities were not being carried out effectively, as demonstrated by the ineffective implementation of FIPs and insufficient evidence to support conclusions of audit readiness. In its report, the panel stated that effective oversight mechanisms must be implemented to ensure that DOD's components are complying with the FIAR Guidance. DOD Actions: As shown in the May 2015 FIAR Plan Status Report, DOD officials determined this recommendation to be met because the FIAR Guidance requires that audit readiness management assertions be signed by the individual representing the organization responsible for the subject matter and that this level of review and approval is appropriate. By signing the audit readiness assertion letter for an assessable unit or line item, the responsible individual is asserting that the component has followed the FIAR Guidance to (1) document and evaluate internal controls and (2) define, assemble, and retain key supporting documentation to support transactions. In addition, the individual is also asserting that the supporting documentation can be retrieved and provided within a reasonable period of time to an IPA firm conducting an examination or subsequent audit. According to DOD officials, the department adopted an alternative action in response to the panel's recommendation—the assignment of a functional lead to review the audit assertion package. While the functional leads do not sign off on the assertion packages, these leads do participate in the review process for the package. According to DOD officials, this alternative action demonstrates accountability and has addressed the intent of the panel's recommendation. GAO Status: We consider this recommendation partially met because DOD's FIAR Guidance does not require senior executive committee reviews of audit assertion packages for compliance with the FIAR Guidance, as the panel recommended. DOD has taken the alternative actions of having functional leads review audit assertion packages and having individuals responsible for the subject matter review and approve the assertion packages. However, recent GAO reports have identified instances in which the Army and DFAS did not complete tasks required by the FIAR Guidance, which could result in premature assertions of audit readiness. For example, we found that DFAS had asserted audit readiness for its contract pay assessable unit even though a deal-breaker was present. As we reported in June 2014, DFAS had not established a general ledger reconciliation process at the time it implemented its contract pay FIP. Therefore, we concluded that as a result of the lack of a general ledger reconciliation process, additional errors may exist in the recorded transaction activity and balances for DFAS contract pay.
If DOD were to take additional actions and require senior executive committee reviews of the assertion packages, as recommended by the panel, then the committee could make a collective decision and take responsibility for moving forward with an assertion, even though a deal-breaker exists, rather than an individual or functional lead making that decision alone. Similarly, the panel cited insufficient support for conclusions of audit readiness in its report, reinforcing the need for higher levels of review of the audit readiness assertion packages. Background: In its report, the panel stated that since the mid-1990s, the DOD IG has reported numerous material weaknesses in internal control that affect the department's ability to obtain a clean opinion on its financial statements. In its audit report for fiscal year 2014, the DOD IG reported previously identified material weaknesses in the following areas: (1) Financial Management Systems; (2) Fund Balance with Treasury; (3) Accounts Receivable; (4) Inventory; (5) Operating Materials and Supplies; (6) General Property, Plant and Equipment; (7) Government Property in Possession of Contractors; (8) Accounts Payable; (9) Environmental Liabilities; (10) Statement of Net Cost; (11) Intragovernmental Eliminations; (12) Other Accounting Entries; and (13) Reconciliation of Net Cost of Operations to Budget. Many of these material weaknesses are so serious that they contribute to GAO being unable to render an opinion on the U.S. government's consolidated financial statements. DOD Actions: According to DOD officials, components that undergo examinations or audits are required to develop and implement CAPs. The 13 material weaknesses identified by the DOD IG, as well as those self-reported by the department, will be addressed by specific reporting entity corrective actions, DOD-wide corrective actions, or DOD-wide policy initiatives that are carried out at the reporting entity level. Specific reporting entity-level corrective action timelines are included in the May 2015 FIAR Plan Status Report and the April 2015 FIAR Guidance. In addition, details on DOD-wide solutions and policy initiatives are included in the May 2015 FIAR Plan Status Report. With regard to material weaknesses identified through the FIAR effort, the FIAR Directorate, using its notices of findings and recommendations tracking database, is monitoring the implementation of corrective actions. GAO Status: DOD has taken actions to develop CAPs at the component level. However, we consider this recommendation partially met because DOD needs to take additional actions to develop comprehensive CAPs for addressing material weaknesses at the department level. According to the United States Chief Financial Officers Council's Implementation Guide for OMB Circular A-123, a comprehensive CAP lists the detailed actions that must be taken to resolve the weakness or deficiency, including (1) a summary description of the deficiency and the year it was first identified; (2) the targeted corrective action date (the date for management follow-up); (3) the agency official responsible for monitoring progress; (4) the indicators, statistics, or metrics used to measure progress in resolving the weakness or deficiency; and (5) the milestone or other characteristic used to report how resolution activities are progressing.
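To illustrate how these five elements fit together, the following minimal sketch, in Python, models a single comprehensive CAP record; the field names, types, and example values are hypothetical and are not drawn from any DOD or OMB system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CorrectiveActionPlan:
    """One CAP record carrying the five elements listed above."""
    deficiency_summary: str       # (1) description of the weakness or deficiency
    year_first_identified: int    # (1) year the deficiency was first identified
    target_correction_date: date  # (2) date for management follow-up
    responsible_official: str     # (3) official monitoring progress
    progress_metrics: list = field(default_factory=list)  # (4) indicators, statistics, or metrics
    milestones: list = field(default_factory=list)        # (5) milestones for tracking resolution

# A hypothetical entry for a long-standing material weakness
example_cap = CorrectiveActionPlan(
    deficiency_summary="Fund Balance with Treasury not reconciled monthly",
    year_first_identified=2011,
    target_correction_date=date(2016, 9, 30),
    responsible_official="Component Deputy Chief Financial Officer",
    progress_metrics=["percentage of months reconciled", "unmatched dollar value"],
    milestones=["reconciliation tool deployed", "reconciliation backlog cleared"],
)
```

A record like this supports department-level oversight only if every component populates all five elements and the records are rolled up and tracked centrally.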
Given the critical nature of the long-standing material weaknesses in internal control reported for fiscal year 2014, their resolution will require a concerted effort by leadership at the component level to implement corrective actions, with oversight from the Office of the Under Secretary of Defense (Comptroller) to monitor department-wide initiatives and the implementation of component-level CAPs. To meet this recommendation, DOD will also need to monitor the development and implementation of corrective actions for weaknesses identified from FIAR-related activities, such as audits and examinations, to ensure that comprehensive CAPs are developed and these weaknesses are fully addressed at both the component and department levels. Background: The panel based this recommendation on a July 2009 GAO report in which we found that DOD may not have been capturing the full extent of its improper payments. Specifically, we found in 2009 that DOD had not conducted risk assessments for all of its payment activities and that $322 billion in outlays had been excluded from the amounts assessed. In addition, at that time we stated that DOD had not estimated improper payments for commercial pay, its largest payment activity, in accordance with improper payment requirements. The then-DOD Comptroller testified before the panel that DOD was taking steps, based on the Improper Payments Elimination and Recovery Act of 2010, to initiate a statistical sampling program for commercial payments. In its report, the panel acknowledged DOD's efforts to initiate statistical sampling for commercial payments but recommended that DOD take further action to address this issue. DOD Actions: In its May 2015 FIAR Plan Status Report, DOD officials stated that the department continues to review its sampling methodologies for all payment types to ensure that it can properly estimate improper payment dollars. For example, DFAS reevaluated and enhanced its statistical sampling methodology for DFAS commercial payments for fiscal year 2014 improper payment reporting. Also, for fiscal year 2014, the Defense Health Agency modified its TRICARE improper payment calculation formula in response to our prior findings. DOD stated that the department will be reviewing the sampling methodologies for the other seven programs for which improper payment estimates are reported in its agency financial report. However, DOD stated in its fiscal year 2014 agency financial report that it cannot demonstrate that all payments subject to improper payment requirements were included in the population of payments to review. GAO Status: DOD has taken some steps to improve its improper payment sampling methodologies for some programs, but we consider this recommendation partially met because DOD still needs to reevaluate its methodologies for identifying and reporting improper payments for other types of programs. In addition, DOD cannot demonstrate that the populations from which its samples are selected are complete, accurate, and valid. As we stated in May 2013, the foundation of reliable statistical sampling estimates is a complete, accurate, and valid population from which to sample. Furthermore, we stated that DOD did not maintain the supporting documentation needed to substantiate reported improper payment estimates.
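To show why population completeness matters, the following minimal sketch, in Python, illustrates the basic statistical sampling logic behind an improper payment estimate; the figures and sample design (a simple random sample with a normal-approximation interval, ignoring the finite population correction) are invented for illustration and do not represent DFAS's actual methodology. If payments are missing from the population, the projected total is understated no matter how sound the sampling is.

```python
import math
import random

random.seed(1)

# Toy population: 10,000 payments, roughly 2 percent of which are improper.
# Each entry is the improper dollar amount found on review (0.0 if proper).
# In practice, improper amounts are known only for the payments sampled.
population = [1000.0 if random.random() < 0.02 else 0.0 for _ in range(10_000)]

n = 400
sample = random.sample(population, n)

# Project the sample mean of improper dollars to the whole population.
mean_improper = sum(sample) / n
point_estimate = mean_improper * len(population)

# Rough 95 percent confidence interval around the projected total.
variance = sum((x - mean_improper) ** 2 for x in sample) / (n - 1)
margin = 1.96 * len(population) * math.sqrt(variance / n)

print(f"estimated improper payments: ${point_estimate:,.0f} +/- ${margin:,.0f}")
```

Background: In its report, the panel noted that DOD's poor internal controls put it at risk of violating the Antideficiency Act (ADA), referring to DOD IG and GAO testimonies.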
The Deputy IG for Auditing testified that DOD's control environment weaknesses impaired its ability to determine the amount of funds that it had available to spend, and as a result, DOD was at risk of overobligating and overexpending its appropriations and violating the ADA. He added that a lack of adequate controls and training contributed to potential ADA violations. In 2011, we testified that because the ADA prohibits, and effective funds control should prevent, overobligations and overexpenditures of public funds, the number and dollar amount of ADA violations were indicators of the status of DOD's funds control. In our testimony, we stated that DOD had issued and periodically updated policies that addressed responsibilities for preventing and identifying ADA violations. In addition, we testified that DOD's guidance described frequent causes of ADA violations within the department and explained the actions necessary to avoid them, including (1) emphasizing management and supervisory duties, (2) training key funds control personnel, and (3) maintaining effective systems and procedures. DOD Actions: In its Financial Management Regulation (FMR), DOD includes the causes of ADA violations it has identified as well as the department's policies and procedures to prevent these violations. With respect to ensuring that key funds control personnel are adequately trained, DOD officials, in a memorandum dated December 13, 2011, stated that beginning October 1, 2012, the frequency of training for ADA investigators would change from 5-year to 3-year intervals. According to the May 2015 revised chapter of DOD's FMR on administrative control of funds, the components are required to submit a memorandum on their annual evaluation of overall administrative control processes and ADA violations. This memorandum is to include a statement that provides the number of key funds control personnel identified and trained as prescribed in the FMR. In the May 2015 FIAR Plan Status Report, DOD officials stated that the military departments and components are required to review and evaluate training records to ensure that personnel certifying and handling funds have financial management and fiscal law training. In addition to increasing the frequency of training for ADA investigators, DOD has implemented its Financial Management Certification Program, which includes training on fiscal law covering funds control and ADA requirements. GAO Status: We believe DOD's actions have met the requirements of the recommendation. Background: Problem disbursements include both unmatched disbursements and negative unliquidated obligations. Unmatched disbursements are disbursements that have been paid by an accounting office but have not been matched to the correct obligation records. A negative unliquidated obligation is a disbursement transaction that has been matched to an obligation, but the total recorded disbursement exceeds the recorded obligation. In its report, the panel stated that problem disbursements impede DOD's performance of proper Fund Balance with Treasury reconciliations, which affects DOD's ability to report reliable information on its financial statements. DOD Actions: DOD's Deputy Chief Financial Officer testified that problem disbursements are caused by errors or deficiencies that have occurred during the procure-to-pay process. According to DOD officials, problem disbursements can occur when the disbursing function is separated from the entitlement and accounting processes.
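To make the two categories of problem disbursements concrete, the following minimal sketch, in Python, matches toy payment records against recorded obligations and flags unmatched disbursements and negative unliquidated obligations; the record layouts and document numbers are hypothetical and are not drawn from any DOD system.

```python
# Recorded obligations: obligation document number -> obligated amount.
obligations = {
    "OBL-001": 50_000.00,
    "OBL-002": 12_500.00,
}

# Disbursements as (document number cited on the payment, amount paid).
disbursements = [
    ("OBL-001", 48_000.00),  # matches and stays within the obligation
    ("OBL-002", 13_100.00),  # matches but exceeds the obligation
    ("OBL-999", 4_250.00),   # cites no recorded obligation
]

unmatched, nulos = [], []
disbursed_totals = {}
for doc, amount in disbursements:
    if doc not in obligations:
        unmatched.append((doc, amount))  # unmatched disbursement
        continue
    disbursed_totals[doc] = disbursed_totals.get(doc, 0.0) + amount
    excess = disbursed_totals[doc] - obligations[doc]
    if excess > 0:
        nulos.append((doc, excess))      # negative unliquidated obligation

print("unmatched disbursements:", unmatched)
print("negative unliquidated obligations (excess over obligation):", nulos)
```

Every record landing in either list is a transaction that cannot be cleanly tied to an obligation, which is why such items complicate Fund Balance with Treasury reconciliations.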
DOD officials stated that the implementation of modern financial systems, including ERP systems, has temporarily increased the level of problem disbursements because of data quality issues and the need for interfaces with legacy systems that are still in use. However, the officials stated that by the time the ERP systems mature and their operations become stable, the data quality issues should be resolved. Moreover, DOD officials stated that these ERP systems should provide an automated, integrated environment that will significantly reduce the number of problem disbursements. DOD officials also stated that the auditors' notices of findings and recommendations from examinations and audits should help in resolving issues causing problem disbursements. In its May 2015 FIAR Plan Status Report, DOD officials stated that analyses will be performed on a recurring basis until the department is able to retire all legacy systems and fully implement ERP capabilities. DOD officials added that these analyses and reconciliations should assist in identifying root causes of problem disbursements and implementing corrective actions. GAO Status: We consider this recommendation partially met because DOD needs to continue to address the underlying causes of problem disbursements as it develops and implements ERP systems. Background: In its report, the panel stated that it was encouraged by the testimony of DOD logistics community representatives about their role in efforts to improve financial management and achieve audit readiness. However, the panel stated that engaging the functional communities in the audit readiness effort must continue to be prioritized. The panel, in its report, referred to the Air Force's testimony in which the Director of Logistics stated that one of the biggest challenges is ensuring that the logistics and acquisition functional communities understand their role in achieving audit readiness. DOD Actions: DOD officials stated, in the May 2015 FIAR Plan Status Report, that the Office of the Under Secretary of Defense (Comptroller), through the FIAR governance process, is aware of solutions and best practices identified and implemented by DOD's components, including those identified by the functional communities. As cited in the report, some of the best practices shared among the components, including the functional leads serving on the FIAR effort, are as follows:

A solution to be used by the military departments for valuing existing real property assets (deflated plant replacement value).

Army use of an Air Force environmental liability cost estimation tool.

Army and Air Force use of an audit response tool developed by the Navy.

Navy use of a Fund Balance with Treasury reconciliation tool developed by the Air Force.

GAO Status: We consider this recommendation partially met because the FIAR Directorate needs to take additional actions to validate whether identified best practices have been institutionalized department-wide. In addition, DOD did not comprehensively document lessons learned and best practices, and therefore the department is missing an opportunity to gain institutional knowledge that would facilitate future decision making. Federal internal control standards highlight the importance of documenting significant events in a timely manner.
Specifically, these standards state that agencies should identify, record, and distribute pertinent information to the right people in sufficient detail, in the right form, and at the appropriate time to enable them to carry out their duties and responsibilities, and should ensure that communications are relevant, reliable, and timely. Institutionalizing identified best practices in writing may assist in applying them consistently and in fully engaging functional communities outside of the financial management community in audit readiness efforts, as necessary. Background: In its report, the panel recognized that the majority of transactions recorded in accounting systems are initiated by military commands, installations, and bases. Based on this, the panel stated that for DOD to achieve its FIAR objectives, internal controls over, and the accounting of, these transactions must be improved at these locations. Based on testimony from the Commander of the Naval Air Systems Command, the panel stated that the lessons learned at Naval Air Systems Command should be shared with other military commands and vice versa. DOD Actions: In its May 2015 FIAR Plan Status Report, DOD officials stated that the military departments and defense agencies regularly share lessons learned within their organizations in various forums, such as newsletters, quarterly reviews, and stakeholder meetings. The FIAR Subcommittee, FIAR Committee, and FIAR Governance Board meetings are forums that are regularly used to share information external to an individual DOD component. GAO Status: We believe DOD's actions have met the requirements of the recommendation. Background: The panel recognized that because DFAS's activities are integral to the financial activities reported in the DOD components' financial statements, weaknesses in internal control at DFAS must be addressed for DOD to achieve auditability. In its report, the panel referred to the FIAR Guidance, which states that service providers, such as DFAS, that work with user entities are responsible for audit readiness efforts surrounding service provider systems and data, processes and controls, and supporting documentation that have a direct effect on user entities' auditability. Therefore, according to the panel's report, it is critical that these organizations provide documentation demonstrating that controls are properly designed and operating effectively and that transactions are properly posted to the accounting records. The panel stated that DFAS should undergo an audit of its major processes that materially affect its users. In addition, timelines for establishing effective controls should be reported in future FIAR Plan Status Reports for all major processes. DOD Actions: DOD included service providers' FIAR status and milestones in its FIAR Plan Status Reports beginning with the May 2012 report. In the May 2015 FIAR Plan Status Report, DOD officials stated that in addition to including service provider milestones in DOD reports, the Office of the Under Secretary of Defense (Comptroller) and DOD's components monitor service providers' milestones, progress, and challenges during service provider working group meetings, as well as during other FIAR oversight meetings, such as FIAR Committee meetings. DOD also stated, in the May 2015 FIAR Plan Status Report, that integrating the audit readiness activities of the service providers with those of their customers is complex and is one of the challenges the department faces in achieving audit readiness.
One reason for this complexity is that the components rarely control transactions from initiation through reporting on the component financial statements. Moreover, the components do not own and operate all of the information systems used to process their transactions. One example provided by the department relates to processing and recording contract pay, for which the components depend on over a dozen systems that are owned and operated by service providers. GAO Status: We consider this recommendation partially met because DOD needs to take additional actions to show how service provider milestones compare with those of their customers so that the consistency between the two sets of milestones can be assessed. Currently, the FIAR Plan Status Reports show the service providers' status and plans for achieving audit readiness and conducting SSAE No. 16 examinations by assessable unit. Ensuring that the component milestones and service provider milestones are consistent for each process included in the FIAR effort remains challenging. Background: The panel reported that at the time of its review, DOD had not yet performed a complete department-wide systematic competency assessment that included an analysis of the financial management workforce abilities, knowledge bases, and skill sets needed now and in the future. DOD developed and issued its department-wide (enterprise-wide) financial management competencies for its civilian workforce in November 2011, immediately preceding the panel report's issuance. In issuing these competencies, DOD officials stated that they identify the critical knowledge, skills, and abilities that DOD financial managers need to meet (1) the complex 21st century national security mission and (2) the unique requirements of the department, including analysis and audit readiness. DOD Actions: DOD has taken steps toward completing a department-wide systematic competency assessment of its financial management workforce. The department plans to conduct a review, or “refresh,” of its department-wide financial management competencies for its civilian workforce in fiscal year 2016. With regard to identifying the gaps between current requirements and the competencies of the existing workforce, DOD has used its Defense Competency Assessment Tool to assess competency gaps in its civilian financial management workforce for its four civilian mission-critical financial management occupations in 2014 and its nine civilian non-mission-critical financial management occupations in 2015. As stated in the May 2015 FIAR Plan Status Report, DOD plans to research the feasibility of assessing the civilian financial management workforce in other functional areas. The panel recommended that DOD's competency assessments be performed for its federal civilian, military, and contracted personnel performing financial-related functions. With regard to military personnel in financial management occupations, DOD officials stated that the department's research showed that the military departments, through the normal annual military performance review process, have an effective means of assessing the competencies of members of the military workforce in their given functional specialties. Moreover, DOD officials stated that assessing the competency gaps of contracted staff performing financial management-related functions is outside the financial management community's scope of responsibility because required competencies are to be defined in each contract's statement of work.
In addition, the appropriate contracting officer's technical representative has the responsibility to perform due diligence over the contractor's performance. GAO Status: We consider this recommendation partially met because the department has not yet assessed the competencies of all civilian, military, and contracted personnel performing financial management-related functions, as recommended by the panel. Furthermore, the Functional Community Manager for Financial Management concluded that there is no legislative requirement for a competency skills gap assessment for the military financial management workforce. DOD, however, is required by law to submit a biennial strategic workforce management plan to Congress, which includes, among other things, an assessment of the critical skills and competencies that will be needed in the future. According to the law, this plan shall specifically address the shaping and improvement of DOD's financial management workforce, including military and civilian personnel. This includes an assessment of the critical skills and competencies for both the civilian and military financial management workforces. As stated above, DOD's competency assessments have addressed only the civilian workforce in the financial management community. When DOD conducts its review of the department-wide civilian financial management workforce competencies in fiscal year 2016, the department will need to consider projected future requirements (competencies) for its civilian financial management workforce, as the panel recommended. After the department has identified these projected future requirements, DOD will be in a position to identify the gaps between them and the competencies possessed by its existing workforce. Background: In testimony before the panel, DOD stated that the department uses contractors to fill skill sets missing from its existing workforce in certain areas, including audit readiness. Industry officials testified on the importance of hiring certified public accountants (CPA)—either directly or through contracts—with financial audit experience. According to expert testimony, although hiring CPAs is an important aspect of improving the human capital necessary to achieve audit readiness, not all CPAs have the requisite audit readiness expertise. For example, CPAs who specialize in areas such as tax, budgets, or information systems may not have developed the tools necessary to participate productively in improving audit readiness. CPAs who have federal financial statement audit experience are trained to apply the judgment required by generally accepted government auditing standards to determine the relevancy and sufficiency of controls and documentation necessary to successfully prepare DOD for a financial statement audit. DOD Actions: According to DOD officials, DOD uses the expertise of CPAs both as employees and as contractors. The FIAR Directorate, within the Office of the Under Secretary of Defense (Comptroller), maintains a contractor staff of CPAs with financial audit experience. Among other things, these CPAs provide consulting services, including determining how to implement best business practices that are used in the private sector. The military departments have also contracted with IPA firms.
For example, officials stated that the Air Force has contracted with a recognized accounting firm with both auditing and audit readiness experience as its advisory and assistance services contractor. According to Navy officials, the Navy perceives a continued need to leverage private sector expertise in future years to support its FIAR efforts and has developed a flexible acquisition strategy to facilitate this leveraging of expertise. According to Army officials, the Army recognizes the importance of a variety of skills that are critical to accomplishing the changes associated with audit readiness, including individuals with knowledge of the Army, audit and systems experience, and certifications such as CPA, certified information systems auditor, and project management professional. The Army has a mix of these skills on its civilian audit readiness staff and supplements this knowledge base with contractor staff. The service providers are also using the expertise of IPA firms. DFAS has employed multiple IPA firms to conduct its audit readiness efforts, including IPA firms to perform a mock military pay SSAE No. 16 examination as well as its first SSAE No. 16 examination. DLA is leveraging an IPA firm as its audit readiness advisor; the firm supports DLA's audit readiness efforts with CPAs as well as audit and advisory professionals experienced in financial statement and information technology audits. GAO Status: We believe DOD's actions have met the requirements of the recommendation. Background: In its report, the panel recognized the importance of having personnel within DOD's functional communities, other than financial management, with the skills to perform financial management-related tasks. According to the panel, functional communities, such as the logistics and acquisitions communities, generate and maintain financial information critical to reporting the financial results of DOD operations accurately and reliably. For example, logistics personnel are responsible for entering asset information into inventory records, conducting inventories, and performing reconciliations. Acquisition personnel enter obligations for contracts into the accounting system. The panel concluded that the department must ensure that these personnel receive financial management training as part of the department's FIAR efforts. DOD Actions: According to DOD Human Capital and Resource Management officials, FIAR courses are available online for members of all DOD functional communities, including members of the financial management community. Specifically, DOD officials stated that the department has developed over 50 web-based financial management courses, and these courses provide credit applicable to the department's Financial Management Certification Program requirements at the various levels. While one learning platform is restricted to members of the financial management community, the web-based courses on another platform are open to both members of the financial management community and members of other functional communities. Course evaluations are required for each web-based course completed, and these evaluations are reviewed monthly. According to DOD officials, course evaluation averages are analyzed, tracked, and used to evaluate training effectiveness. DOD officials stated that DOD has consistently maintained an average course evaluation score of 4.12 on a 5.0 scale for these courses. GAO Status: We believe DOD's actions have met the requirements of the recommendation.
Background: In its report, the panel stated that implementing effective training programs will be especially important as DOD transitions to increased use of ERP systems. For example, the Army testified that its General Fund Enterprise Business System (GFEBS) requires personnel to obtain proficiencies in skills that are not required in the legacy operating environment and that many of the more than 70,000 eventual users of GFEBS will not reside in the Army's financial management community. The Army added that the majority of users work in the acquisition, logistics, public works, and property management functions. The Air Force testified that its financial managers are learning about new ERP systems, including what these systems are designed to do and how to work within them. The Air Force is experiencing a major cultural change for much of its workforce in moving from primarily a bookkeeping financial management system to ERP systems that can produce auditable financial statements. Officials added that the Air Force is working to get ahead of the ERP deployments and is retooling its workforce. DOD Actions: In the May 2015 FIAR Plan Status Report, DOD stated that training exists for all current ERP systems. Officials added that these training programs are coordinated with each ERP system owner and the component's financial management office. In addition, the military departments require users to complete training prior to obtaining access to the ERP systems. For example, for its Defense Enterprise Accounting and Management System (DEAMS), the Air Force has a library of online training courses that must be taken as part of the process to request access to DEAMS. In addition, the Air Force has included user job aids in DEAMS that provide video demonstrations of common system functions. According to an Air Force official, a lack of well-tailored training was identified as an underlying cause of earlier issues that occurred during the DEAMS implementation process because users performing everyday business tasks with DEAMS did not have a clear understanding of how to use the ERP system to perform these tasks. Army officials stated that while GFEBS was being deployed, its GFEBS training team provided on-site and classroom training for end users and that, over the years, improvements were made to the tools provided and the courses. For example, job aids and scenario-specific training were improved based on lessons learned and feedback from end users and the help desk. For the Navy ERP, a Navy official stated that users must complete web-based training or instructor-led training prior to obtaining access to perform financial roles. The Navy, according to this official, has web-based training and detailed knowledge-sharing content available as part of the overall Navy ERP program, but this content is not uploaded to the Navy ERP application itself. The military departments measure the effectiveness of this training in several ways, including end-of-course surveys and analysis of help desk tickets, to identify any gaps in training that need to be addressed. For example, for GFEBS, the Army Financial Management School requires end users to complete end-of-course evaluations. Air Force officials provided an example in which lessons learned and feedback from users resulted in changes to one of its DEAMS courses.
During the June 2014 deployment training cycle, end users noted that the DEAMS program needed to adjust its project billing user reimbursement course to more closely align it with the typical scenarios and data combinations experienced at most bases. According to Air Force officials, the DEAMS training team therefore coordinated with subject matter experts to reevaluate, adjust, and update the project billing user reimbursement course material to better support those end users' needs. According to a Navy official, Navy ERP training uses several sources of information to continually improve the effectiveness of the training material, including lessons learned from student course evaluations, feedback from the end user community on topics where additional training may be needed, and analysis of help desk tickets for indications of training gaps. GAO Status: We believe DOD's actions have met the requirements of the recommendation. Background: DOD testified, before the panel, that it would like to implement a pilot program similar to the Information Technology Exchange Program. According to DOD officials, the National Defense Authorization Act (NDAA) for Fiscal Year 2010 authorized a pilot program for the temporary exchange of information technology personnel between DOD and the private sector. DOD officials asserted that a similar exchange program involving the FIAR Directorate would benefit the department's FIAR Plan through the sharing of best practices, partnering to address common challenges, and enhancing competencies. In its report, the panel stated that it supports improving workforce competencies and therefore welcomes the sharing of greater detail on the proposed program with the committee. DOD Actions: On April 1, 2014, DOD submitted a proposal for a pilot Financial Management Exchange Program between DOD and the private sector to both HASC and the Senate Armed Services Committee for their consideration. The pilot, which was proposed for inclusion in, but not enacted as part of, the Carl Levin and Howard P. “Buck” McKeon NDAA for Fiscal Year 2015, was modeled on section 1110 of the NDAA for Fiscal Year 2010, which authorized a pilot program for the temporary exchange of personnel working in information technology. According to the proposal, a DOD employee would be eligible for the exchange program only if the employee (1) works in financial management, (2) is considered to be an exceptional employee, and (3) is compensated at least at the General Schedule 11 grade level (or equivalent). According to DOD officials, Senate Armed Services Committee members expressed their support for the program during an April 2014 DOD briefing on this topic. However, in the May 2015 FIAR Plan Status Report, DOD officials stated that the department had not yet received any formal comments from either HASC or the Senate Armed Services Committee on its proposal. The legislative proposal was included in the Senate version of the NDAA for Fiscal Year 2016. However, DOD officials said they were told in July 2015 that HASC would not be proceeding with this proposal. GAO Status: We believe DOD's actions have met the requirements of the recommendation. Background: In its report, the panel stated that ERP implementation is instrumental to resolving DOD's financial management weaknesses and achieving audit readiness and included a table that showed, by military department, the ERP systems that are critical to Wave 2 and Wave 3.
The panel noted that although information was provided for select ERP systems in the May 2011 FIAR Plan Status Report, full deployment dates were not included for certain ERP systems. For example, the panel noted that the Air Force did not provide a full deployment date for the Expeditionary Combat Support System (ECSS), which was needed for its SBR and mission-critical assets existence and completeness audit. DOD Actions: According to the May 2015 FIAR Plan Status Report, DOD officials agreed with the panel that the FIAR Plan Status Reports should include more detail regarding the ERP programs to allow better evaluation of progress toward auditability, support timely implementation of corrective measures, and increase confidence in the management of the department's investments in ERP systems. Consequently, DOD included separate sections with information on ERP systems and audit readiness in each FIAR Plan Status Report from November 2012 through November 2014. In the May 2015 FIAR Plan Status Report, the information on ERP systems and audit readiness was included in a section on the information technology systems critical to audit readiness. Additional information that has been provided in the FIAR Plan Status Reports includes (1) an overview of the ERP systems, (2) program cost, (3) impact on legacy systems, (4) information technology controls, (5) implementation milestones and audit readiness information, (6) financial reporting impact, and (7) status of financial reporting objectives by assessable unit. While the department has considered including additional risk management and remediation action information in the FIAR Plan Status Reports, DOD officials have found that the most effective reporting of the specific risks and potential consequences of failure to meet ERP functionality requirements is achieved through acquisition governance and oversight reporting. With regard to acquisition governance, the department is managing its business systems, including ERP systems, as portfolios of investments. The goal is to aggregate data from authoritative data sources and tools to track and manage the overall performance of systems portfolios, including ERP systems. For effective control, planning, mitigation, and remediation, the department manages risk as part of acquisition oversight for each of the ERP systems. GAO Status: We consider this recommendation partially met because DOD still needs to include in the FIAR Plan Status Reports the risks and potential consequences of failing to satisfy outstanding ERP functionality requirements or incurring future milestone delays, along with related mitigation measures. Moreover, the department still needs to provide information on actual schedule slippages, cost increases, or both. While the existence of these risks and the resulting effects on audit readiness may be known to DOD management, external readers of the FIAR Plan Status Reports, including those with oversight responsibility, do not have an accurate picture of how much DOD's financial auditability efforts rely on ERP systems. Background: The panel stated that it was concerned about reported ERP schedule delays and cost overruns because the ERP systems are critical to (1) resolving DOD's financial management weaknesses and (2) achieving audit readiness.
DOD Actions: According to the military departments, their ERP program offices are integrating FIAR requirements and corresponding milestones into the ERP schedules through their normal process of requirements identification and technical solution development. According to the Director, Business Integration, Office of the Deputy Chief Financial Officer, the FIAR requirements are incorporated into the master set of functional needs, business operations, and technical requirements and appropriately integrated into the master program schedules in order to meet the overall program scope and function. Moreover, ERP program managers are accountable to their respective departments, services, and agency organizations, and as such, their performance is evaluated through the plans and performance objectives established within those operations and business functions. In its May 2015 FIAR Plan Status Report, DOD officials stated that each ERP system program office is responsible for including all requirements, including FIAR requirements, in the program schedules for its ERP system. According to the report, the military departments have self-reported that they have included FIAR milestones and requirements in their schedules for those ERP systems still in the acquisition process, such as the Defense Agencies Initiative and the Air Force's DEAMS. ERP system programs that are in the development phases have been given the FIAR requirements to include in their program schedules. In the May 2015 FIAR Plan Status Report, DOD officials added that the Office of the Under Secretary of Defense (Comptroller) has developed a methodology to include audit readiness in the Investment Decision Memorandum and the Acquisition Decision Memorandum. As stated in the report, during the Investment Decision Memorandum process, and for all systems that affect financial reporting, the Office of the Deputy Chief Financial Officer will provide input on each investment decision approval. Approval of each investment decision will depend on the DOD component's demonstration that audit readiness and related compliance considerations have been included in the work products for each ERP system. In addition, according to the report, Acquisition Decision Memorandums represent important checkpoints in the life cycles of DOD systems and are critical to ensuring that the expected outcomes are realized. Further, according to the report, for those systems that affect financial reporting, the Office of the Deputy Chief Financial Officer provides input for each acquisition decision approval at each business capability life cycle milestone. GAO Status: DOD officials have stated that the department has taken actions to integrate FIAR milestones into ERP program schedules and hold program managers accountable throughout the life cycles of the ERP systems. However, we consider this recommendation partially met because, for ERP programs that are still in the development phases, each program office must still incorporate the provided FIAR requirements into its program schedule, as recommended by the panel. Moreover, the ERP program managers still need to be continually evaluated on their ability to maintain FIAR milestones as well as program acquisition-related milestones, as recommended by the panel. These two actions will need to continue until DOD has completed its FIAR Plan audit readiness activities.
Background: In its report, the panel expressed its concern about reported ERP schedule delays and cost overruns and questioned whether ERP schedule and cost estimates were reliable. The panel's statements were partly based on our October 2010 report in which we stated that the department had identified nine ERP systems under development as critical to transforming the department's business operations and addressing some of its long-standing weaknesses. The panel stated that our analysis of the schedules and cost estimates for four ERP programs—DEAMS, ECSS, GFEBS, and the Army's Global Combat Support System (GCSS-Army)—found that none of the programs were fully following best practices for developing reliable schedules and cost estimates. More specifically, none of the programs had developed a fully integrated master schedule that reflected all activities, including both government and contractor activities. In addition, none of the programs had established a valid critical path or conducted a schedule risk analysis. DOD Actions: In the May 2015 FIAR Plan Status Report, DOD officials stated that the department agrees that better methods are needed for estimating ERP implementation costs and scheduling. However, DOD officials added that the department's experience with these programs over the past 10 years, along with industry best practices, has helped shape the strategies that are now being used in the management and oversight of ERP implementations, including the following, among others:

Increasing discipline in requirements management.

Reengineering business processes before focusing on solutions.

Reducing customizations to commercial software.

Sustaining leadership involvement throughout the life cycle.

Emphasizing organizational change management to ensure that end users understand the impact to their jobs.

Using end-to-end processes to better guide and constrain ERP development and interoperability.

Measuring business performance consistently to assess ERP impacts.

Incorporating portfolio management methods to make the right investment decisions.

GAO Status: We consider this recommendation partially met because, while DOD has taken actions to implement best practices in developing reliable schedule and cost estimates, issues remain and additional actions are needed to fully address the recommendation. In February and September 2014, we reported that DOD had not fully implemented best practices in its schedule and cost estimates for DEAMS and GCSS-Army, respectively. We found that the schedule for DEAMS did not meet best practices, although the cost estimate did. We reported that the issues associated with the schedule could negatively affect the cost estimate. For example, if there are schedule slippages, the costs for the program could be greater than currently estimated. DOD officials concurred with our recommendation that DOD consider and make any necessary adjustments to the DEAMS cost estimate after addressing our prior recommendation to adopt scheduling best practices. In our review of the schedule and cost estimates for GCSS-Army, we reported that while the Army had made some improvements to the schedule and cost estimates that supported the full deployment decision, the Army did not fully meet best practices in developing cost and schedule estimates for GCSS-Army, and we recommended corrective actions with which DOD concurred.
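The critical path concept that these scheduling best practices call for can be illustrated with a short sketch. The following Python example, using invented tasks and durations rather than any actual ERP program schedule, computes the longest-duration path through a small task network; in a real integrated master schedule this calculation would span all government and contractor activities.

```python
from functools import lru_cache

# Toy task network: task -> (duration in days, list of predecessor tasks).
tasks = {
    "design interfaces":    (30, []),
    "convert legacy data":  (45, ["design interfaces"]),
    "configure ERP":        (60, ["design interfaces"]),
    "integration test":     (25, ["convert legacy data", "configure ERP"]),
    "audit readiness test": (20, ["integration test"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish = own duration plus the latest predecessor finish."""
    duration, predecessors = tasks[task]
    return duration + max((earliest_finish(p) for p in predecessors), default=0)

# The critical path ends at the task with the latest finish; walk it backward
# by always following the predecessor that finishes last.
current = max(tasks, key=earliest_finish)
path = [current]
while tasks[path[-1]][1]:
    path.append(max(tasks[path[-1]][1], key=earliest_finish))
path.reverse()

print(" -> ".join(path), f"({earliest_finish(path[-1])} days)")
```

Any slip in a task on this path delays the entire schedule, which is why a schedule without a valid critical path cannot support a credible schedule risk analysis or cost estimate.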
Background: The panel made this recommendation based on testimony that issues surrounding changes to systems requirements are the cause of delays in ERP implementation. The panel noted in its report that the requirements process tends to be underinclusive or overinclusive. Including too many requirements can make ERP systems more complicated than needed; too few requirements may result in ERP systems that do not provide the needed functionality. DOD Actions: In the May 2015 FIAR Plan Status Report, DOD officials stated that the department agrees that ERP requirements must be managed throughout ERP development, both within the program and through involved oversight. DOD officials added that each ERP program and the related “customer” DOD component has experienced project scope creep and user-specific requirements that have driven cost and schedule challenges. According to DOD officials, the lesson learned from these experiences has been to strengthen management discipline through change control boards and engaged, knowledgeable senior-leader steering groups, and each military department has established such boards and groups to control requirements for ERP systems. In addition, the Milestone Decision Authority (MDA) monitors the programs at a macro level for cost, schedule, and performance and takes appropriate actions to address risks. The MDA can establish specific criteria in an Acquisition Decision Memorandum that an ERP program manager must meet before a program is authorized to proceed to (1) the next phase of development, (2) limited fielding, or (3) full deployment. For example, the MDA signed two separate Acquisition Decision Memorandums on April 18, 2014, that (1) identified decision criteria that the Defense Agencies Initiative ERP must meet before moving forward with a limited deployment and (2) gave specific exit criteria for the Integrated Personnel and Pay System-Army Increment I program to meet before the Army can fully deploy it to all locations. GAO Status: This recommendation is applicable to DOD's ongoing and future ERP program efforts, and we consider it partially met because DOD still needs to evaluate changes to ERP requirements as those systems are developed, implemented, and utilized, as the panel recommended. Background: As with recommendation 4.4, the panel was concerned that the requirements process was either underinclusive or overinclusive. For example, the Army's Chief Management Officer (CMO) stated that underinclusiveness contributed to delays in implementing GFEBS. Conversely, DOD's DCMO stated that there is a tendency to overrequire, supported by an institutional mindset that there is only one opportunity to establish requirements for an ERP system. DOD Actions: According to its May 2015 FIAR Plan Status Report, DOD has evaluated and modified its requirements processes for defense business systems. DOD's Business Capability Lifecycle, which requires disciplined delivery of capabilities to end users in 18 months, operates within DOD's governance framework comprising the Investment Review Boards and the Defense Business Systems Management Committee, which in turn advise the MDA for the ERP programs.
Working through the MDA for major automated information systems, the Deputy Chief Financial Officer can help ensure that requirements are being met. DOD officials added that the Office of the DCMO and the military department CMOs will continue to assess current practices for governing requirements and implement needed changes. GAO Status: We consider this recommendation partially met because, in accordance with the panel recommendation, DOD needs to continue to assess the requirements decision-making processes at every level of authority for its ongoing and future ERP implementation efforts. Background: The panel was concerned that ERP systems may not provide the capabilities needed to achieve FIAR objectives. For example, we testified before the panel that some ERP systems do not function as intended. According to the panel's report, if the ERP systems do not provide the intended capabilities, DOD components must continue to rely on legacy systems and manual processes. Therefore, DOD's goals of modernizing and streamlining its business processes and strengthening its financial management capabilities—leading to financial statement auditability—could be jeopardized. DOD Actions: According to its May 2015 FIAR Plan Status Report, DOD agrees with the panel that effective information technology acquisition requires thorough risk management, including the identification, analysis, and mitigation of risks. DOD officials stated in the report that the department's FIAR methodology identifies ERP systems and associated feeder systems that relate to achieving FIAR goals. Each military department has established a risk management approach in which major risks are tracked and mitigation plans are developed to identify measures for resolving weaknesses associated with the development, implementation, and use of its ERP systems that could affect the achievement of FIAR goals. Each military department reports on its respective ERP system programs, including the weaknesses it is responsible for resolving, at the regularly scheduled FIAR review sessions. Officials from the military departments described their efforts to mitigate risks associated with ERP implementations. For example, Air Force officials stated that the risk management plan for DEAMS describes a comprehensive risk management process. According to Air Force officials, specific risks associated with FIAR compliance have been identified, are currently being assessed, and will be reported and tracked through the DEAMS risk management program. According to Army officials, each Army ERP program has developed its own risk management procedure, based on existing Army regulations. In addition, each Army ERP program has developed documentation on applicable FIAR-related processes, risks, and controls to mitigate risks. The Army's ERP program offices provide periodic acquisition and program reviews at the MDA level. These reviews include programmatic and audit-related risks associated with an ERP program as well as the risk mitigation steps taken within the program. Army officials stated that additional requirements and actions are identified to mitigate a risk if it is outside of the program's direct control.
Navy officials stated that the Navy's Enterprise Business Solutions program (formerly the Navy ERP Program Office) has a risk management program consisting of two components: (1) the Risk Committee, which allows any individual to bring a risk to program leadership's attention, and (2) the Risk Board, which is the program's senior leadership review and mitigation planning forum for identified risks. GAO Status: DOD has developed risk mitigation plans for its ERP systems. However, we consider this recommendation partially met because DOD and its components need to continually monitor ERP and FIAR efforts to identify actual and potential weaknesses or deficiencies associated with developing, implementing, and using ERP systems that could affect the achievement of FIAR goals. Moreover, DOD components need to identify implementation steps and assign responsibilities for the performance of these steps to resolve the potential weaknesses or deficiencies, as the panel recommended. Further, time frames need to be established for taking these steps. The identification of any alternative arrangements needed outside of the ERP environment to meet FIAR objectives is critical to ensuring that a DOD component will be able to make these arrangements and not hinder DOD's audit readiness goals. Background: In its report, the panel stated that conversion of data from the legacy systems to the new ERP systems is a difficult and challenging effort. The panel noted that each military department had taken its own approach to data conversion and expressed its concern that poor execution of data conversion efforts could cause delays in implementing ERP systems. DOD Actions: In its May 2015 FIAR Plan Status Report, DOD stated that the military departments have learned from past experience. For example, the Air Force, after its initial data conversion for DEAMS at Scott Air Force Base, decided not to convert legacy data into DEAMS, but instead to use a dual-processing approach. Under this approach, newly initiated transactions are entered into DEAMS, but transactions already initiated in legacy systems continue to be processed in the legacy system until contract closeout. The Director of Business Integration, Office of the Under Secretary of Defense (Comptroller), stated that since 2012, when the panel issued its report, data conversions from legacy systems to ERP systems have become part of standard protocols performed between an individual ERP program office and the deploying DOD component or organization. Data conversions to ERP systems focus on open transactions only, for the purpose of providing matching transactions when the related disbursements and outlays occur. According to this official, since 2012, the data conversions for the Defense Agencies Initiative ERP and Navy ERP, for example, have gone smoothly. With regard to the panel's recommendation that DOD assess the merits of designating a senior official with responsibility for coordinating and overseeing data conversion, a DOD official stated that while the recommendation may have made sense in 2012, when the panel's report was issued, it is no longer relevant given the quality of data conversions over the last 3 years and the standard protocols used for data conversions for each individual ERP deployment. GAO Status: DOD has evaluated lessons learned from previous data conversion efforts and considered these lessons in its current data conversion efforts.
However, we consider this recommendation partially met because the department will need to periodically update its data conversion plans. According to the panel, these updates will need to include assessments of (1) the progress made in converting data into the ERP environment, (2) whether that progress supports the satisfaction of existing requirements, and (3) whether additional data conversion requirements would facilitate achieving FIAR objectives. Background: In its report, the panel stated that the DOD Deputy IG testified that the numerous interfaces between ERP systems and legacy systems may be overwhelming and may not be adequately defined. As stated in the panel's report, the number of interfaces is driven by the number of legacy systems. The panel was concerned that problems associated with these interfaces could be compromising functionality. According to the panel report, DOD should make every effort to reduce reliance on those legacy system activities that can be effectively and efficiently conducted by ERP systems. The panel also stated that DOD should complete and validate its business process reengineering analysis to ensure that those business processes supported by the ERP systems will be as streamlined and efficient as practicable and that the need to tailor ERP systems to meet unique requirements or to incorporate unique interfaces has been eliminated or reduced to the extent practicable. DOD Actions: According to DOD's May 2015 FIAR Plan Status Report, the department is increasingly approaching investment decisions with a portfolio view to reduce or eliminate unique requirements and interfaces. As DOD also stated in that report, DOD has begun to implement process improvements across all systems by implementing key strategic initiatives, including its use of the Global Exchange to increase the interoperability and exchange of standardized data between systems. In its report, DOD also stated that there is a strategy to reduce the number of existing legacy systems over the next several years, which will lessen the need for a large number of interfaces. The military department CMOs and the DOD DCMO (for the other defense organizations) examine and validate the need for unique requirements and interfaces as they develop their respective organizational execution plans in preparation for review by the Defense Business Council. GAO Status: We consider this recommendation partially met because DOD, through its business process reengineering, needs to ensure that unique requirements and interfaces are reduced to the extent practicable. As the military departments and components proceed with FIAR activities related to ERP systems, the organizational execution plan reviews and other actions to implement these key strategic initiatives will be critical in identifying the causes of interface issues, determining how many and which interfaces can be reduced, and identifying the improvements that can be made to support more effective interfaces. Furthermore, in implementing this recommendation, the department needs to ensure that its components are reengineering current business practices rather than customizing commercial ERP systems. For example, the DOD IG has reported that DOD has not reengineered its business processes to the extent necessary; instead, it has often customized commercial ERP systems to accommodate existing business processes. This customization creates a need for system interfaces and weakens controls built into an ERP.
The ERP systems were designed to replace numerous subsidiary systems, reduce the number of interfaces, standardize data, eliminate redundant data entry, and provide an environment for end-to-end business processes, while being a foundation for sustainable audit readiness. However, the DOD IG stated that the numerous interfaces between the ERP systems and existing systems may be overwhelming and inadequately defined. Each interface presents a risk that a system might not function as designed, and each prevents the linking of all transactions in an end-to-end process. Background: In its report, the panel stated that the FIAR Guidance calls for the DOD components to test information systems controls for key systems and processes. Because most financial information is maintained in computer systems, the controls over how those systems operate are integral to the reliability of financial data. For example, the panel noted that if auditors are able to rely on information system controls, the extent of substantive testing can be significantly reduced. According to the panel, DOD should continue to subject its systems, both legacy systems and ERP systems, to information systems controls testing, but it must also ensure that a priority is placed on this testing and that sufficient numbers of appropriately skilled personnel exist within the test and evaluation community. In addition, when implementing ERP systems, DOD should ensure that the systems satisfy the computer control objectives established in GAO's Federal Information System Controls Audit Manual. DOD Actions: According to the May 2015 FIAR Plan Status Report, officials from the Office of the Director, Operational Test and Evaluation (OT&E), do not perform testing for all systems, but provide guidance to assist organizations in performing testing. Officials from OT&E stated that assessing information system control testing needs is difficult because of the different interpretations of information system controls. As a result of these differing interpretations, gaps exist in the types of testing that are actually accomplished. Nevertheless, OT&E officials stated that they are not specifically concerned with testing individual information systems controls, but rather that (1) a typical user can perform operational tasks with the production-representative system in an operationally realistic computing environment and (2) the system has the appropriate computer network defense capabilities. According to OT&E officials, DOD's components implementing ERP systems, including the military departments, are involved in the information systems control testing of their ERP systems by developing the test and evaluation master plans. According to these officials, these plans, provided by ERP program managers and approved by OT&E officials, specify appropriate testing for ERP systems. For example, Navy officials said that the Navy will coordinate with OT&E and others to ensure that the appropriate workforce and skill sets have been identified for evaluation and testing. OT&E officials stated that they anticipate that the department's increasing need to improve the cybersecurity of its ERP systems and other programs and networks will require increased test resources, especially for cyber ranges. According to these officials, three groups of organizations play a critical role in the information systems control testing of ERP systems:
1. User/requirements communities. It is critical for the user/requirements communities to identify the specific ERP information systems control requirements, beginning with the request for proposal.
2. Operational test agencies. These agencies are responsible for executing operational tests for each ERP.
3. ERP program managers. The program managers develop the test and evaluation master plans for approval by the Director, Operational Test and Evaluation.
GAO Status: We consider this recommendation partially met because, as the panel recommended, OT&E officials, in consultation with the DCMO and the components, will need to continue assessing their role in evaluating information system controls for all ERP systems being deployed by the department and examining the skill sets necessary to accomplish such testing, in order to determine whether additional training is required within the developmental and operational test communities. In addition, DOD has not yet ensured the testing of general and application controls for its ERP systems that are critical to DOD's financial audit readiness efforts. As DOD stated in its May 2015 FIAR Plan Status Report, the department must evaluate and remediate controls for hundreds of information technology systems that materially affect the financial statements to achieve and sustain an audit-ready systems environment. However, additional actions are needed because DOD's approach for testing ERP systems may not yield the benefits of testing envisioned by the panel: DOD's testing focuses more on the operational capability of the systems and their security from attack than on the specific application controls of the ERP systems. For example, the panel noted that if auditors are able to rely on information system controls, the extent of substantive testing can be significantly reduced. Under DOD's broad approach, specific application controls for ERP systems that affect the information under audit may not be tested. In addition to the contact named above, Michael S. LaForge (Assistant Director), Sandra S. Silzer (Auditor-in-Charge), and Laura S. Pacheco made key contributions to this report. Also contributing to this report were Doreen S. Eng, Francine M. DelVecchio, Maxine L. Hattery, and Jared D. Minsk.
A congressional panel examined the capacity of DOD's financial management system for providing timely, reliable, and useful information for decision making and reporting. The panel, in its January 2012 report, included 29 recommendations addressed to DOD in four areas: (1) FIAR strategy and methodology, (2) challenges to achieving financial management reform and auditability, (3) financial management workforce, and (4) enterprise resource planning systems implementation. GAO was asked to review the status of DOD's actions to implement these recommendations. This report examines the extent to which the recommendations have been implemented. GAO reviewed pertinent legislation, including the National Defense Authorization Acts for Fiscal Years 2010 through 2015, as well as the department's FIAR Guidance and FIAR Plan Status Reports. GAO analyzed relevant information and interviewed officials from the Office of the Secretary of Defense, the military departments, and two service providers. Using the three status categories developed for GAO's high-risk work—met, partially met, and not met—GAO determined the extent to which DOD implemented the panel's recommendations. The Department of Defense (DOD) has made progress toward implementing each of the 29 recommendations made by the House Armed Services Committee Panel on Defense Financial Management and Auditability Reform (the panel). GAO determined that DOD's actions met 6 of the panel's recommendations and partially met the other 23. In its May 2015 Financial Improvement and Audit Readiness (FIAR) Plan Status Report, DOD reported that 9 recommendations were met and 20 were partially met. The 3 recommendations for which GAO disagreed with DOD's reported status of met related to (1) attestations on audit readiness in each of the FIAR Plan Status Reports; (2) inclusion of FIAR-related goals in Senior Executive Service performance plans, and rewarding and evaluating performance over time based on those goals; and (3) the review of audit readiness assertions by component senior executive committees. For example, while each FIAR Plan Status Report is coordinated among FIAR Governance Board (Board) members, including the Comptroller/Chief Financial Officer, Board members do not explicitly attest in these reports to whether DOD is on track to achieve audit readiness in 2017, as called for by the panel's recommendation, and not all Board members provide signed statements about component audit readiness in the reports. GAO and DOD agree that the remaining 20 recommendations were partially met and that continued actions are needed; however, GAO found that additional actions are needed to address some of these recommendations. These 20 partially met recommendations cover such diverse topics as a strategy for the consolidation of component financial information, valuation of historical asset costs, and assessing the competencies of the civilian financial management workforce. For example, DOD has made progress in assessing the competencies of the civilian workforce in its financial management community, but has not yet assessed the competencies of all civilian, military, and contracted personnel performing financial-related functions, as recommended by the panel. Other recommendations are related to the implementation of enterprise resource planning systems—automated systems that perform a variety of business-related financial management tasks.
DOD officials have stated that these systems are critical to DOD's ability to achieve audit readiness, but none of these recommendations has been fully met. The panel's report and its recommendations touch on some of the most critical challenges DOD faces in achieving lasting financial management improvements and financial statement audit readiness. However, it is important to note that implementation of the panel's recommendations may not include all of the actions needed for DOD to achieve auditable financial statements. As auditors perform examinations and audits, they may identify deficiencies that were not previously known and therefore were not addressed by the panel's recommendations. DOD is monitoring its progress in implementing the FIAR Plan against interim milestones included in its April 2015 FIAR Guidance. However, as the audit readiness date approaches, DOD has emphasized asserting audit readiness by set dates over ensuring that processes, systems, and controls are effective, reliable, and sustainable. While time frames are important for measuring progress, DOD should not lose sight of the ultimate goal of implementing lasting financial management reform, among other things, to ensure that it can routinely generate reliable, auditable financial information. GAO is recommending that DOD reconsider the status of the three panel recommendations that it determined to be met but that GAO determined to be only partially met. DOD concurred with the recommendation and described planned actions to address it.
Foreign nationals who wish to come to the United States on a temporary basis must generally obtain an NIV to be admitted. State manages the visa process, as well as the consular officer corps and its functions, at 219 visa-issuing posts overseas. The process for determining who will be issued or refused a visa contains several steps, including documentation reviews, in-person interviews, collection of biometrics (fingerprints), and cross-referencing an applicant's name against the Consular Lookout and Support System—State's name-check database that posts use to access critical information for visa adjudication. In some cases, a consular officer may determine the need for a Security Advisory Opinion, which is a recommendation from Washington on whether to issue a visa to the applicant. Depending on a post's applicant pool and the number of visa applications that a post receives, each stage of the visa process varies in length. For an overview of the visa process, see figure 1. Congress, State, and DHS have initiated new policies and procedures since the 9/11 terrorist attacks to strengthen the security of the visa process. These changes have added to the complexity of consular workload and have increased the amount of time needed to adjudicate a visa. Such changes include the following:
In fiscal year 2002, State began a 3-year transition to remove visa adjudication functions from consular associates. All NIVs must now be adjudicated by consular officers.
Personal interviews are required by law for most foreign nationals seeking NIVs.
As of October 2004, consular officers are required to scan visa applicants' right and left index fingers through the DHS Automated Biometric Identification System before an applicant can receive a visa. In 2005, the Secretary of Homeland Security announced that the U.S. government had adopted a 10-fingerscan standard for biometric collection of fingerprints. In February 2006, State reported that it would begin pilot testing and procuring 10-print equipment to ensure that all visa-issuing posts have collection capability by the end of fiscal year 2007.
According to State, consular officers face increased requirements to consult with headquarters and other U.S. agencies prior to visa issuance in the form of Security Advisory Opinions.
According to State, as a result of the Patriot Act, consular officers have access to, and are required to consult, far greater amounts of interagency data regarding potential terrorists and individuals who would harm U.S. interests.
A number of potential factors can contribute to delays for visa interview appointments at consular posts. For example, increased consular officer workload at posts, which can be caused by factors such as increased security screening procedures or increased visa demand, can exacerbate delays because each available officer has more work requirements to complete. Other factors, such as staffing gaps and ongoing consular facility limitations, could also affect waits because they may limit the number of applicants that can be seen for an interview in a given day. Following the 9/11 terrorist attacks, applications for visas declined from a high of over 10.4 million in fiscal year 2001 to a low of approximately 7 million in 2003. For fiscal years 2004 through 2006, the number of visa applications increased, according to State's data (see fig. 2). State anticipates that 8.1 million visa applications will be received in fiscal year 2007 and 8.6 million in 2008.
State's visa workload increased by almost 16 percent between 2004 and 2006. In addition, several countries and posts have seen large growth in visa demand, and State has projected these trends to continue well into the future. Following are examples of these trends:
India had an 18 percent increase in visa adjudications between 2002 and 2006.
Posts in China reported that their visa adjudication volume increased between 18 and 21 percent last year alone, and growth is expected to continue.
We have previously reported on visa delays at overseas posts. In particular, we have reported on the following delays in Brazil, China, India, and Mexico:
In March 1998, we reported that the post in Sao Paulo, Brazil, was facing extensive delays due to staffing and facilities constraints.
In February 2004, we reported delays at consular posts in India and China. For example, in September 2003, applicants at one post we visited in China were facing waits of about 5 to 6 weeks. Also, we reported that, in summer 2003, applicants in Chennai, India, faced waits as long as 12 weeks.
In April 2006, we testified that, of nine posts with waits in excess of 90 days in February 2006, six were in Mexico, India, and Brazil.
According to State, wait times for visa interviews have improved at many overseas consular posts in the past year. However, despite recent improvements—such as those at posts in India, Mexico, and Brazil—a number of posts reported long waits at times during the past year. Believing the waits at some posts to be excessive, State announced in February 2007 its goal of providing all applicants an interview within 30 days. We identified a number of shortcomings in the way in which State's visa waits data are developed, which could mask the severity of the delays for visa interviews at some posts and limit the extent to which State can monitor whether the visa wait problem has been addressed. To better understand and manage post workload, State has begun to develop a measure of applicant backlog. In recent months, reported wait times for visa appointments have generally improved. For example, in reviewing visa waits data provided to us by the Bureau of Consular Affairs for the period of September 2006 to February 2007, we found that 53 of State's 219 visa-issuing posts had reported maximum wait times of 30 or more days in at least 1 month—44 fewer posts than had reported this figure when we reviewed the same period during the previous year (see fig. 3). Furthermore, wait times reported by several consular posts have improved during the past year, including for a number of high-volume posts in India, Brazil, and Mexico that had previously reported extensive delays. In April 2007, wait times at all posts in India were under 2 weeks, down from waits that, as recently as August 2006, had in most cases exceeded 140 days at four key posts. For example, Mumbai reported a reduction in wait times from a high of 186 days in September 2006 to 10 days as of April 9, 2007. Reported wait times at some key posts in Mexico also significantly declined, as have wait times for several posts in Brazil in the past year. Furthermore, a number of additional posts with delays experienced large reductions in wait times over a recent 12-month period. Despite recent improvements in wait times at a number of consular posts, at times during the past year, especially during peak processing periods, a number of visa-adjudicating posts have faced challenges in reporting wait times of less than 30 days.
For example, during the typical peak demand season, 29 posts reported maximum monthly waits exceeding 30 days over the entire 6-month period of March through August 2006 (see fig. 4). We observed that long waits had occurred over the summer months in Tegucigalpa, Honduras; San Jose, Costa Rica; and several posts in India. Furthermore, some posts we reviewed developed increased wait times. For example, in Caracas, the reported visa waits significantly increased—from 34 days in February 2006 to 116 days in April 2007. In addition, several other posts have experienced increases in wait times since February 2006, including Sao Paulo, Brazil; Monterrey, Mexico; Tel Aviv, Israel; and Kingston, Jamaica. Moreover, 20 posts reported experiencing maximum monthly wait times in excess of 90 days at least once over the past year. In February 2007, State's Bureau of Consular Affairs distributed guidance setting a global standard that all visa applicants should receive an appointment for a visa interview within 30 days. Previously, State had not set a formal performance standard for visa waits but had set a requirement that posts report their wait times on a weekly basis and make this information publicly available through post Web sites. In setting the 30-day standard for visa waits, officials acknowledged that wait times are not only a measure of customer service but also a tool that helps posts better manage their workload and visa demand. Furthermore, State identified that such a standard allows it to better track post performance, helps with resource allocation, and provides transparency in consular operations. Consular officials explained to us that posts that consistently have wait times for visa interview appointments of 30 days or longer may have a resource or management problem. In setting its 30-day performance benchmark, State also distributed information to posts on how wait times data are to be used by Bureau of Consular Affairs management. For example, State indicated it will review all posts that have reported waits over 20 days to determine if remedial measures are needed. State has provided guidance indicating that posts are required to report wait times on a weekly basis, even if the times have not changed from the previous week. However, we found posts are not reporting waits data consistently, which affects the reliability of State's visa waits figures. In September 2005, our analysis of State's data on reported wait times revealed significant numbers of posts that did not report this information on a weekly basis during the 6-month period we reviewed. In reviewing data over the past year, we again found that a large number of posts were not consistently reporting waits data on a weekly basis, as required by State. For example, post reporting of wait times from January 2006 to February 2007 showed that, while a large number of posts (about 79 percent) had reported waits at least monthly, only 21 posts (about 10 percent) reported waits at least weekly. Inconsistencies among posts in the reporting of visa waits data affect the reliability of visa waits figures and limit State's ability to assess whether the problem has been addressed by posts. However, State does not appear to be enforcing its weekly reporting requirement. State acknowledges that it has had difficulties in getting all 219 consular posts to report these data consistently.
According to cables provided to us by State, posts are directed to provide the "typical" appointment wait time applicable to the majority of applicants applying for a given category of visas on a given day. Several of the posts we visited calculated wait times based on the first appointment available to the next applicant in a given visa category; however, other posts we reviewed calculated waits differently. For example, one post we visited computed wait times by taking the average of several available appointment slots. In addition, several consular officials we spoke with overseas said that they are still unclear on the exact method posts are to use to calculate wait times, and some managers were unsure if they were calculating wait times correctly. Additionally, we observed that some posts artificially limit wait times by tightly controlling the availability of future appointment slots—such as by not making appointments available beyond a certain date, which can make appointment scheduling burdensome for the applicant, who must continually check for new openings. State officials admitted that posts should not be controlling the availability of appointment slots to artificially limit wait times, but, to date, State has not distributed specific guidance to posts on this issue. We determined that State's data are sufficiently reliable for providing a broad indication of which posts have had problems with wait times and of general trends in the number of such posts over the period we reviewed; however, the data were not sufficiently reliable to determine the exact magnitude of the delays because the exact number of posts with a wait of 30 days or more at any given time could not be determined. Until State updates and enforces its collection standards for visa waits data, precise determinations about the extent to which posts face visa delays cannot be made. State officials acknowledge that current wait times data are of limited reliability. State officials have also said that visa waits data were not originally designed for the purpose of performance measurement but to provide applicants with information on interview availability. According to State, a current goal of the Bureau of Consular Affairs is to refine collection standards for wait times information to provide more uniform and transparent information to applicants and management; however, the bureau has not yet done so. State's reported wait time data generally reflect the wait, at a moment in time, for new applicants and do not reflect the actual wait time for an average applicant at a given post. Furthermore, wait times generally do not provide a sense of applicant backlog, which is the number of people who are waiting to be scheduled for an appointment or the number of people who have an appointment but have yet to be seen. To better understand and manage post workload, State officials we spoke with said that they were in the process of developing a measure of applicant backlog. Although State has not yet developed the measure of backlog, officials we spoke with said that they expect to begin testing methods for measuring applicant backlog by the end of 2007.
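The effect of these calculation differences can be made concrete with a small example. The sketch below is our own simplified illustration in Python, not State's system; the appointment dates and applicant counts are hypothetical. It shows how the same appointment calendar yields different reported wait times depending on whether a post reports the first available slot or an average of several open slots, and how a backlog count is a different measure from either figure.

from datetime import date

# Hypothetical open interview slots for one visa category at one post.
today = date(2007, 4, 9)
open_slots = [date(2007, 4, 20), date(2007, 5, 2), date(2007, 5, 30),
              date(2007, 6, 15), date(2007, 7, 1)]

# Method 1: days until the first available appointment (the approach
# several posts we visited used).
first_available_wait = (min(open_slots) - today).days

# Method 2: average wait across several open slots (the approach one
# post used), which yields a much longer figure for the same calendar.
average_wait = sum((slot - today).days for slot in open_slots) / len(open_slots)

print(f"First-available wait: {first_available_wait} days")  # 11 days
print(f"Average-of-slots wait: {average_wait:.0f} days")     # 47 days

# Backlog is a different measure altogether: applicants waiting to be
# scheduled plus applicants scheduled but not yet interviewed.
awaiting_scheduling = 350  # hypothetical count
scheduled_not_seen = 600   # hypothetical count
print(f"Applicant backlog: {awaiting_scheduling + scheduled_not_seen}")

Because both methods are plausible readings of State's guidance, two posts with identical appointment calendars could report wait times that differ by weeks, which is consistent with the inconsistencies we observed.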
State has provided temporary duty staff to assist in adjudicating visas at several locations with long wait times, particularly at posts in India, and recently developed a plan to relocate consular positions to locations where large disparities in staff and visa demand were apparent. In addition, State has continued to upgrade embassies and consulates overseas to aid in processing visa applicants. Furthermore, State has implemented some procedures and policies to maximize efficiency and better manage visa workload. However, despite the measures State has taken to address staffing, facilities, and other constraints at some posts, State's current efforts are generally temporary, nonsustainable, and insufficient to meet the expected increases in demand at some posts. State has recently taken action at several posts to address current staffing gaps to minimize the impact on visa wait times. State has deployed temporary duty staff from other consular posts and from headquarters to help process and adjudicate visa applicants. For example, State deployed 166 officials to staff consular sections in fiscal year 2006 and through April of fiscal year 2007. In addition, at the order of the Ambassador to India, beginning in 2006, posts in India utilized consular-commissioned officials from other offices in the embassy and consulates to assist the consular section in handling its workload, including fingerprinting applicants and interviewing some applicants, which helped reduce the wait times at posts. According to consular officials, the additional assistance in India was necessary because posts there did not have enough permanent consular staff to handle the demand and reduce wait times. In addition, in February 2007, State completed a review of consular officer positions that examined the disparity between visa workload and the number of consular officers at posts. As a result of this study, State will transfer consular positions from certain posts that are capable of handling the workload without reporting long visa waits to posts where staffing has not been adequate to handle the visa workload. The majority of the positions are being transferred from posts in the European and Eurasian Affairs Bureau to posts in the Western Hemisphere, East Asia and Pacific, and South and Central Asian bureaus. Of these transferred and newly created consular officer positions, the majority will be located in Brazil, China, India, and Mexico—posts with a history of long wait times and high demand for visas. State acknowledges that the repositioning of consular staff, while necessary, may not adequately address the increasing demand for visas worldwide. Despite the measures State has taken to address the staffing issues at some posts, State's current consular staffing efforts are generally temporary, nonsustainable, and insufficient to meet the expected increases in demand at some posts. First, under federal regulations, when-actually-employed staff may work only 1,040 hours per year. Second, posts are typically required to cover the housing costs of assigned temporary staff, which is not always feasible if posts are facing budget constraints. Third, embassy or consulate officials who were temporarily assigned to support consular operations indicated that their new duties negatively affected their ability to perform their regular assignments, as they were spending time performing consular duties instead of their typical functions at post.
Fourth, although temporary staff have helped to improve wait times at select posts, current efforts—and some recent temporary assignments, such as over the past 7 months in India—have been undertaken during a period of lower applicant volume. It is unknown whether State will be able to maintain the improved wait times during the summer of 2007, as the period between May and August is typically when posts have the largest influx of visa applicants and, in turn, longer waits. For example, one post in India recently reported that wait times now exceed 30 days. Moreover, the temporary staff assisting with visa adjudications during our visit to posts in India were expected to leave by the end of May 2007. According to State's Deputy Assistant Secretary for Visa Services, surges in temporary duty staff, such as the ones State employed for India, can be useful in tackling short-term situations but are not a viable long-term solution in places with high visa demand. Furthermore, consular staffing gaps are a long-standing problem for State and have been caused by such factors as State's annual staffing process, low hiring levels for entry-level junior officer positions, and insufficient numbers of midlevel consular officers. We have previously reported that factors such as staffing shortages have contributed to long wait times for visas at some posts. A number of State's visa-adjudicating posts reported shortages in consular staff for 2006, and we observed gaps that contributed to visa wait times at several posts overseas. Furthermore, we reviewed reports for 32 select consular posts abroad to assess visa workload, consular staffing, and facilities, as well as other issues affecting visa wait times. We found that, of the 32 posts, 19 (or about 60 percent) indicated the need for additional consular staff to address increasing workload. State has improved a number of consular sections at embassies and consulates worldwide. According to the Bureau of Overseas Buildings Operations, since September 2001, State has upgraded almost 100 embassies and consulates, improving the consular section facilities at a number of these locations. For example, between fiscal years 2003 and 2005, State obligated $26.9 million to fund consular workspace improvement projects at 101 posts. Although these improvement projects have been completed, according to the Bureau of Consular Affairs, most were designed as temporary solutions that may require additional construction in the future. Moreover, although some consular improvement projects were recently completed or were under way when we visited Mumbai and Chennai, India, these posts did not have adequate office, waiting room, security screening, or window space to accommodate the volume of visa applicants. State's construction project in Chennai to add windows and additional processing areas was expected to be completed by May 2007, and State has begun construction on a new consulate in Mumbai that will be completed in 2008 and will add more space for additional consular staff and 26 more windows for interviewing. In addition, State is planning new consulate and embassy construction projects for New Delhi and Hyderabad, India, as well as at a number of other posts. We also found that a number of posts we reviewed currently face facility constraints, which limit the number of visa interviews that can take place in a given day and, in some cases, prevent posts from keeping pace with the current or expected future demand for visas.
For example, 21 of 32 posts reported, in their consular packages, that limitations to their facilities affected their ability to increase the number of applicants they could interview, which can contribute to longer wait times. Although State has taken steps to improve consular facilities and has plans to rebuild a number of posts, it is unclear whether the facilities will be adequate to handle the future demand. Two posts that we reviewed are already predicting that future increased demand will outstrip visa processing capacities given existing facilities constraints. For example, in Seoul, South Korea, post officials report that, despite recent improvements to the facility, the post will soon have no additional space to accommodate future applicant growth. Moreover, there is no current viable option to build a new facility due to continuing land negotiations between the U.S. and South Korean governments. In addition, a number of State's recent facilities projects have not incorporated planned projections of increased workload growth and are expected to soon face challenges meeting demand. For example, even though a new embassy construction project is currently under way in Beijing, China, State officials indicated that the number of planned interviewing windows and space in the new facility will be insufficient to allow for future increases in visa demand. In addition, in Shanghai, China, even though the consular section was moved to an off-site location to process visa applications, the post has indicated that it already has reached visa-adjudicating capacity because it cannot add any more interviewing windows in the current space, and construction on a new consulate will not begin until 2009. According to the Director and Chief Operating Officer of the Bureau of Overseas Buildings Operations, the bureau designs and constructs consular facilities with input from Consular Affairs; therefore, Consular Affairs needs to provide more defined assessments of future needs at a facility. The director stated that proper planning and stronger estimates of future needs will help in building facilities that can better address wait times at post over the long term. Since the 9/11 terrorist attacks, Congress, State, and DHS have initiated a series of changes to visa policies and procedures, which have added to the complexity of consular officers' workload and, in turn, exacerbated State's consular staffing and facilities constraints. For example, most visa applicants are required to be interviewed by a consular officer at post, and applicants' fingerprints must be scanned. Furthermore, additional procedural changes are expected, including the expansion of the electronic fingerprinting program to the 10-fingerscan standard, which could further increase the workload of officers and the amount of time needed to adjudicate an application. For example, consular officers in London, which is one of the posts piloting the 10-fingerprint scanners, indicated that the 10-fingerscan standard would significantly affect other posts' operations, given that London had experienced about a 13 percent reduction in the number of applicants processed in a day. However, as each post faces slightly different circumstances, it is unclear whether this reduction would take place at all posts. To lessen the increase in wait times caused by some of these legislative and policy changes, State has promoted some initiatives to aid posts in processing legitimate travelers.
For example, State has urged all posts to establish business and student facilitation programs intended to expedite the interviews of legitimate travelers. State also continues to use Consular Management Assistance Teams to conduct management reviews of consular sections worldwide, which have provided guidance to posts on standard operating procedures, as well as on other areas where consular services could become more efficient. In addition, according to State officials, State has developed a Two-Year Plan, an overall visa processing strategy to coordinate changes to the visa process that will ensure consular officers focus on tasks that can only be accomplished overseas, and is also contemplating other changes to reduce the burden placed on applicants and consular officers. These changes include the following:
the deployment of a worldwide appointment system;
use of a domestic office to verify information on visa petitions;
a revalidation of fingerprints for applicants who have already completed the 10-fingerprint scan; and
the implementation of an entirely paperless visa application process and remote or off-site interviewing of visa applicants.
Furthermore, some posts have taken action to reduce their increased workload. For example, the following actions have been taken:
The consular sections in South Korea and Brazil have established expedited appointment systems for certain applicant groups, including students.
Consular officers in Manila, Philippines, redesigned the flow of applicants through the facility to ease congestion and utilized space designated for the immigrant visa unit to add three new visa processing stations.
Posts in Brazil have waived interviews for applicants who were renewing valid U.S. visas that were expiring within 12 months and had met additional criteria under the law.
The embassy in Seoul, South Korea, implemented a ticketing system that tracks applicants through the various stages of processing and provides notification to consular section management if backups are occurring. The system will also automatically assign applicants to the first available interviewing window in order to balance the workload of applicant interviews among all available interviewing windows (a simplified sketch of this assignment logic appears below).
The embassies in El Salvador and South Korea have conducted workflow studies in order to identify obstructions to efficient applicant processing.
Although State has recently implemented a number of policy and procedural changes to address increased consular workload and is considering additional adjustments, more could be done to assist posts in their workload management. Moreover, the effective practices and procedures implemented by individual posts that help manage workload and assist in improving applicant wait times are not consistently shared with other consular posts. While recognizing that not all the policies and procedures used by posts to help manage visa workload are transferable to other posts, State officials indicated that there is currently no forum available for consular officers to share such ideas; however, State is in the process of developing online capabilities for posts to share visa practices and procedures. With worldwide nonimmigrant visa demand rising closer to pre-9/11 levels, and current projections showing a dramatic increase in demand over time, State will continue to face challenges in managing its visa workload and maintaining its goal of keeping interview wait times under 30 days at all posts.
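To make the Seoul window-assignment approach concrete, the following is a minimal sketch of ours in Python, not the embassy's actual software; the window counts and interview durations are hypothetical. Assigning each ticketed applicant to whichever window frees up first is a standard least-loaded scheduling technique and naturally balances workload across windows.

import heapq

def assign_applicants(num_windows, interview_minutes):
    # Each heap entry is (time the window becomes free, window id);
    # popping the heap always yields the first-available window.
    windows = [(0.0, w) for w in range(num_windows)]
    heapq.heapify(windows)
    assignments = []
    for minutes in interview_minutes:  # applicants in ticket order
        free_at, window = heapq.heappop(windows)
        assignments.append(window)
        heapq.heappush(windows, (free_at + minutes, window))
    return assignments

# Example: 3 windows and 6 applicants with varying interview lengths (minutes).
print(assign_applicants(3, [5, 8, 4, 6, 5, 7]))  # -> [0, 1, 2, 2, 0, 1]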
State has not developed a strategy for addressing increasing visa demand that balances such factors as available resources and the need for national security in the visa process against its goal that visas are processed in a reasonable amount of time. In 2005, State contracted with an independent consulting firm to analyze several factors to help predict future visa demand in 20 select countries, which, according to State officials, constituted approximately 75 percent of the visa workload at the time. The consulting firm identified some demographic, economic, political, commercial, and other factors that it believed would affect visa demand over a 15-year period, beginning in 2005, and estimated a likely rate of growth in demand in those select countries. The study predicted the growth in demand in these countries would range between 8 percent and 232 percent, with Argentina, Brazil, China, India, Mexico, and Saudi Arabia all projected to experience significant growth of more than 90 percent (see fig. 5). State officials indicated that they used the futures study to assist in determining consular resource allocations and in the repositioning of consular staff in State's review of consular positions in February 2007. However, State has not analyzed its 5-, 10-, or 20-year future staffing and other resource needs based on the demand projections found in the study. Although officials indicated that State continues to use the visa demand projections in the Consular Affairs Futures Study to assist in making staffing and resource decisions, some of the study's projections have already proven to underestimate growth in demand. In addition, State has not taken action to update the study to reflect changes in visa workload since 2005. More than half of the countries reviewed are already facing surges in visa demand greater than the levels predicted in the Consular Affairs Futures Study for fiscal year 2006 and beyond. For example, Brazil adjudicated more visas in 2006 than the volume of applications the study projected for Brazil for 2010. In addition, Mexico adjudicated approximately 126,000 more visas in 2006 than the study projected. Also, the Ambassador to India recently stated that all posts in India would process over 800,000 applications in 2007, which exceeds the study's forecasts for India's demand in 2016. The Deputy Assistant Secretary for Visa Services testified to Congress in March 2007 about the need to consider and implement viable long-term solutions for posts with high visa demand and indicated that State needed to ensure it aligns consular assets to meet the demand. In November 2006, State developed a plan for improving the visa process that details several steps it intends to implement, or pilot, by 2009. Although the visa improvement plan can assist State in improving the visa process, and State has taken some steps to address wait times at a number of overseas posts, State has not determined how it will keep pace with continued growth in visa demand over the long term. For example, the strategies in the plan do not identify the resources State would need to increase staff or construct adequate facilities to handle the projected demand increases. Moreover, State has not proposed plans to significantly reduce the workload of available officers or the amount of time needed to adjudicate a visa if such resources are not available.
Without a long-term plan to address increasing demand, State does not have a tool to make decisions that will maximize efficiency, minimize wait times, and strengthen its ability to support and sustain its funding needs. In order to develop a strategy addressing future visa demand, State may want to make use of operations research methods and optimization modeling techniques. These approaches can allow State to develop a long-term plan that takes into account various factors—such as State's security standards for visas, its policies and procedures to maximize efficiency and minimize waits, and available resources. Researchers have developed statistical techniques to analyze and minimize wait times in a wide variety of situations, such as when cars queue to cross toll bridges or customers call service centers. These techniques consider the key variables that influence wait times, such as the likely demand, the number of people already waiting, the number of staff that can provide the service required, the time it takes to process each person, and the cost of each transaction; consider a range of scenarios; and provide options to minimize wait times, bearing in mind the relevant factors. The analyses can, for instance, provide quantitative data on the extent to which wait times could be reduced if more staff were assigned or the time for each transaction were decreased. For example, State could determine the approximate number of additional resources it would need in order to meet its stated goal of providing an appointment to all applicants within 30 days despite increased visa demand (a simplified illustration of this type of analysis appears below). Such a response would require State either to provide additional staff through new hires or to use other staffing methods, such as utilizing civil servants to adjudicate visas overseas. Alternatively, State could require consular officers to process applicants more efficiently and quickly. State may require multiple new facilities to support an increase in the number of Foreign Service officers and allow posts to process more applicants daily. However, if State were to determine that a significant increase in resources for staffing and facilities is not feasible, then State would have to evaluate the efficacy of its 30-day standard for visa appointments or consider requesting Congress to allow for changes in the adjudication process, such as allowing additional flexibility in the personal appearance requirement for visa applicants. It is up to State to determine the specific techniques and the appropriate variables or factors required to optimize its capability to address the demand for visas. Expediting the adjudication of NIV applications is important to U.S. national interests because legitimate travelers forced to wait long periods of time for a visa interview may be discouraged from visiting the country, potentially costing the United States billions of dollars in travel and tourism revenues over time. Moreover, State officials have previously testified that long waits for visa appointments can negatively impact our image as a nation that openly welcomes foreign visitors. Given projected increases in visa demand, State should develop a strategy that identifies the possible actions that will allow it to maintain the security of the visa process and its interest in facilitating legitimate travel in a timely manner.
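To illustrate the type of analysis described above, the following is a minimal, deterministic sketch of ours in Python; the figures are hypothetical, and a full operations research model would add stochastic arrivals, seasonal peaks, facility limits, and costs. It treats a post's appointment queue as first-come, first-served and asks how many officers are needed both to keep pace with daily demand and to meet a 30-day wait target.

from math import inf

def wait_days(backlog, officers, interviews_per_officer_per_day, arrivals_per_day):
    # Approximate appointment wait, in days, for the next applicant: the
    # time needed to interview everyone already ahead in the queue. When
    # demand exceeds capacity, the backlog (and therefore the wait) grows
    # without bound, so the wait is effectively infinite.
    capacity = officers * interviews_per_officer_per_day
    if capacity <= arrivals_per_day:
        return inf
    return backlog / capacity

# Hypothetical post: 36,000 applicants already waiting, 1,200 new
# applicants per day, each officer completing 120 interviews per day,
# and a 30-day wait target.
backlog, arrivals, per_officer, target = 36_000, 1_200.0, 120.0, 30.0

officers = 1
while wait_days(backlog, officers, per_officer, arrivals) > target:
    officers += 1
print(f"Officers needed: {officers}")  # -> 11 (a wait of about 27 days)

Rerunning the same calculation with per-officer throughput reduced by 13 percent, the approximate effect the London pilot reported for 10-fingerscan collection, raises the requirement to 12 officers; this is the kind of quantitative trade-off such modeling can surface for decision makers.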
The development of such a plan will strengthen State's ability to manage visa demand, support and sustain its funding needs, encourage dialogue with relevant congressional committees on the challenges to addressing waits, and promote consensus by decision makers on funding levels and expectations for eliminating visa delays. Furthermore, there are several measures State could take in the short run to improve the wait times for interviews of NIV applicants and the reliability of visa waits information for management purposes. To improve the Bureau of Consular Affairs' oversight and management of visa-adjudicating posts, we recommend that the Secretary of State take the following actions:
Develop a strategy to address worldwide increases in visa demand that balances the security responsibility of protecting the United States from potential terrorists and individuals who would harm U.S. interests with the need to facilitate legitimate travel to the United States. In doing so, State should take into consideration relevant factors, such as the flow of visa applicants, the backlog of applicants, the availability of consular officers, and the time required to process each visa application. State's analysis should be informed by reliable data on the factors that influence wait times. State should update any plan annually to reflect new information on visa demand.
Improve the reliability and utility of visa waits data by defining collection standards and ensuring that posts report the data according to the standards.
Identify practices and procedures used by posts to manage workload and reduce wait times, and encourage the dissemination and use of successful practices.
We provided a draft of this report to the Departments of State and Homeland Security. The Department of Homeland Security did not comment on the draft but provided a technical comment. State provided written comments on the draft that are reprinted with our comments in appendix II of this report. State concurred with our recommendations to enhance methods of disseminating effective management techniques, to improve the reliability and utility of visa waits data, and to develop a strategy to address increases in visa demand. State noted that any appropriate strategy to address worldwide increases in visa demand must address the need for resources to meet national security goals for both travel facilitation and border security. Furthermore, State said that any suggestion of trade-offs between these two goals would be inappropriate. Clearly, we agree that, in developing a strategy, State must maintain its security responsibilities while also facilitating legitimate travel to the United States. Our report does not suggest that one of these goals should be sacrificed at the expense of the other. State also provided a number of technical comments, which we have incorporated throughout the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees. We will also send copies to the Secretary of State and the Secretary of Homeland Security. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions are listed in appendix III. We reviewed (1) Department of State (State) data on the amount of time visa applicants were waiting to obtain a visa interview, (2) actions State has taken to address visa wait times, and (3) State's strategy for dealing with projected increases in visa demand. To accomplish our objectives, we interviewed officials from State's bureaus of Consular Affairs, Human Resources, and Overseas Buildings Operations. We also interviewed officials from the Department of Commerce's Office of Travel and Tourism Industries. In addition, we observed consular operations and interviewed U.S. government officials at 11 posts in eight countries—Brazil, China, Costa Rica, El Salvador, Honduras, India, South Korea, and the United Kingdom. For our site visits, we selected posts that (1) had recently reported wait times of 60 days or more, (2) had previously experienced long-standing wait time problems, (3) were projected to experience a large future volume of visa adjudications, or (4) were able to process a large volume of visas with little or no wait for applicant interviews. During these visits, we observed visa operations; interviewed consular staff and embassy management about visa adjudication policies, procedures, and resources; and reviewed documents and data. In addition, to obtain a broader view of visa workload, consular staffing and facilities, as well as other issues affecting visa wait times in consular sections, we selected an additional 21 posts for a document review based on the same selection criteria we used for selecting our site visits. Our selection of posts was not intended to provide a generalizable sample but allowed us to observe consular operations under a wide range of conditions. To determine the amount of time visa applicants were waiting to obtain a visa interview, we analyzed interview wait times data for applicants applying for visas for temporary business or tourism purposes, but not for other types of visas, including student visas. Specifically, the data provided to us showed the minimum and maximum wait times for visa-issuing posts for the period January 2006 through February 2007. Data were also provided for the same period that indicated the number of posts that reported maximum wait times of 30 or more days in at least 1 month and the number that reported wait times in excess of 30 days for this entire 6-month period. In addition, at various points in time, we received information on the most recently reported wait times for visa-issuing posts and the date of last entry. To determine the reliability of State's data on wait times for applicant interviews, we reviewed the department's procedures for capturing these data, interviewed the officials in Washington who monitor and use these data, and examined data that were provided to us electronically. In addition, we interviewed the corresponding officials from our visits to select posts overseas and in Washington, who input and use the visa waits data. We found that data were missing throughout the 13-month period because posts were not reporting each week. Based on our analysis, we determined that the data were not sufficiently reliable to determine the exact magnitude of the delays because the exact number of posts with a wait of 30 days or more at any given time could not be determined.
Consular officials who manage consular sections overseas acknowledged that many posts are not reporting on a weekly basis. However, we determined that the data are sufficiently reliable for providing a broad indication of posts that have had problems with wait times over a period of time and for identifying general trends in the number of posts with such problems over the 13 months we reviewed.

To determine the actions State has taken to address visa wait times and its strategy for addressing waits, we analyzed consular policies and procedures cables and staffing and facilities plans developed by the department. In addition, we analyzed consular workload and staffing data. We also reviewed the methodology for the Change Navigations Study and found it to be one of a number of fairly standard approaches that are available for a forecasting exercise of this nature. However, we did not attempt to replicate the methodology or test alternative models that relied on different techniques, data, or assumptions. We conducted our work from August 2006 through May 2007 in accordance with generally accepted government auditing standards.

The following are GAO's comments on the Department of State's letter dated June 25, 2007.

1. State's Deputy Assistant Secretary for Visa Services has acknowledged that visa applicants may be deterred from visiting the United States by long appointment wait times and that this could have negative economic consequences and could adversely affect foreign opinions of our country. The Department of Commerce points out that foreign visitors bring economic benefits to our country in excess of $100 billion each year. We agree that it is difficult to correlate visa wait times with specific dollar value losses in travel and tourism revenues. However, given that wait times for interviews are very high at a number of posts, we believe that the loss in economic benefits to our country over time could potentially be significant. Our report acknowledges that visa issuances have increased over the last several years.

2. We believe our report, as well as past GAO reports, shows that long waits for visa interviews have been a long-standing problem for the department. Furthermore, State's data show that there have been long waits at some posts during peak and nonpeak periods (see fig. 2) and that long waits are not solely cyclical in nature. State acknowledges a number of cyclical factors that affect visa demand and resource availability, such as staffing gaps and the personnel transfer cycle. We believe these and other factors can contribute to chronic as well as cyclical backlogs. In addition, we have modified the draft to acknowledge that wait times may recur cyclically as well as unexpectedly. However, the report points to the need for a strategy for addressing such delays, and State has not developed a strategy to address either cyclical or chronic visa waits.

3. We agree that increasing consular staff levels may ultimately be necessary to address increasing visa demand. This is why we recommended that State develop a strategy to address wait times and, in doing so, identify its resource needs. Such actions could promote consensus by decision makers on funding levels and expectations for eliminating visa delays.

4. We agree that State has taken a number of actions to share information with posts on reducing wait times.
However, as noted in the report, during our fieldwork we found instances where posts were not aware of certain practices and procedures implemented by other posts to help manage workload and improve applicant wait times. We understand that all practices may not be transferable to all posts, but we believe that all posts would benefit from knowing the options that are available for more efficient operations.

5. Our report discusses State's efforts to estimate visa demand and gives ample credit to the 2006 repositioning exercise, which shifted some consular staffing to posts with the greatest need. Furthermore, neither the annual consular package exercise nor the Consular Affairs Future Study estimated the resources needed to meet long-term future demand. Our point is that State has not estimated what resources will be required to keep up with the increase in future demand that State forecasts. Because these resources could be substantial, we think it is incumbent on State to develop a long-term strategy now.

6. We based our statements on the testimony of State's Deputy Assistant Secretary for Visa Services before Congress in March 2007, where he stated, "we strive to constantly strike the right balance between protecting America's borders and preserving America's welcome to international visitors." We acknowledge that, in striking this balance, security is the primary concern. Clearly, the time it takes to process an application affects how many applications an officer can process in a given day. We are not suggesting that State sacrifice security in order to avoid visa waits, but rather that State develop a plan for how it will cope with rising demand, taking these various circumstances and responsibilities into consideration.

7. We agree that these are important factors and have modified the text accordingly.

8. We understand that there are spikes in visa demand for various reasons, some of which are difficult to predict. However, State is aware that such spikes in visa demand can occur. We believe that State needs a strategy to address growing visa demand that includes consideration of how it will meet unanticipated spikes in demand. The development of such a plan would allow State to use its visa surge teams of temporary duty staff to deal with unanticipated spikes, rather than using them to handle anticipated increases in demand.

9. We have modified language in the report. State's comment reinforces our belief that it is time for State to develop a strategy for addressing long-term visa demand. If State determines it needs more staff to handle projected demand, then it should detail these needs in its strategy.

10. We based our comment on a cable prepared by the U.S. Embassy in London. State acknowledges that the 10-fingerprint requirement could reduce the number of applicants processed. Applicants are not interviewed until after their fingerprints are taken, so a reduction in the number of applicants processed would result in a reduction in the number of applicants interviewed. We have modified language in the draft to clarify our point.

11. We have incorporated information on the Visa Office's Two-Year Plan into the report.

12. State does not have a plan that outlines how it will cope with growing visa demand, which is why we recommend that State develop a strategy that identifies the actions it will take to address increasing demand. We believe that there may be opportunities to achieve efficiencies at some posts and that more resources may be needed.
The short-term, temporary measures that State is currently taking to address visa demand are not adequate to handle the projected visa demand. We suggest that State take advantage of available analytical tools to identify options for the development of an overall strategy that will address the projected increase in visa demand worldwide. A wide range of sophisticated techniques are available to help manage customer waiting times in many areas of government operations, such as testing drivers at departments of motor vehicles and treating patients at public health clinics; one such technique is sketched following the staff acknowledgments below. Our report does not recommend that State reduce the processing time at the expense of security. We agree that State must maintain its security responsibilities while facilitating legitimate travel to the United States.

In addition to the individual named above, John Brummet, Assistant Director; Joe Brown; Joe Carney; Martin de Alteriis; Jeff Miller; Mary Moutsos; and Melissa Pickworth made key contributions to this report.
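To illustrate the class of queueing techniques referred to above, the following minimal sketch, in Python, applies the standard Erlang C formula to estimate how the chance of waiting and the average wait change as interviewing officers are added. It is illustrative only and is not GAO's or State's model; the arrival rate, interview rate, and staffing figures are hypothetical.

# Illustrative only: a minimal M/M/c (Erlang C) queueing sketch; the rates
# and staffing levels below are hypothetical, not State's actual figures.
import math

def erlang_c_wait(arrival_rate, service_rate, officers):
    """Return (probability an applicant waits, average wait in hours)
    for an M/M/c queue with the given arrival rate (applicants/hour),
    service rate (interviews/hour per officer), and officer count."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / officers                       # utilization; must be < 1
    if rho >= 1:
        raise ValueError("demand exceeds capacity; waits grow without bound")
    # Erlang C: probability that an arriving applicant must wait
    top = a**officers / (math.factorial(officers) * (1 - rho))
    bottom = sum(a**k / math.factorial(k) for k in range(officers)) + top
    p_wait = top / bottom
    avg_wait = p_wait / (officers * service_rate - arrival_rate)
    return p_wait, avg_wait

# Hypothetical post: 60 applicants/hour; each officer completes 7 interviews/hour.
for c in range(9, 13):
    p, w = erlang_c_wait(60, 7, c)
    print(f"{c} officers: {p:.0%} of applicants wait; average wait {w*60:.1f} minutes")

An analysis of this kind could help estimate, under stated assumptions about applicant flow, how many interviewing officers a post would need to keep waits within a target.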
After the 9/11 terrorist attacks, Congress and the Department of State (State) initiated changes to the visa process to increase security, but these changes also increased the amount of time needed to adjudicate a visa. Although maintaining security is of paramount importance, State has acknowledged that long waits for visas may discourage legitimate travel to the United States, potentially costing the country billions of dollars in economic benefits over time and adversely influencing foreign citizens' opinions of our nation. GAO testified in 2006 that a number of consular posts had long visa interview wait times. This report examines (1) State's data on visa interview wait times, (2) actions State has taken to address wait times, and (3) State's strategy for dealing with projected growth in visa demand.

According to State, the amount of time that applicants must wait for a visa interview has generally decreased over the last year; however, some applicants continue to face extensive delays. State's data showed that between September 2005 and February 2006, 97 consular posts reported maximum wait times of 30 or more days in at least 1 month, whereas 53 posts reported such waits for the same period 1 year later. However, despite recent improvements, at times during the past year, a number of posts reported long wait times, which could be expected to recur during future visa demand surges. In 2007, State announced a goal of providing applicants an interview within 30 days. Although State's data are sufficiently reliable to indicate that wait times continue to be a problem at some posts, GAO identified shortcomings in the way the data are developed that could mask the severity of the problem.

State has implemented steps to reduce wait times at several posts, including using temporary duty employees to fill staffing gaps at some posts and repositioning some consular positions to better utilize its current workforce. However, these measures are not permanent or sustainable solutions and may not adequately address the increasing demand for visas worldwide. In addition, State has made improvements to several consular facilities and has identified plans for improvements at several other posts with high workload. Some posts have utilized procedures that enable them to process applications more efficiently. However, not all of these procedures are shared among posts in a systematic way, and, therefore, not all posts are aware of them.

State has not determined how it will keep pace with growth in visa demand over the long term. State contracted for a study of visa demand in select countries over a 15-year period beginning in 2005, which projected that visa demand will increase dramatically at several posts. However, at some posts, demand has already surpassed the study's projected future demand levels. State has not developed a strategy that considers such factors as available resources and the need for maintaining national security in the visa process, along with its goal that visas be processed in a reasonable amount of time. Given the dramatic increases in workload expected at many posts, without such a strategy State will be challenged in achieving its current goal for wait times.
SOCOM is one of nine combatant commands directly responsible to the Secretary of Defense. The command was established by the National Defense Authorization Act for Fiscal Year 1987 and codified in 10 U.S.C. § 167. As a functional command, SOCOM's primary responsibility is to prepare special operations forces (SOF) to carry out assigned missions. When appropriate, SOCOM may be called upon to conduct special operations activities unilaterally or provide support to other U.S. military forces. In 2003, the Secretary of Defense expanded SOCOM's role to include leading DOD's Global War on Terrorism (GWOT) operations. In this central role, SOCOM plans, directs, and executes special operations in the conduct of the GWOT in order to disrupt and destroy terrorist networks that threaten the United States, its citizens, and its interests worldwide. SOCOM also organizes, trains, and equips SOF warriors provided to the geographic combatant commanders and to the American ambassadors and their country teams. In keeping with this expanded role, DOD has begun to re-tool SOCOM from primarily a supporting command into a command responsible for planning, synchronizing, and executing missions in the GWOT. SOCOM is headquartered at MacDill Air Force Base in Tampa, Florida, and has four component commands and one sub-unified command located at different military bases. The Marine Corps Special Operations Command joined SOCOM on February 24, 2006. Table 1 shows the end strength of each of the component commands.

Congress created SOCOM to improve the ability of the United States to conduct special operations. Congress vested the command with the responsibility and the authority for the development and acquisition of SOF-peculiar equipment, the authority to exercise the functions of the head of agency, and the authority to execute its own budget. SOF-peculiar equipment is defined as equipment, materials, supplies, and services required for SOF activities for which there is no service-common requirement. According to SOCOM, these are limited to items and services initially designed for, or used by, SOF until adopted for service-common use by other DOD forces; modifications approved for application to standard items and services used by other DOD forces; and items and services critical for the immediate accomplishment of a SOF activity. To fund the acquisition of SOF-peculiar equipment, SOCOM was also given responsibility for supervising a separate Major Force Program-11 budget account. Congress determined that a dedicated funding mechanism was necessary because, in the past, the military departments had tended to give lower priority to SOF's equipment needs than to their own needs. For fiscal year 2006, SOCOM's total budget was $7.2 billion, of which $1.9 billion was for development- and acquisition-related purposes.

In acquiring SOF equipment, SOCOM falls under the same DOD acquisition policies, guidelines, and workforce requirements that apply to the military departments and other defense agencies. The military departments and SOCOM are governed by DOD's 5000 series policies for the Defense Acquisition System. Similarly, each military department, along with SOCOM, has its own policies and procedures to implement higher level directives and guide the management of acquisition activities within the military departments or command. SOCOM's acquisition workforce training and tenure are governed by the Defense Acquisition Workforce Improvement Act (DAWIA), enacted in 1990.
The Act specifically created a formal acquisition corps and defined the educational, experience, and tenure criteria needed for key positions, including program managers, contracting officers, and other personnel involved in the acquisition process. According to DOD, members of the acquisition corps may earn three progressive certification levels—basic (Level I), intermediate (Level II), and advanced (Level III). Each certification level comprises a combination of education, experience, and training elements. Certification recognizes the level to which a member of the acquisition workforce has achieved the functional and core acquisition competencies required by a specific career field. Members of SOCOM's acquisition workforce are required to meet the same training and certification requirements as those in the military departments.

SOCOM's approach to acquisition management also has some distinctive features. The command is unique in DOD in that it plans, funds, acquires, and sustains weapon systems all under one roof. Specifically, all the key entities involved in the acquisition life-cycle process—requirements developers, comptroller, contracting personnel, logistics planners, and program offices—are colocated. SOCOM also uses a centralized approach to assess and prioritize requirements and select programs based on competing needs and available resources. SOCOM's customers—the SOF warriors—are directly involved in determining what weapon systems are pursued. In addition, SOCOM can arrange to transfer program management and milestone decision authority responsibilities to one of the military departments to execute a program on behalf of SOCOM. SOCOM has done this with many of its programs that involve some modification of military department-provided equipment or in cases where the military departments may have greater technical and program management expertise. Further description of how SOCOM is structured to manage its acquisitions is provided in appendix II.

SOCOM has undertaken a diverse set of acquisition programs since January 2001 that are consistent with the command's mission to address unique SOF needs and those needs for which there is no service-common requirement. SOCOM has committed about $6 billion to date to these programs. The vast majority of SOCOM's acquisition programs are acquisition category (ACAT) III programs, have short acquisition cycles, and use modified commercial off-the-shelf and nondevelopmental items or modify existing service equipment and assets. In acquiring systems, SOCOM has emphasized the need for "80 percent" solutions that provide improved capabilities incrementally to the warfighter in reasonable time frames, rather than major development efforts that require advanced technologies and years of research and development. The Advanced SEAL Delivery System (ASDS) and CV-22 programs, two larger development efforts, were started in the 1990s. Since 2001, SOCOM has undertaken only one ACAT I level program, which was to develop a common avionics package for its fleet of transport, tanker, and gunship aircraft. SOCOM's acquisition plans for the future—as reflected in its current Future Years Defense Program—continue to maintain this SOF-peculiar focus.

SOCOM initiated 86 acquisition programs from 2001 to 2006 to meet SOF-peculiar requirements, which can be grouped into five major areas: rotary wing, fixed wing, maritime systems, information and intelligence systems, and special operations forces warrior equipment (e.g., vehicles and weapons). Table 2 shows the number and funding for these programs by each major grouping.
As table 3 shows, 76 of SOCOM's 86 acquisition programs are ACAT III level in size, and the majority of these programs use nondevelopmental and commercial off-the-shelf items to meet SOF-peculiar needs. A further breakdown of these programs, depicted in table 4, indicates that most cost less than $25 million. The small number of larger, ACAT I and II level programs are fixed and rotary wing systems, costing $200 million or more. These larger programs involve modifications to existing platform systems and more substantial technology development efforts. The one ACAT I level program SOCOM has initiated since 2001—the Common Avionics Architecture for Penetration (CAAP) program—is intended to provide specialized capabilities for MC-130H and AC-130H/U transport, tanker, and gunship aircraft, including low probability of detection and improved terrain following and avoidance radar. Several key examples of the types of programs SOCOM has undertaken are described below.

The leaflet delivery system is an ACAT III program that was fielded by SOCOM at a cost of about $20 million. The system uses a fully reusable, commercial off-the-shelf, unmanned aerial vehicle as a component of the autonomously guided parafoil system it has developed. The delivery system is capable of delivering leaflets or psychological operations materials to target audiences in peacetime and in war. It took SOCOM about 8 months to field this capability to the SOF warrior. The system can be ground launched from the back of a high-mobility multipurpose wheeled vehicle and air launched from a C-130, C-141, or C-17 cargo aircraft. Figure 1 below shows the leaflet delivery system.

SOCOM's current family of sniper rifles was acquired as nondevelopmental and commercial off-the-shelf items, which, according to the program office, enables rapid acquisition of an initial capability as well as efficient spiral development of enhanced capabilities as mission requirements direct. SOCOM currently has four rifles in its family of sniper rifles: the MK 11 (7.62mm Sniper Support Rifle), the MK 12 (5.56mm Special Purpose Rifle), the MK 13 (.300 Winchester Magnum), and the MK 15 (.50 caliber). Each fires only one type of ammunition, and the rifles have varying effective ranges. Two of the sniper rifles, the MK 11 and MK 12, will be replaced by the Sniper Support Rifle variant of the SOF Combat Assault Rifle, which is an ACAT III program consisting of a modified commercial off-the-shelf system and is estimated to cost about $50 million. The new sniper rifle is a modular design, and the caliber of the rifle can be changed by replacing the barrel, bolt, and trigger modules. The life expectancy of the SOCOM rifles shown in figure 2 is about 5 years. Therefore, according to the SOF Warrior program office, SOF plans a phased replacement of like or enhanced capability every 5 years.

SOCOM has an ACAT II program underway, estimated to cost about $200 million, that modifies the Army's service-common CH-47 helicopter to meet SOCOM's SOF-peculiar requirements. Several features on the aircraft are SOCOM-peculiar, such as the long aerial refueling probe on the front of the aircraft, the standardized extended range fuel tank, and the common aviation architecture systems cockpit. The CH-47 helicopter, when modified by SOCOM, becomes an MH-47G helicopter that provides SOCOM with a heavy assault helicopter with the latest avionics, sensors, aircraft survivability features, and weapons systems.
All MH-47 helicopters in SOCOM's inventory—which includes the MH-47D and the MH-47E aircraft—will be converted to the MH-47G configuration over time. According to SOCOM, at least two of the SOF-peculiar features on the MH-47G helicopter were adopted by the Army and are now service-common features. SOCOM developed standardized engines and an enhanced air transportation kit that were designed to meet a SOF-peculiar requirement. However, once they were operational, the Army decided it could use the capability as well and adopted it. Figure 3 shows some of the basic modifications to the CH-47 that were provided by the Army and those that were provided by SOCOM.

In addition to regular acquisition programs, SOCOM has acquired various equipment and materiel to meet urgent needs related to planned and ongoing military operations. According to SOCOM officials, urgent needs qualify for consideration if they meet one of two criteria: a potential mission failure or loss of life. Because of the urgency of these needs, SOCOM's focus is on acquiring readily available equipment in short time frames. Since 2001, SOCOM has addressed about 50 urgent mission needs and fielded equipment to its deployed SOF warriors at a cost of about $339 million. For example, to address an urgent operational need to move personnel and materiel more effectively in Afghanistan and Iraq without attracting local attention or projecting an overt military presence, SOCOM acquired and modified about 150 commercial off-the-shelf 4x4 trucks, sedans, and sport utility vehicles and fielded them in about 4 weeks. Figure 4 below shows an example of a modified commercial truck used by SOCOM. According to SOCOM officials, urgent needs are not to be used as a means of circumventing or accelerating the normal program approval or funding processes. To that end, equipment acquired via the urgent needs process is fielded and sustained only for the duration of the military operation. The sponsoring component commander is responsible for determining the post-operation disposition of any equipment acquired as a result of an urgent needs request.

SOCOM has also fielded critical combat-related technologies through DOD's Advanced Concept Technology Demonstration program. DOD initiated the program in 1994 to help get new technologies that meet critical military needs into the hands of users faster and at less cost than the traditional acquisition process. Over the past 5 years, SOCOM has fielded seven Advanced Concept Technology Demonstration programs at a cost of about $385 million. For example, as shown in figure 5, SOCOM fielded the MANPACK radio threat detector through this program. The MANPACK is designed to provide the basic capability to identify and locate threat and friendly emitters, locate unknown emitters, and provide situational awareness to the SOF operator with little or no interaction from the user.

SOCOM's acquisition plan for the future—as reflected in its current Future Years Defense Program—continues to maintain a focus on providing SOF-peculiar equipment. The acquisition programs SOCOM plans to start over the fiscal year 2007 to 2011 time frame are similar to the programs that SOCOM is currently acquiring. There are 13 acquisition programs remaining in SOCOM's fiscal year 2007 to 2011 plan, and all are at the ACAT III level. These programs continue to be small in scale and low in cost and will employ modified commercial off-the-shelf and nondevelopmental items.
For example, the SOF Combat Assault Rifle is among the remaining 2007 to 2011 programs and is a SOF-peculiar, nondevelopmental item.

Fifty-one (about 60 percent) of the 86 acquisition programs SOCOM has undertaken since 2001 have progressed as planned, either staying within original cost and schedule estimates or experiencing cost increases unrelated to progress, such as increases for adding quantities to support ongoing combat operations. The other 35 programs (40 percent) have experienced or are likely to experience cost increases and schedule delays ranging from modest to, in a number of cases, significant because of a range of technical, programmatic, or funding issues. Although fewer in number, these programs make up about 50 percent of SOCOM's total funding for its acquisition programs. Ten of the programs have an estimated schedule slip of at least 1 year, and several programs were canceled because of a need to fund higher priorities or because of technical issues encountered in developing the weapon system. The programs that have not progressed as planned tend to be the larger, more complex platform-based programs SOCOM is developing and programs in which SOCOM is dependent on the military departments for the basic platform or for equipment and/or other resources, such as program management support. Programs that are smaller, with less development risk, have had better results.

As shown in table 5, there are some differences in the types of programs that are and are not progressing as planned, but the overall picture is mixed. In terms of the number of programs, fixed wing and SOF warrior systems comprise a large proportion (25 out of 35) of those that are not meeting original cost and schedule estimates. However, when viewed by the amount of funding allocated to these programs, fixed and rotary wing systems make up the majority ($1,844 million out of $2,521 million) of those that are not progressing as planned. We were not able to put these results in context, that is, to compare them with DOD as a whole to determine whether SOCOM's performance was typical or atypical. This is primarily because DOD does not keep aggregate performance data on ACAT III programs, which comprise most of SOCOM's acquisition portfolio.

Many of the fixed and rotary wing programs are the larger programs in SOCOM's portfolio, involving modifications to existing military-service or special-operations platform systems. As such, these programs require more systems engineering and design/integration effort than the smaller programs being acquired by SOCOM. For example, the estimated costs for SOCOM's fixed-wing AC-130U 30-millimeter gun-modification program have increased 92 percent because of technical and design issues, and the program has been deferred until fiscal year 2008, when additional funding may be available. Likewise, the AC-130U+4 program, which is intended to modify the C-130 aircraft into a side-firing gunship, has been delayed by 7 months because of technical issues with the aircraft's configuration and design.

Many of SOCOM's programs that are not progressing as planned are also programs in which the military departments are involved in a management capacity. As shown in table 6, 22 of the 35 programs that have not stayed within original cost and schedule estimates have one of the military departments in a management role—either as the milestone decision authority or program manager or both. All of the fixed and rotary wing programs that are not progressing as planned are in this category.
In contrast, SOCOM itself manages its five largest information and intelligence system programs, yet these programs also are not progressing as planned.

In assessing how programs have progressed, we identified a small number of programs (8 out of 86) that SOCOM canceled or deferred because of a need to fund higher priorities or because of technical issues encountered during development. Most of these programs were canceled early, before significant funding and time were committed. In a few other programs, however, we found that significant time and effort were invested before they were canceled. For example, SOCOM's High Power Fiber Optic Towed Decoy program, which was being developed to provide a fiber optic towed decoy capability to SOCOM's fleet of AC- and MC-130 aircraft, was canceled after about $85 million had been spent because of higher funding priorities. SOCOM's one ACAT I program, the Common Avionics Architecture for Penetration (CAAP) program, was also subsequently terminated. The CAAP program, which was managed by the U.S. Air Force, was being designed to add SOF-peculiar avionics capabilities, including enhanced abilities to follow terrain and avoid detection while using Air Force-provided radar, to the MC-130H and AC-130H/U aircraft covered by the U.S. Air Force's Avionics Modernization Program (AMP). However, SOCOM terminated all funding for the CAAP program in its fiscal years 2008 to 2013 program objective memorandum. SOCOM determined that it was cost-prohibitive to continue the program after the Air Force ran into problems with the AMP program and determined that the cost to complete development of both AMP and CAAP would more than double the original estimates.

SOCOM faces management and workforce challenges in ensuring its acquisition programs are completed on time and within budget. Urgent requirements arising from SOCOM's role in Iraq and Afghanistan and its new role in the GWOT have challenged and will continue to challenge SOCOM's ability to balance near- and long-term needs against available funding resources. For example, in order to fund almost 50 urgent deployment acquisitions in the past 5 years, SOCOM has had to reallocate $259 million from existing and planned acquisition programs. Additionally, even though SOCOM employs elements of a knowledge-based acquisition approach, it does not apply them consistently, and some programs have started without a good match between requirements and resources. SOCOM also has difficulty tracking progress on programs for which it has delegated management authority to the military departments and addressing problems early in these programs. Moreover, a key SOCOM tool for managing its acquisition programs has not been consistently maintained with up-to-date information. In addition, SOCOM has encountered workforce challenges, such as being able to hire civilian personnel in reasonable time frames and ensuring that its military personnel are fully compliant with DOD standards.

Addressing high-priority urgent needs from the field will continue to challenge SOCOM's ability to complete existing programs on time and within budget. In its roles in Iraq, Afghanistan, and the GWOT, SOCOM will continue to fulfill urgent needs with acquisition programs. But because of the short time frames involved, funding for these programs is not built into the budget. In the past 5 years, SOCOM reallocated about $259 million from budgeted programs to fund almost 50 urgent deployment acquisitions.
In fiscal years 2006 and 2007, SOCOM did begin to receive money from Congress in its budget—about $80 million and $22 million, respectively—to help defray some of the costs of its urgent deployment acquisition programs. According to SOCOM's Acquisition Executive, urgent deployment acquisitions are expected to continue over the next several years, and the command anticipates requesting about $20 million to $25 million each year from 2008 to 2013 to help pay for these needs. Although funding shifts are disruptive in SOCOM, as they are in the military departments, SOCOM's strategic planning structure for assessing and selecting programs is well suited for making the trade-offs among priorities needed to address urgent needs.

SOCOM also has difficulty tracking progress and addressing problems early in programs for which it has delegated management authority to the military departments. Having access to all the military departments provides SOCOM the means to leverage resources and expertise that may not reside at SOCOM, such as program management, engineering and technical services, testing and evaluation support, and logistical support. However, in some cases in which SOCOM has relied on the military departments for technical or basic capabilities, its programs have been adversely affected by delays in the department-provided capabilities. When delays occur, there tends to be a cascading effect on SOCOM programs. For example, initial schedule delays in the U.S. Air Force's AMP for C-130 aircraft resulted in delays in SOCOM's ability to acquire the CAAP capability for the C-130 aircraft. The AMP program was to provide a basic cockpit configuration and avionics capability for different C-130 aircraft, and SOCOM's CAAP capability would provide additional avionics capabilities for SOF missions. The AMP program encountered technical and integration problems during installation trials and is now being restructured. Because of delays and cost growth with AMP, the cost to complete the CAAP program increased significantly, leading to SOCOM's decision to cancel the CAAP program and defer this capability.

According to SOCOM's acquisition executive, although SOCOM has overarching memorandums of agreement establishing program management arrangements with each of the military departments, not all of the agreements are signed at the appropriate levels of authority within the military departments. While the agreement with the Army is signed by the Secretary of the Army, the Air Force and Navy agreements are signed by the service chiefs. This is a challenge to SOCOM because acquisition and budget authority resides with the military department secretary and not with the service chief. When problems occur in programs managed by the Air Force or Navy, SOCOM may have less standing than it would with the Army to make a case that the departments are not living up to the memorandums of agreement. SOCOM also acknowledges that memorandums of agreement for specific programs—particularly the larger, more complex programs SOCOM delegates to the military departments—have not been detailed enough in laying out the roles, responsibilities, and expectations for executing programs, nor in laying out how SOCOM will be able to track progress and participate in regular program reviews with the military departments. While written agreements by themselves may not result in better SOCOM-military department programs, they are important in that they provide a foundation for effective program management.
SOCOM is currently taking steps to update the written agreements with the military departments and is also examining whether some of its programs would be better executed under SOCOM management.

SOCOM employs elements of a knowledge-based acquisition approach, but it does not apply them consistently. We have frequently reported on the need to develop a solid, executable business case before committing resources to a new product development effort. A business case should be based on DOD's acquisition policy and lessons learned from leading commercial firms and other successful DOD programs. Our work has shown that the business case in its simplest form demonstrates evidence that (1) the warfighter's needs are valid and that they can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources—that is, proven technologies, design knowledge, adequate funding, and adequate time to deliver the product when it is needed. We found that although SOCOM has a systematic strategic planning process to prioritize and select programs, it has started some programs, particularly the larger and more complex programs, without ensuring that there was a solid match between the requirements and the resources needed to complete development. For example, SOCOM terminated the Common Avionics Architecture for Penetration program because of excessive cost growth resulting from technical problems and schedule delays with the Air Force's Avionics Modernization Program. While SOCOM attributes the cause of program problems in part to poor contractor performance, it also acknowledges that technology challenges and development costs were significantly underestimated when the program started. In addition, the Navy-managed Advanced SEAL Delivery System (ASDS), which has been one of SOCOM's largest investments since the program started in the mid-1990s, encountered significant problems because the capabilities required for the delivery system outstripped the developer's resources in terms of technical knowledge, time, and money. Although the first boat was accepted for operational use in 2003, it did not meet technical or performance requirements. Currently, reliability issues with the boat are being examined, and an assessment of alternate material solutions is underway to determine how best to address the remaining operational requirements.

SOCOM's tool for managing its acquisition programs—called the Special Operations Acquisition and Logistics Information System (SOALIS)—lacks sufficient oversight and maintenance. At the time of our review, we found that information for most programs was out of date and that some programs had not been updated in years, even though program executive officers and program directors are required to keep SOALIS accurate and up to date on at least a monthly basis. Further, we found no enforcement mechanism to ensure oversight of this important management tool. According to SOCOM's Standard Operating Procedures Directive, SOALIS is intended to give SOCOM decision makers and stakeholders essential information on the status and progress of ongoing acquisition efforts. Although regular progress reviews take place on individual programs, the lack of up-to-date information on all programs can impede SOCOM's ability to conduct effective oversight.

SOCOM's acquisition workforce has remained relatively small for many years, but plans are underway to increase the size of the acquisition workforce by about 75 percent by the end of 2008.
This expansion is intended to address the growth in acquisition work that has taken place over the past several years, as well as expected future growth in acquisitions stemming from SOCOM's expanded role in the GWOT. Since 2001, SOCOM's workforce has remained fairly stable, growing by only 10 positions to a total of 185 government (civilian and military) acquisition employees. SOCOM plans to expand its governmental acquisition workforce to about 300 employees. Currently, the governmental workforce is heavily supplemented by contractors. Specifically, contractors comprise about two-thirds of the overall workforce supporting SOCOM's acquisition activities. The contractor support includes logistics, training, education, and testing support, and engineering and technical services. In order to prepare for the upcoming workforce expansion, SOCOM is conducting a manpower study. The study, which is scheduled to be completed in fiscal year 2008, is designed to assess the composition of the workforce and determine the workload associated with each SOCOM position—including all acquisition positions—to aid SOCOM officials in their placement of newly hired government employees. Also, to lower costs, SOCOM's acquisition executive anticipates a reduced reliance on contractors in conjunction with the expansion of the governmental acquisition workforce. The size of the reduction will depend on the outcome of the ongoing manpower study and resource considerations.

As can be seen in table 7, the majority of SOCOM's current civilian acquisition workforce has attained DOD's Level III certification. Additionally, SOCOM's senior-level civilian acquisition workforce at the GS-14, GS-15, and senior executive service levels, along with those assigned to critical acquisition positions that require Level III certification, have all earned Level III certification. We found that the vacancy rate for civilian acquisition positions is about 10 percent and that the bulk of the unfilled positions are at the GS-14 and GS-15 levels, leaving vacancies in some key management positions. The command has encountered challenges in filling vacancies in these upper-level civilian acquisition workforce positions. According to SOCOM's acquisition executive, the difficulty in hiring qualified personnel to fill these critical vacancies is due, in part, to the lengthy process required to hire qualified acquisition personnel. SOCOM uses the Air Force personnel system as its executive hiring agency, and this process has taken as long as 240 days to fill upper-level positions.

The Level III certification rate of SOCOM's military acquisition workforce is not as high as that of its civilian counterpart. This is particularly true for critical acquisition positions, which usually involve significant supervisory or management responsibilities (e.g., program manager). As table 8 shows, about 40 percent of these positions are held by officers who do not meet the Level III certification standards required by DOD. While DOD guidelines allow acquisition officers to attain the appropriate certification up to 24 months after being assigned to a critical position, we found that 3 of SOCOM's 22 military officers filling these positions still lacked the required certification. Although waivers are permitted on a case-by-case basis, at the time of our review SOCOM did not have a process in place to review and grant required waivers for those officers not in compliance with DOD standards.
One of the challenges SOCOM faces in filling military acquisition positions is that the command often requires military operational experience and/or specialized skills. According to SOCOM, Army and Navy policies require their acquisition officers to have operational assignments before being assigned to the acquisition career field, but Air Force officers do not have to gain prior operational experience. In addition, some of the acquisition positions at SOCOM require unique special operations experience. For instance, some of the Navy's acquisition positions at SOCOM are designated to be filled by Navy SEAL personnel, a group in short supply and generally not trained in acquisition. Because SOCOM relies on the services to provide military acquisition personnel to the command, it runs the risk of not being able to fill acquisition positions if it turns down candidates sent forward by the services who do not meet all the position requirements.

Thus far, SOCOM has done well with small acquisitions that modify readily available commercial technologies and nondevelopmental items. It has had more difficulty delivering the more complex systems that involve significant development and reliance on the military departments. As SOCOM prepares for more growth in its acquisition function to meet the expanding needs of special operations forces, it will be important for the command to leverage its experience into better results in the future. For those more complex acquisitions that must be undertaken, opportunities exist for SOCOM to improve its results by ensuring that better business cases exist before embarking on such acquisitions, especially if they depend on acquisitions being managed by other military departments. In addition, the foundation for all acquisitions can be improved by (1) ensuring that the size and composition of the workforce is a good match for the acquisition workload undertaken by SOCOM and (2) having a sound management information system to track programs.

To better position SOCOM to achieve the right acquisition program outcomes, we recommend that the Secretary of Defense take the following three steps:

Ensure that SOCOM establishes sound business cases for its more complex and military department-managed acquisition programs. Integral to this is applying the elements of a knowledge-based acquisition strategy (that is, matching a program's requirements with its resources) and having effective agreements in place with the military departments that specify clear roles, responsibilities, and expectations for executing programs.

Ensure that, as SOCOM increases its acquisition workforce, it (1) obtains personnel with the skills and abilities needed for more complex acquisitions, (2) makes sure personnel meet DOD acquisition certification level requirements, and (3) makes the hiring process as efficient as possible.

Ensure that SOCOM improves the accuracy, timeliness, and usefulness of its acquisition management information system. To accomplish this, SOCOM should (1) establish enforcement mechanisms to make sure program managers submit updated information on a regular basis and (2) conduct quality checks to make sure the information is reliable.

In DOD's letter commenting on a draft of our report, DOD partially concurred with the first recommendation and fully concurred with the other two recommendations.
In partially concurring with the first recommendation, DOD agreed with the need to update memorandums of agreement between SOCOM and the military departments and to apply elements of a knowledge-based acquisition strategy, but only after such a strategy is defined by DOD within the 5000 series of documents. This should not result in a delay in action on DOD's part, as DOD's acquisition policy already includes the key elements of a knowledge-based acquisition approach, particularly regarding technology, design, and production. It is important that SOCOM follow this policy because we have found that programs experience cost, schedule, and performance problems when they proceed into system development and initial manufacturing with lower levels of knowledge than specified in DOD's acquisition policy. We believe that if properly implemented and enforced, a knowledge-based acquisition approach, as defined in DOD acquisition policy, can help reduce development risks and lead to better program outcomes on a more consistent basis. DOD's written comments appear in appendix III. Additionally, SOCOM provided technical comments, which we incorporated where appropriate.

We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and other interested parties. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

To assess what types of programs SOCOM has undertaken and whether they have progressed as planned, we collected and reviewed information on all programs undertaken by the command between 2001 and 2006. We collected specific information on each program pertaining to its size, use of commercial off-the-shelf and nondevelopmental items, and acquisition strategy. In addition, we collected data on planned versus actual cost, schedule, and quantities to be fielded. We analyzed this information to determine what types of systems were being acquired and the extent to which programs were meeting planned cost, schedule, and quantity objectives; a simplified illustration of this kind of tabulation appears below. We relied on GAO's Applied Research and Methods team to array and analyze the acquisition programs in our review. Further, we interviewed SOCOM's senior-level program executive officers to access and review available data on about 50 urgent deployment acquisition programs and a small number of the Advanced Concept Technology Demonstration programs transitioned by SOCOM to its forces.

To assess and determine the management and workforce challenges facing SOCOM, we (1) reviewed and analyzed the impact that unfunded near-term requirements have had on regular approved acquisition programs; (2) reviewed and analyzed the command's key acquisition program management tool—the Special Operations Acquisition and Logistics Information System—for managing its acquisition programs; and (3) interviewed key SOCOM acquisition officials from SOCOM's Special Operations Acquisition and Logistics Center and key civilian and military personnel management officials in Tampa, Florida, to assess the workforce challenges that SOCOM faces. We relied on previous GAO work as a framework for knowledge-based acquisition.
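The following minimal sketch, in Python, illustrates the kind of planned-versus-actual tabulation described above. It is illustrative only: the program names and figures are hypothetical, and the 10 percent cost and 12-month schedule thresholds are assumptions for the example, not the criteria GAO applied.

# Illustrative only: tabulating the share of an acquisition portfolio that
# progressed as planned; all data and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    category: str             # e.g., "fixed wing", "SOF warrior"
    planned_cost_m: float     # original estimate, $ millions
    actual_cost_m: float      # current estimate, $ millions
    schedule_slip_months: int

def progressed_as_planned(p: Program) -> bool:
    # Assumed rule: on track if cost growth is within 10 percent
    # and schedule slip is under 12 months.
    growth = (p.actual_cost_m - p.planned_cost_m) / p.planned_cost_m
    return growth <= 0.10 and p.schedule_slip_months < 12

programs = [
    Program("Gun modification", "fixed wing", 100.0, 192.0, 14),
    Program("Sniper rifle", "SOF warrior", 50.0, 51.0, 0),
    Program("Radio upgrade", "information systems", 25.0, 24.0, 2),
]

on_track = [p for p in programs if progressed_as_planned(p)]
share = len(on_track) / len(programs)
print(f"{len(on_track)} of {len(programs)} programs ({share:.0%}) progressed as planned")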
We performed our review from July 2006 through June 2007 in accordance with generally accepted government auditing standards.

Unlike the military departments, which have geographically dispersed acquisition organizations, SOCOM's acquisition activities are geographically consolidated. All acquisition support functions integral to SOCOM's acquisition activities—contracting, budgeting, and requirements setting—are located at SOCOM headquarters. The SOCOM commander has duties analogous to those of both the service secretaries and the service chiefs. For example, like the secretaries, he has budget, programming, research, development and acquisition, contracting, and procurement authority, and he can direct investigations and audits. Similar to the service chiefs, the commander of SOCOM is charged with organizing, training, and equipping SOF personnel, establishing requirements, conducting operational testing, and providing operational logistics. Unlike other combatant commanders, the SOCOM commander has both command and acquisition authorities—he is the only combatant commander with a "checkbook." This arrangement allows SOCOM officials to plan, resource, and acquire SOF-peculiar equipment.

SOCOM decides what weapon systems and equipment to acquire through a centralized strategic planning and resource allocation process in which requirements are assessed and prioritized and programs are selected based on competing needs and available resources. The process has many of the characteristics of an integrated portfolio management framework that GAO recently reported as lacking in DOD's departmentwide approach to weapon system investments. That is, SOCOM addresses weapon system programs collectively from an enterprise level, rather than as independent and unrelated programs. Proposed programs are assessed through a screening process that weighs the relative costs, benefits, and risks of each and selects those that help SOCOM balance near-term and future opportunities, different SOF component capability needs, and available resources against the demand for new and ongoing systems and equipment.

SOCOM has a close relationship with its customers—the SOF community—and receives input regarding capability needs directly from SOF operators and component commands on an ongoing basis. SOCOM officials with operational experience and expertise in different program areas assess and prioritize the requests from the component commands on a biannual basis. These officials rate each proposal in terms of its potential to fulfill required military operational tasks. The officials then forward their assessments to SOCOM's central decision-making body—the Board of Directors—for a final determination of what acquisition programs should be undertaken by the command and where resources should go. The Board of Directors is composed of the SOCOM commander and all SOF component commanders, as well as the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict (ASD(SO/LIC)), who is the Office of the Secretary of Defense's (OSD) principal advisor on special operations activities and heads the organization charged with interfacing with SOCOM. ASD(SO/LIC)'s position on the Board of Directors gives DOD insight into, and a voice in, what acquisition programs SOCOM undertakes. Although DOD has an oversight role and decision authority over ACAT I programs, as previously discussed, over 95 percent of SOCOM's acquisition programs are below the ACAT I level. Therefore, ASD(SO/LIC) has no direct day-to-day oversight role in the bulk of SOCOM programs.
The Board of Directors is SOCOM's primary and final approval authority for regular planned SOF-peculiar acquisition programs.

Once the need for a SOF capability is verified and approved through SOCOM's strategic planning process, it is reviewed through DOD's Joint Capabilities Integration and Development System (JCIDS) to verify that it is a SOF-unique requirement and not duplicative of a service-common system. However, according to SOCOM officials, JCIDS often fails to resolve time-sensitive SOF capability gaps that may be identified during active combat. Therefore, to support SOF acquisition priorities, SOCOM established its own version of the larger joint requirement-setting process—the SOF Capabilities Integration and Development System—which operates in conjunction with the command's Acquisition Management System and Strategic Planning Process. SOCOM employs a two-tiered SOF Capabilities Integration and Development System—standard and fast track—to support SOF priorities. The standard capabilities process parallels the JCIDS process, although it is internal to SOCOM and specifically addresses SOF-unique capability gaps. The fast-track process is used when a SOCOM component identifies an urgent and critical capability gap, derived from a combat mission need statement. This process is not intended as a means to circumvent the command's standard acquisition portfolio management process; rather, it is SOCOM's method of accelerating its response to compelling and time-sensitive SOF-peculiar needs. Under the SOF Capabilities Integration and Development System, validation and approval of a combat mission need statement mandates an offset of resources, as it constitutes a "must-pay" bill for SOCOM. Once the mission need statement is approved through the fast-track process, SOCOM officials initiate an urgent deployment acquisition to expedite the acquisition and field the required equipment. At this point, command officials reallocate resources to fund the urgent deployment acquisition. SOCOM's goal is to field equipment within 180 days of approval.

SOCOM can arrange to transfer program management and milestone decision authority (MDA) responsibilities to one of the military departments to execute a program on behalf of the command. SOCOM has delegated responsibilities to the military departments in many of the acquisition programs underway that involve some modification of military department-provided equipment or in cases where the services have greater technical and platform-specific program management expertise, such as for fixed and rotary wing aircraft or submarine programs. SOCOM's Acquisition Executive is the milestone decision authority for all SOCOM acquisition programs, unless the executive delegates that authority. However, through memorandums of agreement with the Army, Navy, and Air Force, SOCOM employs a range of program management structures. The command has the following three basic options for managing individual programs:

SOCOM can manage a program in-house by designating both a SOCOM program manager and a SOCOM MDA to execute the program.

SOCOM, through a program-specific memorandum of agreement with a military department, can agree on the appointment of a department program manager to manage the program under the direction of a SOCOM MDA.

SOCOM can transfer both program management and MDA responsibility to a military department, through a program-specific memorandum of agreement, to execute the program on behalf of SOCOM.
Applicable policies and procedures vary somewhat for each of the program management options just described. First, for SOCOM MDA and SOCOM-managed programs, SOCOM's acquisition and logistics directives and standard operating procedures apply, and, according to SOCOM, any exceptions are noted in the acquisition program's Acquisition Decision Memorandum. Second, for SOCOM MDA and military department-managed programs, responsibilities and exceptions to SOCOM procedures are intended to be defined in program-specific memorandums of agreement. Finally, for programs with a military department MDA and program manager, the military department's policies and procedures normally apply. Table 9 illustrates how the acquisition executive has delegated or retained decision authority for programs undertaken from 2001 to 2006. SOCOM is the MDA for over 60 percent of its acquisition programs. The SOCOM MDA could be the Acquisition Executive or a program executive officer, depending on the size and importance of the program. The Acquisition Executive has delegated the MDA role to the military departments for approximately 37 percent of SOCOM's acquisition programs.

For programs managed directly by SOCOM, the command has a hierarchical management structure, as shown in figure 6, which resembles the military departments in its internal acquisition organizational makeup. The program executive offices utilize program managers and system acquisition managers organized by program. System acquisition managers are charged with assisting the military department in program planning and execution and also representing SOCOM at military department-led integrated product teams, technical conferences, and program reviews. System acquisition managers are normally used when the MDA role, the program manager role, or both are assigned to a military department.

In addition to the contact named above, John Oppenheim, Assistant Director; Leon S. Gill; John Ortiz; Michele Williamson; Julia Kennon; Greg Campbell; and Marie Ahearn made key contributions to this report.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007.

Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-06-391. Washington, D.C.: March 31, 2006.

Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-05-301. Washington, D.C.: March 31, 2005.

Defense Acquisitions: Assessments of Major Weapon Programs. GAO-04-248. Washington, D.C.: March 31, 2004.

Defense Acquisitions: Assessments of Major Weapon Programs. GAO-03-476. Washington, D.C.: May 15, 2003.

Defense Acquisitions: Advanced SEAL Delivery System Program Needs Increased Oversight. GAO-03-442. Washington, D.C.: March 31, 2003.

Defense Acquisitions: Readiness of the Marine Corps' V-22 Aircraft for Full-Rate Production. GAO-01-369R. Washington, D.C.: February 20, 2001.

Navy Aviation: V-22 Cost and Capability to Meet Requirements Are Yet to Be Determined. GAO/NSIAD-98-13. Washington, D.C.: October 22, 1997.
Special Operations Command's (SOCOM) duties have greatly increased since the attacks of September 11, 2001. Today, Special Operations Forces are at work in Afghanistan and Iraq, and SOCOM has been assigned to lead U.S. efforts in the Global War on Terrorism. SOCOM's acquisition budget has also greatly increased in this period--more than doubling from $788 million in 2001 to approximately $1.91 billion in 2006. In light of SOCOM's expanded duties, Congress requested that GAO review SOCOM's management of its acquisition programs. GAO's evaluation includes an assessment of (1) the types of acquisition programs SOCOM has undertaken since 2001 and whether the programs are consistent with its mission, (2) the extent to which SOCOM's programs have progressed as planned, and (3) the challenges SOCOM faces in managing its acquisition programs. SOCOM has undertaken a diverse set of acquisition programs that are consistent with the command's mission to provide equipment that addresses the unique needs of the Special Operations Forces. SOCOM has committed to spend about $6 billion on these programs. About 88 percent of the programs are relatively small, have short acquisition cycles, and use modified commercial off-the-shelf and nondevelopmental items or modify existing service equipment and assets. SOCOM's acquisition plans--as reflected in its current 5-year plan--continue to focus on relatively small-scale, short-cycle programs with modest development efforts. Overall, SOCOM's acquisition program performance has been mixed. About 60 percent of the acquisition programs SOCOM has undertaken since 2001 have progressed as planned, staying within their original cost and schedule estimates. Included in this grouping are programs that had cost increases because of the need to buy additional quantities of equipment for ongoing combat operations. The other 40 percent of SOCOM's acquisition programs have not progressed as planned and experienced cost increases and schedule delays--modest in most cases but significant in a small number--because of a range of technical and programmatic issues. Although fewer in number, the programs that experienced problems account for about 50 percent of acquisition funding because they tend to be the larger and costlier, platform-based programs that SOCOM is acquiring and those where SOCOM depends on one of the military departments for equipment and program management support. SOCOM faces management and workforce challenges in ensuring its acquisition programs are consistently completed on time and within budget. Urgent requirements to support SOCOM's ongoing combat missions have challenged and will continue to challenge SOCOM's ability to balance near- and long-term needs against available funding resources. In addition, SOCOM has difficulty tracking progress on programs where it has delegated management authority to one of the military departments and has not consistently applied a knowledge-based acquisition approach in executing programs, particularly the larger and more complex ones. Furthermore, SOCOM has encountered challenges ensuring it has the workforce size and composition to carry out its acquisition work.
Federal programs to prepare for and respond to chemical and biological terrorist attacks operate under an umbrella of various policies and contingency plans. Federal policies on combating terrorism are laid out in a series of presidential directives and implementing guidance. These documents divide the federal response to terrorist attacks into two categories—crisis management and consequence management. Crisis management includes efforts to stop a terrorist attack, arrest terrorists, and gather evidence for criminal prosecution. Crisis management is led by the Department of Justice, through the Federal Bureau of Investigation. All federal agencies and departments, as needed, would support the Department of Justice and the Federal Bureau of Investigation on-scene commander. Consequence management includes efforts to provide medical treatment and emergency services, evacuate people from dangerous areas, and restore government services. Consequence management activities of the federal government are led by the Federal Emergency Management Agency in support of state and local authorities. Unlike with crisis management, the federal government does not have primary responsibility for consequence management; state and local authorities do. Crisis and consequence management activities may overlap and run concurrently during the emergency response, depending on the nature of the incident. In a chemical or biological terrorist incident, the federal government would operate under one or more contingency plans. The U.S. Government Interagency Domestic Terrorism Concept of Operations Plan establishes conceptual guidelines for assessing and monitoring a developing threat, notifying appropriate agencies concerning the nature of the threat, and deploying necessary advisory and technical resources to assist the lead federal agency in facilitating interdepartmental coordination of crisis and consequence management activities. In the event that the President declares a national emergency, the Federal Emergency Management Agency also would coordinate the federal response using a generic disaster contingency plan called the Federal Response Plan. This plan—which has an annex specific to terrorism—outlines the roles of federal agencies in consequence management during terrorist attacks. More specifically, the plan outlines the planning assumptions, policies, concept of operation, organizational structures, and specific assignment of responsibilities to lead departments and agencies in providing federal assistance. The plan categorizes the types of assistance into specific "emergency support functions." Examples of emergency support functions include mass care and health and medical services. In addition, several individual agencies have their own contingency plans or guidance specific to their activities. Our September 20, 2001, report found significant coordination and fragmentation problems across the various federal agencies that combat terrorism. In May 1998, the President established a National Coordinator within the National Security Council to better lead and coordinate these federal programs; however, the position's functions were never detailed in either an executive order or legislation. Many of the overall leadership and coordination functions that we had identified as critical were not given to the National Coordinator. In fact, several agencies performed interagency functions that we believed would have been performed more appropriately above the level of individual agencies.
The interagency roles of these various agencies were not always clear and sometimes overlapped, which led to a fragmented approach. For example, the Department of Justice, the National Security Council, the Federal Bureau of Investigation, and the Federal Emergency Management Agency all had been developing or planning to develop potentially duplicative national strategies to combat terrorism. In a more recent report and testimony, we provide additional examples of coordination difficulties specific to biological terrorism. To improve overall leadership and coordination of federal efforts to combat terrorism, the President announced the creation of an Office of Homeland Security on September 20, 2001, and specified its functions in Executive Order 13228 on October 8, 2001. These actions represent potentially significant steps toward improved coordination of federal activities and are generally consistent with our recent recommendations. Some questions that remain to be addressed include how this new office will be structured, what authority the Director will have, and how this effort can be institutionalized and sustained over time. There appear to be additional uncertainties about the terrorist threat in general since the September 11 attacks. Before those attacks, the Federal Bureau of Investigation had identified the largest domestic threat to be the "lone wolf" terrorist—an individual who operated alone. U.S. intelligence agencies had reported an increased possibility that terrorists would use chemical or biological weapons in the next decade. However, terrorists would have to overcome significant technical and operational challenges to successfully produce and release chemical or biological agents of sufficient quality and quantity to kill or injure large numbers of people without substantial assistance from a foreign government sponsor. Specialized knowledge is required in the manufacturing process and in improvising an effective delivery device for most chemical and nearly all biological agents that could be used in terrorist attacks. Moreover, some of the required components of chemical agents and highly infective strains of biological agents are difficult to obtain. Finally, terrorists may have to overcome other obstacles to successfully launch an attack that would result in mass casualties, such as unfavorable meteorological conditions and personal safety risks. On September 11, terrorists redefined the term "weapon of mass destruction." Up to that point, that term generally referred to chemical, biological, radiological, or nuclear agents or weapons. As clearly shown on September 11, a terrorist attack would not have to fit that definition to result in mass casualties, destruction of critical infrastructures, economic losses, and disruption of daily life nationwide. The attacks increased the uncertainties regarding the threat, although terrorists would still face the technical challenges described above in conducting chemical or biological attacks. The uncertainty has increased because the attacks on the World Trade Center and the Pentagon were conducted by a large group of conspirators rather than one individual. In addition, the terrorists were executing a long-planned, coordinated attack, showing a level of sophistication that may not have been anticipated by the Federal Bureau of Investigation—the agency responsible for monitoring national security threats within the United States.
Also, the terrorists were willing to commit suicide in the attacks, showing no concern for their own personal safety, which had been considered one of the barriers to using chemical or biological agents. And most recently, the threat of anthrax has gone from a series of hoaxes to actual cases under investigation by the Federal Bureau of Investigation. Given the uncertainty about the threat, we continue to believe that a risk management approach is necessary to enhance domestic preparedness against terrorist threats. Risk management is a systematic and analytical process to consider the likelihood that a threat will endanger an asset, individual, or function and to identify actions to reduce the risk and mitigate the consequences of an attack. While the risk cannot be eliminated entirely, enhancing protection from known or potential threats can reduce it. This approach includes three key elements: a threat assessment, a vulnerability assessment, and a criticality assessment (assessing the importance or significance of a target). For chemical and biological terrorism, the threat assessment would determine which agents are of most concern. Without the benefits that a risk management approach provides, many agencies have been relying on worst case chemical, biological, radiological, or nuclear scenarios to generate countermeasures or establish their programs. By using worst case scenarios, the federal government is focusing on vulnerabilities (which are unlimited) rather than credible threats (which are limited). As stated in our recent testimony, a risk management approach could help the United States prepare for the threats it faces and allow us to focus finite resources on areas of greatest need. A terrorist attack using chemical or biological weapons presents an array of complex issues to state and local first responders. These responders would include police, firefighters, emergency medical services, and hazardous material technicians. They must identify the agent used so as to rapidly decontaminate victims and apply appropriate medical treatments. If the incident overwhelms state and local response capabilities, state and local authorities may call on federal agencies to provide assistance. To provide such assistance, the federal government has a variety of programs to prepare for and respond to chemical and biological terrorism, including response teams, support laboratories, training and equipment programs, and research efforts, as follows. Federal agencies have special teams that can respond to terrorist incidents involving chemical or biological agents or weapons. These teams perform a wide variety of functions, such as hands-on response; providing technical advice to state, local, or federal authorities; or coordinating the response efforts of other federal teams. Figure 1 shows selected federal teams that could respond to a chemical or biological terrorist incident. Federal agencies also have laboratories that may support response teams by analyzing and testing samples of chemical and biological agents. In some incidents, these laboratories may perform functions that enable federal response teams to perform their role. For example, when a diagnosis is confirmed at a laboratory, response teams can begin to treat victims appropriately. Federal agencies also have programs to train and equip state and local authorities to respond to chemical and biological terrorism. The programs have improved domestic preparedness by training and equipping over 273,000 first responders.
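Returning to the risk management approach discussed above, the following minimal sketch shows how its three elements could in principle be combined to rank agents of concern. It is purely illustrative and not a GAO or agency methodology; the agent names, the 1-to-5 ratings, and the multiplicative scoring rule are all assumptions made for this example.

```python
# Illustrative only: a toy ranking that combines the three risk management
# elements described above (threat, vulnerability, criticality).
# Agent names and 1-to-5 ratings are hypothetical, not real assessments.

# agent: (threat, vulnerability, criticality)
notional_agents = {
    "chemical agent A": (4, 3, 4),
    "biological agent B": (2, 5, 5),
    "chemical agent C": (1, 2, 3),
}

def risk_score(threat: int, vulnerability: int, criticality: int) -> int:
    """Combine the three assessments into a single relative score."""
    return threat * vulnerability * criticality

# Rank agents by relative risk so finite resources can be focused on areas
# of greatest need rather than on unlimited worst case vulnerabilities.
for agent, ratings in sorted(
    notional_agents.items(), key=lambda item: risk_score(*item[1]), reverse=True
):
    print(f"{agent}: relative risk {risk_score(*ratings)}")
```

The point of such a model is not the particular numbers but the discipline it imposes: a credible threat rating, rather than a worst case vulnerability, drives where resources go.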
The programs also have included exercises to allow first responders to interact with one another and with federal responders. Finally, federal agencies have a number of research and development projects underway to combat terrorism. Examples of recently developed and fielded technologies include products to detect and identify chemical and biological weapons. Additional research and/or development projects include chemical monitoring devices and new or improved vaccines, antibiotics, and antivirals. Terrorists potentially could use a variety of chemical agents. These chemical agents could be dispersed as a gas, vapor, liquid, or aerosol. A chemical agent could be disseminated by explosive or mechanical delivery. Some chemicals disperse rapidly, and others remain toxic for days or weeks and require decontamination and cleanup. Rapid exposure to a highly concentrated agent would increase the number of casualties. Federal, state, and local officials generally agree that a chemical terrorist incident would look like a major hazardous material emergency. According to the International Association of Fire Chiefs, over 600 local and state hazardous material teams will be the first to respond to a chemical incident. If local responders are unable to manage the situation or are overwhelmed, the incident commander has access to state and federal assets. A variety of federal teams could be deployed to provide assistance. Terrorists also could potentially use a variety of biological agents. Biological agents must be disseminated by some means that infects enough individuals to initiate a disease epidemic. According to a wide range of experts in science, health, intelligence, and biological warfare, as well as a technical report, the most effective way to disseminate a biological agent is by aerosol. This method allows the simultaneous respiratory infection of a large number of people. A few biological agents (e.g., plague and smallpox) are communicable and can be spread beyond those directly affected by the weapon or dissemination device. The release of a biological agent or weapon may not be known for several days, until victims present themselves to medical personnel in doctors' offices, clinics, and emergency rooms, where the symptoms might easily be confused with influenza or other less virulent illnesses. Accordingly, the critical detection of the biological agent begins with the public health infrastructure, which detects outbreaks of illness, identifies the sources and modes of transmission, and performs rapid laboratory identification of the agent. Once diagnosis of a biological agent is confirmed, treating victims may require the use of federal consequence management teams and items from the National Pharmaceutical Stockpile. Again, a variety of federal teams could be deployed to provide assistance. We have identified a number of problems that require solutions in order to improve preparedness for chemical and biological terrorism. Some of these are included in our recent reports and testimony. For example, our report on the West Nile Virus outbreak identified specific weaknesses in the public health system that need to be addressed to improve preparedness for biological terrorism. Our recent report on biological terrorism examined evaluations of the effectiveness of federal programs to prepare state and local authorities. For this statement, we also conducted an analysis of federal exercise evaluations to identify problems associated with chemical and biological terrorism that needed to be solved.
In doing this, we examined 50 evaluations representing 40 separate exercises with chemical or biological scenarios. Based upon our review, the problems and their solutions fell into two categories: (1) generic problems and solutions that are applicable to any type of terrorist incident, major accident, or natural disaster, and (2) problems and solutions that are applicable specifically to chemical and biological terrorist events. Specific examples of each category follow. The first category of problems and their solutions is generally applicable to any type of terrorist incident. These would apply not only to chemical and biological terrorism but also to all hazards, including emergencies unrelated to terrorism, such as major accidents or natural disasters.

Command and control. The roles, responsibilities, and the legal authority to plan and carry out a response to a weapon of mass destruction terrorist incident are not always clear, which could result in a delayed and inadequate response.

Planning and operations. State and local emergency operations plans do not always conform to federal plans. The operational procedures for requesting federal assistance are not always compatible with state and local procedures.

Resource management and logistics. State and local governments can be overwhelmed with the resource management and logistical requirements of managing a large incident, particularly after the arrival of additional state and federal assets. For example, state and local officials could have difficulty providing support to numerous military units that might be needed.

Communication. Interoperability difficulties exist at the interagency and intergovernmental levels. Also, the public health community lacks robust communication systems, protocols, equipment, and facilities.

Exercises. Many exercises focus primarily on crisis management, which often ends in a successful tactical resolution of the incident, and do not include more likely scenarios in which terrorist attacks are successful and a consequence management exercise component is required.

Mass casualties. Overall planning and integration among agencies are needed for mass casualty management, including for conventional terrorist incidents. Also, medical surge capacity for any type of weapon of mass destruction event may be limited. Disposition of bodies would also be an issue.

The second category of problems and their solutions is applicable to chemical or biological incidents. These would not be relevant in a conventional, radiological, or nuclear terrorist incident; however, they would be relevant in other chemical or biological events not related to terrorism, such as an accidental release of chemicals or a natural outbreak of a disease. They vary in their level of applicability, with some being applicable only to specific chemical or biological agents.

Public health surveillance. Basic capacity for public health surveillance is lacking. Improved public health-coordinated surveillance for biological terrorism and emerging infectious diseases is an urgent preparedness requirement at the local level.

Detection and risk assessment. First responders and specialized response teams can be slow to rapidly and accurately detect, recognize, and identify chemical or biological agents and to assess the associated health risks.
Also, following the release of a chemical or biological agent, emergency hazardous material teams do not always conduct a downwind analysis of the toxic cloud, which could delay a decision to evacuate potentially affected populations.

Protective equipment and training. First responders often lack the special personal protective equipment (level-A protective clothing and masks) needed to safeguard them from chemical or biological agents and could become contaminated themselves. Training curricula deal with the technical level of response, such as treatment protocols, but do not describe operational guidelines and strategies for responding to large-scale public health emergencies. Physicians sometimes lack adequate training to recognize chemical and biological agents.

Chemical and biological-specific planning. Emergency operations plans and "all-hazard" plans often do not adequately address the response to a large-scale chemical or biological terrorism event.

Hospital notification and decontamination. Delays could occur in the notification of local hospitals that a biological incident has occurred. By the time the hospitals are notified, they could become contaminated by self-referred patients, have to close, and be unable to treat other victims. First responders could become victims themselves and contaminate emergency rooms.

Distribution of pharmaceuticals. State and local health officials have found it difficult to break down and distribute the tons of medical supplies contained in push-packages from the National Pharmaceutical Stockpile.

Vaccines and pharmaceuticals. Some pharmaceuticals, such as antibiotics, are generic and can be used to treat several different biological agents, whereas others, such as vaccines, are agent-specific. An example is the smallpox vaccine, which would be useful only if terrorists used smallpox in an attack.

Laboratories. Even a small outbreak of an emerging disease would strain laboratory resources. There is a need for broadening laboratory capabilities, ensuring adequate staffing and expertise, and improving the ability to deal with surges in testing needs.

Medical and veterinary coordination. Problems exist in communication between public health officials and veterinary officials. Local and state veterinary disaster response plans may not adequately address the impact of a biological incident on the animal population, which could have dramatic health, economic, and public relations implications.

Quarantine. Quarantine would be resource-intensive and would require a well-planned strategy to implement and sustain. Questions that have to be addressed include implementation authority, enforcement, logistics, financial support, and the psychological ramifications of quarantine.

The Congress may want to consider several factors before investing resources in the rapidly growing budget for combating terrorism. Even before September 11, funding to combat terrorism had increased 78 percent, from the fiscal year 1998 level of about $7.2 billion to the proposed fiscal year 2002 budget of about $12.8 billion. After September 11, the Congress approved the President's request for $20 billion in emergency assistance and provided an additional $20 billion to supplement existing contingency funds. Thus, terrorism-related funding in fiscal year 2002 may exceed $50 billion. Further, a number of additional funding proposals have been introduced in the Congress that could further raise that amount.
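As a quick check of the arithmetic behind these figures (a sketch in billions of dollars, assuming the two $20 billion emergency amounts are simply added to the proposed $12.8 billion budget):

```latex
\[
\frac{12.8 - 7.2}{7.2} \approx 0.78 \qquad \text{(the 78 percent increase since fiscal year 1998)}
\]
\[
12.8 + 20 + 20 = 52.8 > 50 \qquad \text{(why fiscal year 2002 funding may exceed \$50 billion)}
\]
```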
The challenge facing the Congress and the nation is to invest new resources where they will make the most difference in protecting people and responding to terrorist attacks, including those involving chemical and biological agents or weapons. The terrorist attacks of September 11 have profoundly changed the management agendas of the Congress, the White House, federal agencies, and state and local governments. However, as we respond to the urgent priorities and the enduring requirements of combating terrorism, our nation still must address the short-term and long-term fiscal challenges that were present before September 11 and that remain today. It is important to remember that the long-term pressures on the budget from competing programs have not lessened. In fact, long-term pressures have increased due to the slowing economy and the spending levels expected for fiscal year 2002. As a result, the ultimate task of addressing today's urgent needs without unduly exacerbating our long-range fiscal challenges has become more difficult. As discussed above, the nature of the threat appears to have become more uncertain since the September 11 attacks. Despite this uncertainty, preparing for all possible contingencies is not practical because vulnerabilities are unlimited, so a risk management approach is needed to help focus resource investments. Efforts to better prepare for chemical and biological attacks include solutions that have broad applicability across a variety of contingencies and solutions that are applicable to only a specific type of attack. For example, efforts to improve public health surveillance would be useful in any disease outbreak, whereas efforts to provide vaccines for smallpox would be useful only if terrorists used smallpox in a biological attack. Given the uncertainty of the chemical and biological terrorist threat and continued fiscal concerns, the Congress may want to initially invest resources in efforts with broad applicability rather than those that are only applicable under a specific type of chemical or biological attack. As threat information becomes more certain, it may be more appropriate to invest in efforts only applicable to specific chemical or biological agents. This approach would focus finite resources on areas of greatest need using a risk management approach. As stated initially, this testimony is based largely upon recent GAO reports. In addition, we sought to determine what types of problems might arise in responding to chemical and biological terrorist attacks. To do so, we analyzed after-action reports and other evaluations from federal exercises that simulated chemical and biological terrorist attacks. The scope of this analysis was governmentwide. We first identified and catalogued after-action reports and evaluations from federal exercises over the last 6 fiscal years (fiscal years 1996 to 2001). The analysis was limited to the 50 after-action reports (representing 40 different exercises) that had a chemical and/or biological terrorism component. The analysis did not include exercises involving radiological and/or nuclear agents, and it does not represent all federal after-action reports for combating terrorism exercises during that period. We then identified specific problems and issues associated with chemical and biological terrorism exercises. We compared those specific problems and solutions to determine which ones were specific to chemical and to biological incidents.
Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the Committee may have. For further information about this testimony, please contact me at (202) 512-4300. For information specifically on biological terrorism, please contact Janet Heinrich at (202) 512-7250. Stephen L. Caldwell, Mark A. Pross, James C. Lawson, Harry L. Purdy, Jason G. Venner, and M. Jane Hunt made key contributions to this statement.

Homeland Security: Key Elements of a Risk Management Approach (GAO-02-150T, Oct. 12, 2001).
Bioterrorism: Review of Public Health Preparedness Programs (GAO-02-149T, Oct. 10, 2001).
Bioterrorism: Public Health and Medical Preparedness (GAO-02-141T, Oct. 9, 2001).
Bioterrorism: Coordination and Preparedness (GAO-02-129T, Oct. 5, 2001).
Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, Sept. 28, 2001).
Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001).
Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Terrorism Preparedness (GAO-01-555T, May 9, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-666T, May 1, 2001).
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001).
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, Mar. 27, 2001).
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001).
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000).
West Nile Virus Outbreak: Lessons for Public Health Preparedness (GAO/HEHS-00-180, Sept. 11, 2000).
Since the attacks against the World Trade Center and the Pentagon, the terrorist threat has risen to the top of the national agenda. Preparing for all possible contingencies is impractical, so a risk management approach should be used. This would include a threat assessment to determine which chemical or biological agents are of greatest concern. The federal government has various programs to prepare for and respond to chemical and biological terrorism, including response teams, support laboratories, training and equipment programs, and research efforts. Evaluations of chemical and biological preparedness have identified several problems and their solutions. Congress faces competing demands for spending as it seeks to invest resources to better prepare our nation for chemical and biological terrorism. Given the uncertainty of the chemical and biological threat, Congress may want to initially invest resources in efforts with broad applicability rather than in those that are applicable to a specific type of chemical or biological attack.
Multiple agencies and organizations within DHS and DOD have key roles and responsibilities for different steps of the personnel security clearance revocation process. In 2008, Executive Order 13467 designated the DNI as the Security Executive Agent. As such, the DNI is responsible for developing policies and procedures to help ensure the effective, efficient, and timely completion of background investigations and adjudications relating to determinations of eligibility for access to classified information and eligibility to hold a sensitive position. Within DHS, the Office of the Chief Security Officer develops, implements, and oversees the department's security policies, programs, and standards, among other things. The DHS Chief of Personnel Security Division, under the direction of the Chief Security Officer, is responsible for issuing department-wide policy for the Personnel Suitability and Security Program, maintaining a departmental database for tracking personnel security cases, and determining employees' eligibility for access to classified information. DHS component Chief Security Officers implement personnel security and suitability programs within their respective components. Under Executive Order 12968, Access to Classified Information (Aug. 2, 1995, as amended), DHS and DOD can revoke an employee's eligibility for access to classified information based on 13 adjudicative guidelines. While the personnel security clearance revocation process varies by agency and type of employee, the general process for DHS and DOD military and federal civilian personnel, and for government contractors, is summarized in figure 2. According to DHS officials, the revocation process will end if the employee chooses to resign before a decision has been made; if a DOD military or civilian employee has initiated an appeal of a revocation decision, the appeal will be decided even if the employee has separated. The process begins with adverse information that can come from a variety of sources, including but not limited to individual self-reporting, federal or contract investigators who are conducting an investigation, Inspector General channels, hotlines, civilian law enforcement agencies, and reporting by persons such as security officers. According to DHS and DOD officials, the steps and time frames associated with investigating and verifying the credibility of the adverse information can vary considerably according to the nature and source of the adverse information. Some of these steps may include notifying the employee that adverse information was reported against him or her, allowing the employee an opportunity to provide a response, obtaining information from other government agencies, and conducting an updated background investigation to obtain court records, criminal records, and financial checks. The February 2014 OMB report found that clear and consistent requirements do not exist across government for employees or contractors to report information that could affect their continued fitness, suitability, or eligibility for federal employment, and that there was not consistent guidance in place to direct contractors or contract managers to report noteworthy or derogatory information regarding employees. The report recommended accelerating the implementation of a continuous evaluation program that would notify security officials of noteworthy events or incidents in near-real time.
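To keep the moving pieces of this general process straight, the sketch below encodes the stages and transitions just described as a simple state model. It is a deliberately simplified illustration of the process summarized above and in figure 2, not an official DHS or DOD workflow; the stage names and transition rules are assumptions made for the example.

```python
# A simplified, illustrative model of the general revocation process
# described above -- not an official DHS or DOD workflow definition.
from enum import Enum, auto

class Stage(Enum):
    ADVERSE_INFORMATION_REPORTED = auto()
    PROPOSAL_TO_REVOKE_ISSUED = auto()
    EMPLOYEE_RESPONSE = auto()
    REVOCATION_DECISION = auto()
    APPEAL = auto()
    CASE_CLOSED = auto()

# Allowed forward transitions in this simplified model.
TRANSITIONS = {
    Stage.ADVERSE_INFORMATION_REPORTED: {Stage.PROPOSAL_TO_REVOKE_ISSUED, Stage.CASE_CLOSED},
    Stage.PROPOSAL_TO_REVOKE_ISSUED: {Stage.EMPLOYEE_RESPONSE, Stage.REVOCATION_DECISION},
    Stage.EMPLOYEE_RESPONSE: {Stage.REVOCATION_DECISION},
    Stage.REVOCATION_DECISION: {Stage.APPEAL, Stage.CASE_CLOSED},
    Stage.APPEAL: {Stage.CASE_CLOSED},
}

PRE_DECISION_STAGES = {
    Stage.ADVERSE_INFORMATION_REPORTED,
    Stage.PROPOSAL_TO_REVOKE_ISSUED,
    Stage.EMPLOYEE_RESPONSE,
}

def advance(current: Stage, nxt: Stage, resigned: bool = False) -> Stage:
    """Move a case one step forward.

    Per DHS officials, the process ends if the employee resigns before a
    decision is made; for DOD personnel, an appeal already initiated is
    decided even if the employee separates (so APPEAL does not short-circuit).
    """
    if resigned and current in PRE_DECISION_STAGES:
        return Stage.CASE_CLOSED
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current.name} to {nxt.name}")
    return nxt

# Example: in this model, an appeal begun after a revocation decision still
# proceeds even though the employee has separated.
stage = Stage.REVOCATION_DECISION
stage = advance(stage, Stage.APPEAL, resigned=True)
print(stage.name)  # APPEAL
```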
If incident reporting increases as a result of the OMB report's recommendations, it raises the potential that such incidents could lead to an increase in the number of revocation cases in the future. As part of an ongoing review on the quality of the personnel security background investigation process, we are examining the implementation status of the recommendations in this OMB report. DHS's and DOD's data systems track varying levels of detail related to personnel security clearance revocations. DHS's and DOD's data systems could provide data on the number of and reasons for revocations, but they could not provide some data, such as the number of individuals who received a proposal to revoke their eligibility for access to classified information, which means that the total number of employees affected by the revocation process is unknown. DHS data show that about 125,000 DHS civilian and military employees were eligible to access classified information as of March 2014, and that DHS revoked access to classified information for 113 employees, or less than 1 percent, in fiscal year 2013. An official from the DHS Office of General Counsel explained that many employees resign before the final determination is made to revoke their security clearance. Importantly, the total population affected by the revocation process is unknown because the number of individuals who received a proposal to revoke their eligibility for access to classified information is unknown, as discussed below. Table 1 shows the number of DHS employees eligible to access classified information as of March 2014, and the number of personnel security clearance revocations for each DHS component in fiscal year 2013, with the U.S. Coast Guard having the largest number of revocations. Coast Guard officials stated that the increase in the number of revocations for Coast Guard military personnel in fiscal year 2013 could be explained in part by the fact that it was the first year the Coast Guard enforced the use of position sensitivity codes. They said that, as a result, some administratively withdrawn clearances were counted as revoked, which artificially inflated the revocation number. Table 2 provides additional information on the number of personnel security clearance revocations for each DHS component in fiscal years 2011 through 2013. DHS data show that personal conduct, financial considerations, and criminal conduct were the most common reasons personnel security clearances were revoked in fiscal year 2013. Figure 3 provides details about the issues underlying personnel security clearance revocations for each DHS component in fiscal year 2013. DHS employees whose access to classified information was revoked can first appeal the adverse decision to a second-level deciding authority and then make a final appeal to a three-person Security Appeals Board. DHS data show that, in fiscal year 2013, 24 employees appealed a revocation decision to the DHS Security Appeals Board. Of those 24 employees, 1 had his or her security clearance reinstated. DOD data show that DOD revoked eligibility for access to classified information for more than 16,000 military and civilian employees from fiscal years 2009 through 2013, and for almost 2,500 contractors government-wide during this same period. Because of potential inaccuracies in DOD eligibility data, which are discussed below, we were unable to determine the percentage of DOD clearance holders whose clearances were revoked.
However, as we found with DHS, the total population affected by the revocation process is unknown because the number of individuals who received a proposal to revoke their eligibility for access to classified information is unknown, as discussed in the next subsection in this report. Table 3 shows the number of personnel security clearance revocations in fiscal years 2009 through 2013 for each DOD component, with Army military personnel having the largest number of revocations, and for contractors government-wide working in the industrial security program. The most common reasons for revoking a personnel security clearance for DOD civilian and military personnel in fiscal year 2013 were criminal conduct, drug involvement, and personal conduct. The most common reasons for revocation of security clearances for contractor personnel in fiscal year 2013 were financial considerations, personal conduct, and criminal conduct. Figure 4 provides details about the issues underlying personnel security clearance revocations for each DOD component and for contractors in fiscal year 2013. Although DHS’s and DOD’s data systems could provide data on the number of and reasons for revocations, neither department is currently required to track or report security clearance revocations data or any related metrics outside of the DHS and DOD elements of the intelligence community. As a result, neither system could provide data on how many individuals separated before a revocation decision was made, appeals, and time to complete a revocation case. Notably, neither the DHS nor the DOD system was able to provide data about the total number of individuals who received a proposal to revoke their security clearance, which would likely exceed the total number of revocations. Therefore, we are unable to comment on the total number of employees who might be affected by the revocation process. In order for organizations to measure performance, it is important that they have sufficiently complete, accurate, and consistent data to document performance and support decision making, while balancing the cost and effort involved in gathering and analyzing data. DHS’s system for managing and standardizing personnel security data, the Integrated Security Management System (ISMS), has not typically been used to track additional information about security clearance revocations, such as (1) the number of employees who received a proposal to revoke their clearance, (2) the number of employees who separated from the department before a revocation decision was made, (3) the number of employees that filed an initial appeal of a revocation decision, and (4) the length of time to complete a revocation case. First, DHS officials could not provide us with data on the number of individuals who had received a proposal to revoke their clearance. They said that this information could be recorded in ISMS, but that this capability may not be used by all of the components. Second, DHS officials said that ISMS does not track cases where an individual separated from the department before a decision was made regarding a proposal to revoke a personnel security clearance. For example, DHS officials said that if an employee was issued a proposal to revoke his or her clearance and he or she resigned and never responded to the proposal, then the security clearance was never revoked and the case would not be counted as a revocation. 
Once an initial decision is made to revoke a clearance, the decision is entered into ISMS, and that decision will become final even if the employee does not respond, so those cases would be counted. Third, DHS data on the number of employees who filed an initial appeal of a revocation decision were not available. Officials from the Office of the Chief Security Officer told us that ISMS has a module that could provide this information, but because use of this module is not required, only a few DHS components use it. Finally, while officials at DHS components stated that the entire revocation process can take over a year to complete, DHS data on the average amount of time it takes to complete a revocation case were not available. Officials from the Office of the Chief Security Officer said that while ISMS can identify this information in individual records, it cannot track this type of data as a whole across the DHS components, because each appeal level would be saved as a different module entry. They said they try to complete a revocation case as quickly as possible. However, in some cases, employees request extensions of time in order to obtain representation or to obtain documents to refute or explain the revocation decision, which lengthens the process time. Until DHS considers whether tracking additional revocation and appeals information would be beneficial, and modifies its system to provide such information as is deemed beneficial, the department will continue to lack visibility over certain aspects of the security clearance revocation and appeal process, which may hinder its ability to effectively oversee these processes. Similarly, DOD's Joint Personnel Adjudication System (JPAS), which is designated as DOD's system of record for personnel security management to record and document personnel security actions, captures varying levels of detail related to security clearance revocations. We found certain JPAS data fields partially completed or incomplete, such as fields showing whether an employee received a proposal to revoke his or her clearance, whether the employee chose to appeal the revocation decision in writing or in person, the time taken at different stages of the employee's revocation appeal, and the number of employees who separated from the department before a revocation decision was made. For example, although more than 16,000 military and federal civilian employees had their personnel security clearances revoked from fiscal years 2009 through 2013, JPAS data reflected that fewer than 3,000 individuals had received a statement of reasons, which is DOD's initial proposal to revoke a personnel security clearance, because the JPAS field to record this information had not been filled in. The JPAS system of record notice, dated May 3, 2011, states that the categories of records in JPAS include records documenting the personnel security adjudicative and management process. However, officials from the Defense Manpower Data Center (DMDC), DOD's JPAS administrator, and the DOD CAF stated that DOD users instead generally used component-specific case-management systems to keep track of adjudication information. DMDC officials explained that only the final eligibility determination, and not all the other adjudication data, from the different case-management systems was uploaded to JPAS.
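To illustrate the kind of case-level data the departments reportedly could not produce, the sketch below defines a minimal tracking record covering those elements: proposals to revoke, separations before a decision, appeals filed, and elapsed time. This is a hypothetical illustration only, not the schema of ISMS, JPAS, or any component case-management system; every field name here is an assumption made for the example.

```python
# Hypothetical illustration only -- not the actual ISMS or JPAS schema.
# Each field corresponds to a data element described above as untracked
# or inconsistently recorded.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RevocationCase:
    proposal_issued: date                     # proposal to revoke / statement of reasons
    decision_date: Optional[date] = None      # initial revoke/sustain decision, if reached
    separated_before_decision: bool = False   # employee resigned or separated first
    initial_appeal_filed: bool = False        # appeal to a second-level authority or PSAB
    final_appeal_filed: bool = False          # final appeal to a Security Appeals Board
    case_closed: Optional[date] = None        # end of the revocation/appeal process

    def days_to_complete(self) -> Optional[int]:
        """Elapsed time for the whole case, one of the metrics the
        departments could not report department-wide."""
        if self.case_closed is None:
            return None
        return (self.case_closed - self.proposal_issued).days

# With records like these, the totals discussed above (proposals issued,
# separations before a decision, appeals filed, average processing time)
# would be simple aggregations.
cases = [
    RevocationCase(date(2013, 1, 10), decision_date=date(2013, 4, 2),
                   initial_appeal_filed=True, case_closed=date(2014, 2, 20)),
    RevocationCase(date(2013, 3, 5), separated_before_decision=True),
]
print("proposals issued:", len(cases))
print("separated before decision:",
      sum(c.separated_before_decision for c in cases))
print("completed case durations (days):",
      [c.days_to_complete() for c in cases if c.days_to_complete() is not None])
```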
Officials from the Office of the Under Secretary of Defense for Intelligence, which is responsible for overseeing DOD's personnel security program, stated that their oversight efforts have been hindered by the lack of available data in JPAS, and that they do not have access to the component-specific case-management systems. DMDC officials stated that JPAS and the different case-management systems are to be replaced, by 2016 and the end of fiscal year 2014, respectively. ODNI officials stated that it would be important for DOD to improve the data in JPAS before the new systems are implemented. DOD is already aware that data in JPAS are not being updated as frequently as needed. For example, the November 2013 DOD report in response to the Navy Yard shooting found that DOD does not have policies addressing roles, responsibilities, and standards for security managers to ensure the upkeep of data in JPAS. The report recommended that the department establish, reinforce, and enforce roles and responsibilities for updates to JPAS. Similarly, in April 2014, the DOD Inspector General issued a report assessing the personnel security clearance processes for contractors in four defense intelligence agencies. This report found a lack of effective recordkeeping that occurred because the appropriate investigative and personnel security databases, including JPAS, were not being reliably populated with investigative and security information. The report recommended that the Under Secretary of Defense for Intelligence direct the defense intelligence agencies to review the procedures used to ensure that JPAS and other systems are being properly populated. The report also found that DOD did not have any overarching policy documents governing JPAS operation and recommended that DOD develop and issue an overarching policy for JPAS. Until DOD takes steps to ensure that information is recorded and updated in its systems, the department will continue to lack visibility over the security clearance revocation and appeal process, which may hinder its ability to effectively oversee these processes. Inconsistent implementation of the requirements in the governing executive orders by DHS, DOD, and some of their components, and limited oversight of the revocation process, have resulted in employees in some agency components and workforces experiencing different protections and processes than employees in others. DHS and DOD have implemented the requirements in Executive Orders 12968 and 10865 in different ways for different groups of personnel, but these differences are required or permitted by the executive orders. However, some components' implementation of the clearance revocation process could potentially be inconsistent with the executive orders in two areas: providing an opportunity for the employee to receive certain information upon which a revocation appeal determination is based, and communicating the right to counsel. Although DHS and DOD have performed some oversight over the revocation process at the component level, they have not evaluated the quality of the process or developed performance measures to measure quality department-wide.
Finally, while ODNI has exercised oversight of security clearance revocations by reviewing policies and procedures within some agencies, ODNI has not established any metrics to measure the quality of the process government-wide and has not reviewed revocation processes across the federal government to determine the extent to which policies and procedures should be uniform. DHS and DOD have implemented some requirements in the governing executive orders in different ways for different groups of personnel, but these differences are required or permitted by the executive orders. The areas of inconsistency include implementation of the personal appearance requirement, cross-examination of witnesses, and administration of the appeal boards within DOD. The right to a personal appearance during the personnel security clearance revocation process has been implemented differently across the two departments in a manner that provides different protections for contractors than for military and civilian personnel in two areas: the timing of the personal appearance and the information provided to the employee about the rationale supporting the revocation decision and the effect of the personal appearance. Executive Order 12968 provides that employees shall be provided an opportunity to appear personally at some point in the process before an adjudicative or other authority; it does not specify when during the process this personal appearance should occur. Executive Order 10865 provides that a contractor shall be provided an opportunity to appear personally after he or she has provided a written reply to the proposal to revoke eligibility to access classified information. Defense Office of Hearings and Appeals officials explained that the personal appearance is a significant opportunity to refute, explain, extenuate, or mitigate critical facts, and stated that the later timing of this significant procedural protection for military and civilian personnel can adversely affect the individual's continued employment while the appeal process is completed. The timing of the personal appearance for contractors is earlier in the revocation process than for DHS employees and DOD military and civilian employees. Contractors who receive a proposal to revoke their clearance may choose to respond to the proposal by requesting a personal appearance before an administrative judge. The administrative judge, in turn, issues a written decision to revoke or sustain the clearance after the employee has had his or her hearing. The contractor can appeal this decision to an appeal board. Thus, contractors have their personal appearance before the revocation decision is made. In contrast, military and civilian personnel within DHS and DOD who receive a notice that their clearance may be revoked can only submit written documentation prior to a revocation decision. Adjudicators issue a written decision to revoke or sustain the clearance before any personal appearance by, and without any in-person discussion with, the employee. The employee can appeal this written decision and request a personal appearance during the appeal process. Furthermore, DHS military and civilian employees, and contractor employees government-wide, have a better opportunity than DOD military and civilian employees to understand the rationale for the revocation decision and the effect their personal appearance may have had on it.
DHS military and civilian employees receive a written decision letter to revoke or sustain the clearance from the individual who presided over the personal appearance. Similarly, contractors government-wide are also provided a copy of the administrative judge's written decision. However, for DOD military and civilian employees, the administrative judge who presided over the personal appearance during the appeal makes a written recommendation rather than a decision. This recommendation is sent directly to one of DOD's Personnel Security Appeals Boards (PSAB), based on the agency to which the employee is assigned, and the recommendation generally is not shared with the DOD military or civilian employee. The DOD PSABs consider the administrative judge's recommendation and other evidence when they reach and issue a final written decision regarding the security clearance to the employee, but they are not required to follow the judge's recommendation. The employee is provided a final written decision from one of the three military department PSABs, which cannot be appealed, but the employee generally is not privy to the administrative judge's recommendation. An exception is the Washington Headquarters Services appeal board, which, in its written decision, typically provides the employee with a copy of the administrative judge's recommendation and the hearing transcript. Army PSAB officials explained that providing the judge's recommendation to the employee could be misleading because the individual might assume that it was the final decision and would be disappointed if the PSAB reached a different decision. The level of detail contained in the written decisions received by employees after the personal appearance also varied, with contractors receiving more information about the rationale for the decision than military and federal civilian personnel in the military departments. When we reviewed Defense Office of Hearings and Appeals administrative judge decisions that are provided to contractors, we found that they contained detailed findings of fact, discussions of applicable law and policy, and analysis, which provide an employee with an in-depth understanding of the rationale for the judge's decision. In reviewing versions of the PSAB decisions that are provided to military and civilian employees, however, we found that the Army and Air Force PSAB decisions were in a short memorandum format stating that the case records have been reviewed and that the board either sustains the revocation decision or reinstates eligibility for access to classified information. We found that only the Navy PSAB decisions provided a more detailed explanation of the rationale for the revocation of a security clearance. DOD guidance states that the PSAB's written decision will provide the reasons that the PSAB either sustained or overturned the original determination of the adjudication facility, and that the PSAB's final written determination shall state its rationale. According to Defense Office of Hearings and Appeals officials, DOD's process for its military and civilian workers provides less transparency, quality, and accountability compared to the process for contractor personnel. Specifically, these officials stated that DOD's process for military and civilian employees makes it difficult to determine, by reviewing the decision, how or why component PSAB cases are decided the way they are.
The officials also stated that they would like more transparency with regard to whether or not the component PSABs agreed with the administrative judge's recommendation, and they stated that, as of summer 2013, they are now able to track this information. DHS and DOD employees are provided different rights to present and cross-examine witnesses during personal appearances, as the departments have implemented the executive orders differently, resulting in contractors, DOD employees, and some DHS employees receiving greater opportunities to cross-examine witnesses than other DHS employees. Executive Order 10865 explicitly provides contractors the opportunity to cross-examine persons who have made oral or written statements adverse to the employee, subject to certain exceptions. In contrast, Executive Order 12968, which covers military and civilian employees and contractors, is silent on the opportunity to do so. DOD military and civilian employees are permitted to cross-examine witnesses under a memorandum from the Under Secretary of Defense for Intelligence issued in November 2007. Officials from the Defense Office of Hearings and Appeals and the Office of the Under Secretary of Defense for Intelligence stated that this was done as a matter of fundamental fairness, to give military and civilian employees an opportunity that had been provided to contractors for years. Within DHS, whose program is governed by DHS Instruction Handbook 121-01-007, The Department of Homeland Security Personnel Suitability and Security Program (June 2009), employees at some components have not been allowed to cross-examine witnesses during the personal appearance, while employees at other DHS components, such as U.S. Citizenship and Immigration Services and U.S. Immigration and Customs Enforcement, have been allowed to cross-examine witnesses during the personal appearance. U.S. Immigration and Customs Enforcement officials stated that employees were allowed to call and question witnesses during the personal appearance on a case-by-case basis. DHS officials from the Office of the Chief Security Officer told us that all employees should be treated the same across DHS's components. They said that they would clarify the wording in the instruction, a draft of which has been under revision for more than a year; however, the officials had not decided whether they would revise the instruction to allow or prohibit the testimony or cross-examination of witnesses, and they could not tell us when the revised instruction would be finalized. Until the processes are consistent for all employees, and such processes are finalized in an instruction, employees within DHS may continue to have different rights concerning cross-examination of witnesses during the revocation process, depending on which component they work for. Each of DOD's three military departments—of the Army, the Navy, and the Air Force—has a PSAB that reviews cases and makes final eligibility determinations for access to classified information for that department's military and civilian employees. A fourth appeals board is administered by DOD's Washington Headquarters Services, which reviews civilian employee cases for all other DOD agencies. A fifth appeals board is administered by the Defense Office of Hearings and Appeals, which reviews cases for all contractors in the industrial security program, including DOD and DHS.
We have previously reported that overlap occurs when programs have similar goals, devise similar strategies and activities to achieve those goals, or target similar users, and that duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. While overlap in efforts may be appropriate in some instances, especially if agencies can leverage each other's efforts, in other instances overlap may be unintended, may be unnecessary, or may represent an inefficient use of U.S. government resources. DOD's multiple PSABs could constitute inefficient overlap because more than one component within DOD provides the same service. In 2010, the Secretary of Defense directed a series of initiatives designed to reduce duplication, overhead, and excess, and instill a culture of savings and cost accountability across the department. As part of this initiative, in March 2011, the Secretary approved a recommendation to colocate and consolidate the overlapping security clearance appeal boards with the Defense Legal Services Agency, similar to the colocation and consolidation of the service adjudication activities that were previously directed by the base realignment and closure process and the Deputy Secretary of Defense. The Secretary directed a completion date of September 30, 2011, for this recommendation. However, this recommendation had not been implemented at the time of our review. A Defense Office of Hearings and Appeals official explained that this direction had not been cancelled, but that it had not been implemented because of opposition from the military departments. Officials from the Navy PSAB stated that the direction had not been implemented because the PSABs had not received any instructions or guidance to implement it from the Defense Legal Services Agency. Similarly, the Army PSAB attributed the lack of action to a focus on completing the consolidation of DOD's adjudication facilities, as well as to the absence of policy direction from the Under Secretary of Defense for Intelligence. An official from the Office of the Under Secretary of Defense for Intelligence explained that there has been an impasse since 2011 over a legal question regarding whether the PSAB consolidation directed by the Secretary of Defense is consistent with Executive Order 12968. Specifically, Army and Air Force PSAB officials stated that PSAB consolidation is not consistent with Executive Order 12968, explaining that the review proceedings outlined in the executive order provide an employee with revoked access to classified information the opportunity for a final appeal in writing to a high-level panel appointed by the agency head. Army and Air Force PSAB officials stated that "agency head" refers to the Secretaries of the military departments, not the Secretary of Defense. Air Force PSAB officials stated that the Secretary of Defense's direction for PSAB consolidation would require modifying section 5.2 of Executive Order 12968, and that removing PSABs from the services would enhance neither due process nor national security. Air Force PSAB officials also explained that the procedures used to review the DOD efficiency proposals did not include an opportunity for the service Secretaries to review and comment, and thus the memorandum directing consolidation of the PSABs was signed before the military departments' equities in maintaining their own PSABs were captured for consideration.
However, an official from the Defense Office of Hearings and Appeals explained that the term "agency head" as used in the executive order includes the Secretary of Defense. Further, an official from the Office of the Under Secretary of Defense for Intelligence explained that, by law, the Secretary of Defense has authority, direction, and control over the Department of Defense, to include the Secretaries of the military departments, and that the Secretary of Defense's efficiency decisions are decisions as the head of DOD that apply to all subordinate components of the department, including the Secretaries of the military departments. This official stated that the interpretation of the language in the executive order was ultimately a legal question. DOD guidance provides that the DOD General Counsel shall provide advice and guidance as to the legal sufficiency of procedures and standards involved in implementing the DOD personnel security program. In addition to the disagreement about the legal authority to consolidate the PSABs, there is disagreement within the department about the risks and benefits of implementing the Secretary of Defense's direction to consolidate the PSABs. Officials from the Army, the Navy, and the Air Force PSABs explained that consolidating the PSABs would limit the military department Secretaries' ability to consider circumstances and risk in light of each service's special or sensitive programs, missions, or needs. Washington Headquarters Services officials stated that separate PSABs were more likely to be sensitive to their component's special programs, missions, and needs than a central DOD PSAB. Air Force PSAB officials stated that, from their past experience, the DOD Consolidated Adjudications Facility's (CAF) statement of reasons for revoking access to classified information is often narrowly focused and fails to weigh all issues appropriately, and that in personal hearings the Defense Office of Hearings and Appeals administrative judges sometimes fail to challenge statements made by employees that immediately raise flags with PSAB members based on their background and experience. They stated that, with the DOD CAF making initial DOD-wide risk assessments for the military departments, the final revocation appeals should be decided by the individual departments. In contrast, officials from the Office of the Under Secretary of Defense for Intelligence and the Defense Office of Hearings and Appeals agreed that DOD PSAB consolidation is in keeping with the principle of reciprocity, under which risk is managed DOD-wide rather than on a component basis. They stated that, with the DOD CAF, the components have already lost their ability to manage risk with respect to favorable adjudications because the CAF makes those decisions for the component when personnel security clearances are initially granted. Officials from the Defense Office of Hearings and Appeals stated that the requirement that agencies grant clearance reciprocity has removed the role that service-specific programs may play in clearance determinations that were completed by another agency. Officials from the Office of the Under Secretary of Defense for Intelligence explained that consolidation would bring standardization and consistency of quality, objectivity, and experience to the process for personnel security appeals, and would result in legal expertise being part of every appeal process, which would help ensure that national security needs and procedural fairness are appropriately balanced.
Further, Defense Office of Hearings and Appeals officials stated that contractors have the benefit of independent fact-finding and an independent written decision by officials who do not work for the component, which provides an important check against unfairness and the taint of undue influence. These officials stated that having decision makers outside of the component's chain of command helps to reduce the opportunity for the perception or reality that those in the individual's component or chain of command can influence the outcome of the process. Officials from the DOD CAF cautioned that DOD needs to study the implications of moving to a consolidated appeal board to make an informed decision on any process modifications, efficiencies, and resource implications prior to executing the direction to consolidate the appeal boards. Army officials also suggested that establishing a working group to review the efficiencies, feasibility, way ahead, and timelines would be beneficial in formulating a course of action for implementing the direction to consolidate the PSABs. Until the DOD General Counsel resolves the disagreement within the department about the legal authority to consolidate the PSABs, and collaborates with the PSABs and the Under Secretary of Defense for Intelligence to address any other obstacles to consolidation, the department will continue to face delays in implementing the Secretary of Defense's direction. Our review of DHS and DOD department- and component-level guidance, as well as the components' communication letters to employees undergoing a revocation proceeding, found that both departments generally provided information to employees about their rights under the two executive orders. However, some components' implementation of the clearance revocation process could potentially be inconsistent with the executive orders or agency policy in two areas: having an opportunity to be provided any additional information upon which a revocation appeal determination is based, and communicating the right to counsel. Navy and Army policies could allow the Navy and Army PSABs to collect and consider new information related to the revocation decision without informing the employee or giving the employee the opportunity to review or respond to the new information. For example, Navy Manual M-5510.30 strongly encourages the employee's command to submit additional information directly to the Navy PSAB after military and civilian personnel have made their personal appearance in front of the administrative judge. This could allow new information, upon which an appeal of a revocation decision might be denied, to be introduced without the individual's awareness. Executive Order 12968, however, states that employees who are determined not to meet the standards for access to classified information shall be provided with a reasonable opportunity to reply in writing to and request a review of the determination, and to request any documents, records, and reports upon which a revocation is based, to the extent that the documents would be provided under the Freedom of Information Act or Privacy Act. Similarly, Army Regulation 380-67 could allow the Army PSAB to collect information without informing the employee or giving the employee the opportunity to respond to the new information.
The Army regulation regarding appeal of a revocation decision requires the employee to respond to the decision through his or her immediate commanding officer. The Army regulation further requires that the commanding officer recommend for or against reinstatement of the security clearance and provide a rationale addressing the issues in the decision. As written, the Army regulation is silent on whether the commanding officer's comments will be provided to the individual so that he or she can review and respond to the information contained in them. Army PSAB officials said that the PSAB is not responsible for providing employees with this information. Further, Army PSAB officials noted that in cases where a security clearance was revoked because of financial considerations, the Army PSAB would request additional documentation concerning any actions that the employee has taken to resolve delinquent debts, but stated that the Army PSAB will obtain credit reports directly from the credit reporting bureaus and compare them to the documents in the appeal package. Army PSAB officials explained that the credit report is accessed solely to verify the existence or resolution of disqualifying financial information that formed the basis of an unfavorable determination by the DOD CAF, so it is not routinely provided to the employees, but they said that it would be provided upon request. This raises concerns about whether the employee has an opportunity to review or respond to information in the credit reports obtained directly by the PSAB, because credit reports may not always be accurate. Until the Army regulation is revised to specify that all information provided to the Army PSAB by the command or obtained by the Army PSAB itself must also be shared with the individual, along with an opportunity to respond to this information, the Army PSAB could potentially deny employees some of the protections provided in the executive order. DOD security clearance revocation prehearing memorandums inform all types of employees—military personnel, DOD civilians, and contractors—of their right to obtain legal representation, and allow for discussion of any relevant issues. In contrast, at the time of our review, one DHS component—the Coast Guard—was not notifying its military personnel of their right to be represented by counsel or other representative at their own expense, but rather was erroneously informing military personnel that they had no right to counsel. While Executive Order 12968 and DHS Instruction Handbook 121-01-007 specify that employees shall be informed of their right to be represented by counsel or other representative at their own expense, letters the Coast Guard sent its military personnel appealing to the second-level deciding authority stated "you may not have an attorney or anyone else with you during this administrative process." The existing Coast Guard instruction states that if the final decision results in a revocation, the employee will be advised of his or her rights, but it does not specify what those rights are. During our review, the Coast Guard Security Center Director acknowledged this disparity and stated that the letters would be changed to provide the required notification to military personnel that they have a right to be represented by counsel or other representative at their own expense during the personal appearance before the second-level deciding authority.
We subsequently reviewed a revised letter, and it had been modified to inform military personnel of their right to be represented by counsel. In addition, the Director said he would advocate for modifying the Coast Guard instruction to formalize this change. Currently, this Coast Guard instruction is undergoing revision, and the updated version is expected to be published in the fall of 2014. According to a Coast Guard official, the revised instruction will address this issue, but we have not reviewed the revision to determine whether this change was included. In addition, although the Coast Guard's communication letters inform Coast Guard civilian employees of their right to be represented by counsel or other representative at their own expense during the personal appearance, they impose some stipulations. The Coast Guard letters, unlike those sent by other DHS components, state that only the employee's account of the issues can be heard during the meeting, that the employee's counsel or representative cannot instruct the employee during the meeting, and that the employee is limited to 30 minutes to appear in person and present any relevant information. The Coast Guard Security Center Director said these stipulations are imposed to avoid allowing the administrative review to become a protracted and adversarial legal proceeding in which objections are injected or cross-examinations are sought. To his knowledge, the 30-minute limit has never been enforced, and it is now under review for removal from the Coast Guard instruction. However, until the Coast Guard instruction and related communication letters are revised to clearly and consistently communicate the rights provided by the executive orders, military and civilian employees within the Coast Guard are at risk of not being treated similarly to one another or to employees in other DHS components. DHS has taken recent steps in response to recommendations made in a December 2013 DHS Office of Inspector General report, and individual DHS and DOD components perform some oversight over aspects of the revocation process. However, neither department has performed an overarching, department-wide evaluation of the quality of the revocation process, developed performance measures, or collected data to measure the quality of the process. DHS has taken some recent steps to improve the quality of the revocation process. Specifically, the DHS Office of Inspector General report found that appointments to the DHS Security Appeals Board and the composition of the board had not been made in accordance with DHS policy. For example, it found that one member served on the Security Appeals Board when an employee in his chain of command was the appellant, even though DHS guidance provides that board members cannot have a current supervisory relationship with the employee whose appeal is being heard. The Inspector General report recommended that the Director of the U.S. Secret Service ensure that the Uniformed Division Assistant Chief, or other officials in the agency's chain of command, do not rule on appeals by Uniformed Division employees. In March 2014, the Secret Service issued a new directive describing the composition of the board and how a board member would be replaced if a case involved an employee in his or her chain of command. Further, some DHS component officials told us that their component provides oversight during the revocation process. For example, officials from U.S.
Citizenship and Immigration Services said that revocation actions are reviewed throughout the process: by management at the initial stage, when determining whether the action is warranted, and at each subsequent stage, and by legal counsel prior to approval and signature of the revocation letter. Similarly, at DHS Headquarters, U.S. Immigration and Customs Enforcement, and the Federal Emergency Management Agency, officials stated that revocation determinations undergo multiple stages of review, including by the adjudicator's first-line supervisor, the Personnel Security Division Director, and an attorney. Other components, such as the Transportation Security Administration and DHS Headquarters, perform reviews after the process has been completed to determine whether policies and procedures were consistently followed prior to reaching the final case determination. In addition, DHS Headquarters officials said that they review all DHS component case files before the cases are sent to the Security Appeals Board. Within DOD, although the Under Secretary of Defense for Intelligence is responsible for developing, coordinating, and overseeing the implementation of DOD policy, programs, and guidance for personnel security, the extent of oversight over the clearance revocation process and the use of related metrics varies across the department. Officials explained that the Office of the Under Secretary of Defense for Intelligence conducts annual quality reviews of DOD security clearance adjudicative determinations, but said that their oversight efforts have been hindered by the lack of available data in JPAS, as we previously discussed. They explained that they do not have access to the agency-specific case-management systems, and that they have sent out manual data requests in the past but have experienced difficulties in receiving responses that reflect a consistent interpretation of the data. Furthermore, officials from the four PSABs we met with stated that they collect appeal data—such as the number of cases reviewed, favorable decisions, unfavorable issues, and the number of days to process an appeal—and that they generated and submitted internal reports with this information to their respective leadership, but these appeal board officials did not elaborate on how the information provided to their superiors was used to perform oversight. ODNI has exercised some oversight of security clearance revocations by reviewing policies and procedures within some agencies; however, it has not established any metrics to measure the quality of the process government-wide and has not reviewed security clearance revocation processes across the federal government to determine the extent to which policies and procedures should be uniform. In addition to DHS and DOD, ODNI also has oversight responsibility for the security clearance process government-wide. In June 2008, Executive Order 13467 designated the DNI as the Security Executive Agent to, among other things, direct the oversight of determinations of eligibility for access to classified information or to hold a sensitive position, and assigned the DNI responsibility for developing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of investigations and adjudications relating to determinations of eligibility for access to classified information or to hold a sensitive position.
Executive Order 13467, Reforming Processes Related to Suitability for Government Employment, Fitness for Contractor Employees, and Eligibility for Access to Classified National Security Information (June 30, 2008), also provides the DNI the authority to issue guidelines and instructions to the heads of agencies to ensure appropriate uniformity, centralization, efficiency, effectiveness, and timeliness in processes relating to determinations by agencies of eligibility for access to classified information or eligibility to hold a sensitive position. This executive order further states that agency heads shall assist in carrying out any function under the order, which includes implementing any policies or procedures developed pursuant to the order. Executive Order 13467 also designated the Director of OPM as the Suitability Executive Agent, responsible for developing and implementing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of investigations and adjudications relating to determinations of suitability for government employment. An element addressing an agency's process to deny or revoke a clearance was added to these reviews. Despite these efforts at the component level, neither DHS, DOD, nor ODNI has evaluated the quality of the revocation process across the specific departments or government-wide. DHS and DOD do not perform overarching, department-wide oversight of the revocation process, and neither department has developed metrics or collected data to measure the quality of the revocation process. Furthermore, ODNI officials acknowledged that metrics have not been established to measure the quality of the security clearance revocation process. In November 2013, we testified that executive-branch agencies do not consistently assess quality throughout the personnel security clearance process, in part because they have not fully developed and implemented metrics to measure quality in key aspects of the personnel security clearance process. Having assessment tools and performance metrics in place is a critical initial step toward instituting a program to monitor and independently validate the effectiveness and sustainability of corrective measures. Our work has also found that agency managers need performance information as a basis for decision making to improve programs and results, identify problems in existing programs and develop corrective actions, and identify priorities and make resource decisions. ODNI officials stated that they currently report some limited metrics on revocations for the intelligence community as part of their reporting in response to the Intelligence Authorization Act for Fiscal Year 2010. They said that they would like to establish more robust metrics for reciprocity, quality, and out-of-scope periodic reinvestigations, and that from there it would be a natural progression to look at developing metrics for revocations, denials, and other areas. However, they stated that due to constrained resources and other priorities, they were uncertain whether they could make a business case to allocate the resources.
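To illustrate what such revocation metrics could look like in practice, the following sketch computes basic volume, outcome, and timeliness measures from appeal case records. This is a minimal illustration under stated assumptions: the record fields and the measures themselves are ours, not features of any actual DHS, DOD, or ODNI case-management system.

from dataclasses import dataclass
from datetime import date
from statistics import median
from typing import Optional

# Hypothetical appeal-case record; these fields are illustrative assumptions.
@dataclass
class RevocationCase:
    proposed: date           # date the revocation was proposed
    decided: Optional[date]  # date of the final appeal decision, if reached
    sustained: bool          # True if the revocation was sustained on appeal

def revocation_metrics(cases):
    """Compute measures of the kind the report says are missing: number of
    proposals, completed appeals, sustain rate, and median days to decision."""
    decided = [c for c in cases if c.decided is not None]
    days = [(c.decided - c.proposed).days for c in decided]
    return {
        "proposed_revocations": len(cases),
        "completed_appeals": len(decided),
        "sustain_rate": sum(c.sustained for c in decided) / len(decided) if decided else None,
        "median_days_to_decision": median(days) if days else None,
    }

# Example with two completed appeals (one sustained) and one pending case.
cases = [
    RevocationCase(date(2013, 1, 10), date(2013, 6, 1), sustained=True),
    RevocationCase(date(2013, 2, 5), date(2013, 5, 20), sustained=False),
    RevocationCase(date(2013, 9, 1), None, sustained=False),
]
print(revocation_metrics(cases))

Tracking even these few figures would give a department the baseline workload and timeliness information that, as discussed below, neither department currently collects.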
The absence of data on the number of persons who receive a proposal to revoke their eligibility to access classified information, as discussed above, combined with the likelihood that the shift toward increased continuous evaluation may result in more revocation proposals, makes it increasingly important for agencies to have performance measures and data to ensure a high-quality revocation process. Without performance measures and data to assess the quality of the personnel security clearance revocation process, individual departments, such as DHS and DOD, and ODNI lack information to identify and resolve potential problems in the process and make informed decisions about potential changes to the program. Furthermore, the security clearance revocation process implementation differences we identified at DHS and DOD continue in part because ODNI has not reviewed security clearance revocation processes across the federal government to determine the extent to which policies and procedures should be uniform. Specifically, ODNI has not assessed whether the existing security clearance framework, with its parallel processes for contractors and government employees, or a single process applicable to all types of employees would best facilitate the effective, efficient, consistent, and timely completion of security clearance revocation proceedings. When asked about the different processes, ODNI officials stated that the executive orders provide broad guidelines that give agencies the flexibility to implement a review and appeal process that best fits the agency's needs, and that there is no single solution that all agencies must follow. Additionally, Executive Orders 12968 and 10865 do not require a uniform government-wide process, and in fact establish two parallel processes, one for contractors and one for government employees. The ODNI officials explained that from an efficiencies perspective, standardization of the security clearance revocation process makes sense, but said that ODNI has not had a reason or purpose to perform an extensive review of the revocation processes. The ODNI officials stated that they had not heard complaints regarding fairness while conducting their reviews, and had only heard anecdotal concerns that the process took too long. Furthermore, ODNI has not established any policies and procedures to facilitate government-wide consistency in security clearance revocation proceedings. ODNI officials stated that publishing guidance for the appeal process might be worth pursuing, but would have to be prioritized in light of competing priorities and limited resources. Given the inconsistencies we have identified in the revocation processes at DHS and DOD discussed previously, combined with the requirement of clearance reciprocity and the recommendations to implement continuous evaluation, the DNI's new role as Security Executive Agent places ODNI in a unique position to examine whether any changes to the existing structure with its parallel revocation processes might be warranted. Until ODNI reviews the effectiveness and efficiency of all aspects of the security clearance revocation process, and DHS and DOD take specific actions, it is difficult to determine whether the existing structures, with different processes for military and civilian personnel and for contractors, are the most appropriate approach to meet national security needs.
DHS and DOD employees whose eligibility to access classified information has been revoked may not have consistent employment outcomes, such as reassignment or termination, because these outcomes are generally dependent on several factors, including the agency's mission and needs and the manager's discretion. Communication between personnel security and human capital offices at DHS and DOD varies, because human capital and personnel security processes are intentionally managed separately, and most components could not readily ascertain the employment outcomes of individuals whose clearances had been revoked. Employment outcomes, such as reassignment or termination, for DHS and DOD civilian and military employees whose personnel security clearance has been revoked generally depend on a number of factors, including the agency's mission and needs. Key to the decision are the judgment of the employee's supervisor or commander and whether a job is available that the employee is qualified to perform and to which the supervisor or commander considers it appropriate or possible to reassign the employee. DHS officials elaborated that if an individual's clearance is revoked, then he or she is no longer qualified to perform the job he or she was hired for, and so, depending on the policies at the component where the employee works, the agency may have no obligation to reassign the individual to another position or find another position for the employee. DOD officials stated that in many places within the department, all positions are sensitive, so there may be no positions to which an employee could be reassigned. DHS officials stated that in agencies where all positions require a clearance, holding a clearance is usually a condition of employment. Components within DHS and DOD varied as to whether they reassign an employee after a security clearance revocation. Officials from five DHS components—U.S. Customs and Border Protection, U.S. Citizenship and Immigration Services, U.S. Coast Guard, U.S. Immigration and Customs Enforcement, and the Transportation Security Administration—stated that management at their component could decide whether to reassign a civilian employee to a position with duties not requiring access to classified information. For two DHS components, the U.S. Secret Service and DHS Headquarters, reassignment is generally not an option because all or almost all positions in these components require a security clearance. A DHS Headquarters general counsel official stated that DHS has no official policy regarding reassignment, so that it can preserve its administrative options. However, for DHS military personnel, Coast Guard officials said their component has guidance stating that in cases where a clearance is terminated for cause and the employee is not recommended for separation from the Coast Guard, the employee will be reassigned to a position that does not require a security clearance. For most DOD civilian and military personnel, officials said that supervisors or commanders have discretionary authority to determine how to treat employees whose security clearance has been revoked. For DOD civilian employees, Army, Air Force, Marine Corps, and Washington Headquarters Services officials stated that supervisors have discretion to reassign employees, while Navy officials said that civilian employees will undergo a removal action after all appeals are completed if access to classified information is revoked.
Additionally, DOD department-level and Air Force guidance does not require separation of officers whose clearances have been revoked (DOD guidance states that officers may be separated from military service), whereas Army and Navy guidance requires the discharge of an officer who receives a final revocation of a security clearance. However, two Army regulations concerning officers appear to contradict each other. While one Army regulation states that revocation of an officer's security clearance requires that the officer be discharged, and further states that this requirement cannot be waived, a different Army regulation regarding reassignment of officers provides guidance for the reassignment of officers whose security clearance has been revoked. For enlisted military personnel whose security clearance has been revoked, officials from the military services stated that the Army and the Marine Corps reassign military personnel to the extent that an alternative position is available, and the Air Force may reassign military personnel, while the Navy will generally only reassign military personnel until a final revocation decision is made by the PSAB. Army officials noted, however, that a clearance revocation should affect a soldier's ability to reenlist, because as of 2005 all soldiers enlisting in the Army are subject to an investigation for eligibility to access classified information at the secret level, regardless of the access requirements of their position. Navy officials said that since 2011, all Navy positions require secret clearance eligibility as a condition of employment, regardless of whether the position requires access to classified information. As a result, a sailor who has lost his or her security clearance generally will be separated from the Navy. Given their components' policies and procedures regarding reassignment, officials from four DHS components—U.S. Citizenship and Immigration Services, U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and the Transportation Security Administration—told us that it would be possible for similarly situated employees under investigation for the same infraction to be treated differently if their clearances were revoked. When asked how the quality of the process could be improved once a final revocation decision has been made, Immigration and Customs Enforcement officials suggested that the agency could identify a single human capital deciding official to review all employment outcomes, to ensure consistency of employment status decisions across the agency. The officials explained that knowing and tracking the employment outcomes of individuals who lost their clearances would benefit the agency, because disparate treatment would not be an appropriate outcome. For DHS Headquarters and the Secret Service, which do not reassign personnel whose clearances have been revoked, all employees will be treated similarly because employees who lose their clearances will be terminated. Given the varying policies and procedures at DOD components, similarly situated civilian and military personnel whose security clearances have been revoked may be treated differently. Communication between personnel security and human capital offices at DHS and DOD varies, but a lack of communication between these offices could result in adverse employment actions being taken prematurely or in inappropriate use of personnel security or human capital processes.
According to DHS and DOD officials, the personnel security revocation processes and human capital disciplinary or adverse action processes are intended to be separate and distinct, to help ensure independence and protect national security. DHS and DOD officials stated that an adverse disciplinary personnel action could be taken based on the same underlying offense that led to the revocation proceeding, and that, if that occurs, the misconduct and personnel security processes can run in parallel or consecutively. However, after a final decision is made to revoke a personnel security clearance, DHS and DOD personnel security officials said that their role in the process is over, and that it is a human capital decision as to what happens next to the individual. A DHS Headquarters general counsel official further stated that any personnel actions that result from the revocation of a personnel security clearance are based exclusively on the fact that the individual is no longer qualified for his or her position, not on the reasons underlying the revocation action. Good human capital policies and practices, including appropriate practices for evaluating, counseling, and disciplining personnel, are critical factors that affect the quality of internal controls. Moreover, to run and control operations and achieve goals, agencies must have relevant, reliable, and timely communications relating to internal as well as external events; effective communications should occur in a broad sense, with information flowing down, across, and up the organization. Personnel security offices at some DHS and DOD components said they worked very closely with human capital officials throughout the personnel security clearance revocation process, while at other components there was very little interaction between the offices. For example, Secret Service officials said that they have excellent communication between the personnel security and human capital offices, and that personnel from both offices meet at every step of the process. Similarly, Coast Guard officials stated that their human capital and personnel security offices work closely with each other throughout the revocation process with respect to civilian employees. In contrast, Immigration and Customs Enforcement officials stated that they are unaware of any specific DHS human capital policies and procedures that align with or support the security process. These officials also stated that better coordination and communication between human capital and personnel security offices is needed during the revocation process, and that increased coordination and communication could improve the quality of the process. Similarly, DHS Headquarters employee relations officials said that their office is not involved or informed by the personnel security office throughout the security revocation and appeal process, which extends from the initial decision to revoke an employee's security clearance through the three levels of appeal. They explained that their office gets involved after the decision to revoke the employee's security clearance is final, and that the human capital office communicates with the personnel security office when a personnel action is necessary. They explained that this communication is not to share the details of the underlying offense, but to notify the human capital office or supervisor of the status of the investigation.
An official from the DHS Office of General Counsel stated that the office is involved throughout the revocation process to provide legal sufficiency reviews of clearance determinations and to advise management during any clearance-related personnel action. Within DOD, Army human capital officials stated that the appropriate offices are not informed of the revocation of a security clearance due to weaknesses in information sharing with other Army offices. They explained that there is no standard time frame or process for the civilian personnel office to be notified about a civilian employee's clearance revocation; the office is typically notified when the supervisor seeks advice regarding what action to take now that the employee's clearance has been revoked. In contrast, Navy human capital officials stated that the nature of the adverse information may trigger employee misconduct actions as well as actions to revoke a security clearance, thus making communication among the commanding officer, the security manager, and the serving human resource office essential. They said that, generally, the security officer and the human resource office interact at all stages of the incident. Similarly, Marine Corps headquarters officials stated that their human resources office works with the local command and brings the local security manager into the process from the very beginning of the revocation process. Air Force officials stated that the local human capital office is normally informed by the organization when an employee's security clearance is revoked. Washington Headquarters Services human capital officials stated that their personnel security office, and occasionally the local component security manager, notifies their human capital office when an employee's security clearance has been revoked. A lack of communication between the human capital and personnel security offices could result in adverse employment actions being taken prematurely or in the inappropriate use of personnel security processes in lieu of human capital processes. For example, DOD officials stated that one issue that can arise is that human capital officials could fire an individual before all of the appeals associated with a revocation action are completed. If the termination was based upon a separate adverse action proceeding, that action would be appropriate; however, if the action was based on the clearance revocation, then, under DOD regulation, subject to certain exceptions, termination should not take place until after the revocation decision is final, after all appeals have been completed. Defense Office of Hearings and Appeals and other DOD officials stated that some components are inappropriately terminating employees due to loss of a security clearance before the personnel security clearance appeal process is completed. In addition, ODNI officials explained that some agencies could use the personnel security process to handle personnel disciplinary issues, which is not appropriate. For example, Defense Office of Hearings and Appeals officials said that retaliation against whistleblowers is perceived, fairly or not, as a continuing problem in the personnel security clearance arena. Ordinarily, most federal civilian employees have a right to appeal serious adverse employment actions taken against them to the Merit Systems Protection Board. However, in the security clearance context, federal case law has limited the scope of the board's review of adverse actions.
Specifically, the board may review appeals of adverse employment actions resulting from a denial or revocation of a security clearance, or a determination that an employee is not eligible to hold a sensitive position, for specific procedural issues, but the board cannot review the substance of a security clearance denial or revocation, or of a finding that an employee is not eligible to hold a sensitive position. DOD officials said that the personnel security and human capital processes are designed and intended to be separate, in part to protect the employee from someone trying to exercise undue influence over the disciplinary process, as well as to protect national security. ODNI officials stated that there are legal restrictions on the type of information that can be shared between the human capital and personnel security offices, but said that further review of what information should be shared between the two offices could be beneficial. Until DHS and DOD develop guidance specifying what information can and should be communicated between human capital and personnel security officials, and at what decision points during the revocation process that information should be communicated, DHS and DOD will be hampered in their ability to combat the perception that the personnel security process is being used to circumvent procedural protections ordinarily provided to federal employees subject to adverse employment actions, and that individuals are not being treated in a fair and consistent manner. DOD and most DHS components cannot readily ascertain the employment outcomes of individuals whose clearances have been revoked, because these data are not readily available. Within DOD, officials representing all DOD civilian and military personnel—in the Army, the Navy, the Marine Corps, the Air Force, and Washington Headquarters Services—stated that they do not track and would not be able to report the human capital outcomes of employees with revoked security clearances. For example, Army officials explained that there is a resignation code in their human capital database, but that code covers all resignations for any reason, and there may or may not be a remark on the agency's personnel action form (known as an SF-50) that would relate the resignation to a security clearance issue. Moreover, Army officials explained that if an individual were removed as a result of a security clearance revocation, the removal code could be attributed to failing to meet any one of several conditions of employment, if maintaining eligibility for a security clearance was one of the requirements listed in the individual's position description. An official from the Office of the Under Secretary of Defense for Personnel and Readiness explained that the separation codes applied to military personnel are similarly broad in nature, and would include separations for reasons other than revocation of a security clearance. Officials in some DHS components said they could manually gather information about employment outcomes from clearance revocations, but they explained that doing this would be labor-intensive because their human capital system would need to be cross-referenced against the personnel security system. For example, U.S.
Immigration and Customs Enforcement officials commented that there is no DHS or Immigration and Customs Enforcement policy that requires the collection of data and reporting of outcomes for employees with revoked security clearances, but stated that they could determine the employment outcomes on a piecemeal basis by making a data query for each employee record. However, Coast Guard officials said that they maintain a spreadsheet of all disciplinary and adverse actions taken against Coast Guard civilian employees. Similarly, Transportation Security Administration personnel security officials stated that their component can identify the outcomes of employees with revoked security clearances with help from human capital officials. The Intelligence Authorization Act for Fiscal Year 2010 requires the President to submit an annual report to Congress on, among other things, the total number of personnel security clearances across the government, categorized by government employees and contractors who held or were approved for a security clearance. In response to this requirement, ODNI has prepared and submitted a report each year, with the most recent report being issued for fiscal year 2013. However, we found that the DOD data included in this report to Congress likely overstate the total number of DOD employees eligible to access classified information, in part because JPAS does not have up-to-date information about the current population of DOD employees. Without accurate data, DOD's ability to reduce the total population of clearance holders, minimize risk, and reduce costs to the government will be hampered. To measure performance, it is important that organizations have sufficiently complete, accurate, and consistent data to document performance and support decision making. Further, one of the five internal control standards that define the minimum level of quality acceptable for internal control in the federal government states that information should be recorded within a time frame that enables management to carry out its responsibilities, and that operational information is needed in part to determine whether the agency is complying with applicable laws and regulations. The number of employees eligible to access classified information was obtained from JPAS, and includes all employees who had an active or valid confidential, secret, top secret, or sensitive compartmented information eligibility at the end of fiscal year 2013, and who did not have a separation date recorded in JPAS prior to the end of the fiscal year. When we asked DMDC officials why the number of employees eligible to access classified information was greater than the total number of employees for some of the DOD components, they provided some possible explanations for the discrepancy. For example, DMDC officials explained that the database includes individuals who have newly enlisted in the military services but who may not have begun their enlistment period yet, and these individuals would not be included in the employee totals.
However, we reviewed data reported by the Under Secretary of Defense for Personnel and Readiness for fiscal years 2010 through 2012, and found that the total number of all personnel who joined each year (not just those who joined with a delayed entry date) ranged from about 43,000 to 44,000 for the Air Force and around 40,000 to 42,000 for the Marine Corps, which is too few to explain the discrepancy of almost 200,000 for Air Force military personnel and 125,000 for Marine Corps military personnel. Furthermore, DOD officials said that the information in JPAS may not reflect changes in personnel status such as separations due to retirements, employee job transfers, and deaths. DMDC officials explained that JPAS receives data from the components' personnel centers, and DMDC is dependent on the components to send separation information. As a result, the number of DOD clearance holders included in the report to Congress likely overstates the total number of DOD employees eligible to access classified information, because it may include people whose clearance eligibility has not yet expired but who have separated from the department, since JPAS was not updated to reflect that separation information. ODNI officials stated that because DOD has the largest number of eligible persons in the federal government, any overstatement of DOD's data will have a greater effect on the reported totals than overstatements by other agencies. DMDC officials stated that since management of JPAS transitioned to DMDC in June 2010, DOD has conducted an extensive study of the quality of JPAS data. Specifically, they stated that DMDC has conducted more than 127 data-quality initiatives affecting 165 million records. These initiatives include examining records where the access level did not match the eligibility level (such as where a person has top-secret access but only secret eligibility) and identifying duplicate records. In addition, a DMDC official said that the team working on the migration from JPAS to the new system has identified data-quality issues that they are working to resolve. Until DOD takes steps to review and analyze the discrepancies between the total number of employees and the number of employees eligible to access classified information, and to address any problems identified, DOD will be unable to rely on the information provided by JPAS to get an accurate understanding of the total number of DOD employees eligible to access classified information. The lack of visibility over this total will impede the department's ability to implement recommendations to improve the security clearance process. For example, the February 2014 OMB report on the security, suitability, and credentialing processes recommended that federal agencies reduce the total population of clearance holders to minimize risk and reduce costs. However, until DOD has an accurate baseline of the number of clearance holders in the department, DOD will be unable to determine the extent to which it can reduce or has reduced the number of clearance holders in accordance with this recommendation. Furthermore, having inaccurate data about the number of clearance holders within DOD will hinder the department's ability to provide oversight and accurate, complete information about security clearance eligibility to Congress as required by statute, to other offices within the department, and to interagency stakeholders.
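As an illustration of the kinds of data-quality checks DMDC described, the following sketch flags the two record problems discussed above: access levels that exceed adjudicated eligibility, and records that still show active eligibility for personnel whom a separate personnel file lists as separated. The field names and level ordering are assumptions for illustration; they are not the actual JPAS schema.

# Ordering of clearance levels, lowest to highest (illustrative assumption).
LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def access_exceeds_eligibility(records):
    """Flag records where granted access outranks adjudicated eligibility,
    e.g., top-secret access recorded against only secret eligibility."""
    return [r for r in records
            if LEVELS.get(r["access"], 0) > LEVELS.get(r["eligibility"], 0)]

def stale_eligibility(records, separated_ids):
    """Flag records with no separation date even though the personnel system
    lists the individual as separated; this is the pattern that can overstate
    the number of clearance-eligible employees."""
    return [r for r in records
            if r["separation_date"] is None and r["person_id"] in separated_ids]

# Hypothetical records: person 1 has mismatched access; person 2 has separated
# according to the personnel system but still shows active eligibility.
records = [
    {"person_id": 1, "access": "top secret", "eligibility": "secret", "separation_date": None},
    {"person_id": 2, "access": "secret", "eligibility": "secret", "separation_date": None},
]
print(access_exceeds_eligibility(records))  # flags person 1
print(stale_eligibility(records, {2}))      # flags person 2

Routine cross-checks of this kind against the authoritative personnel files would let DOD quantify, rather than estimate, how much the reported eligibility totals are overstated.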
In an environment where reciprocity of personnel security clearances is required among federal agencies, the consistent and transparent application of the processes governing whether individuals should retain their access to classified information has become increasingly important, so that all agencies can have reasonable assurance that only trustworthy individuals obtain and keep security clearances. Moreover, with the proposed implementation of continuous evaluation, the workload of agencies' security offices could significantly increase, making it critical for agencies to have a high-quality clearance revocation process in place. In the absence of requirements to track or report security clearance eligibility data and related metrics, DHS and DOD do not have key revocation data, such as the number of proposed revocations, to help oversee the revocation process or determine their workload for planning purposes. Although both DHS and DOD are generally meeting their responsibilities and providing information to employees about most of their rights under the two executive orders governing the revocation process, until Army, Navy, and Coast Guard guidance is updated, some employees could potentially be denied some of the protections provided in the executive orders. Additionally, given the different interpretations of the executive order and other obstacles to implementing the Secretary of Defense's direction to consolidate DOD's PSABs, absent a resolution of these issues by the DOD General Counsel, DOD will be unable to implement the direction and eliminate the overlap in this function. Further, DHS, DOD, and some of their components have implemented the requirements from the executive orders in different ways. Without consistent processes for all employees, regardless of which component they work for, employees within DHS may experience different opportunities to cross-examine witnesses during the revocation process. In addition, without performance measures to assess the quality of the personnel security clearance revocation process, ODNI, DHS, and DOD lack information to identify and resolve potential problems in the process and make informed decisions about potential changes to the program. Further, until the DNI, as the Security Executive Agent, reviews the efficiency and effectiveness of the existing revocation processes, it is unknown whether having different processes for military and civilian personnel and for contractors, and having inconsistencies between DHS and DOD, is the most appropriate approach to meet national security needs. Finally, without specific guidance from DHS and DOD on what information should be shared between personnel security and human capital offices, and when that information should be shared, DHS and DOD cannot ensure that individuals are treated in a fair and consistent manner. Similarly situated individuals who lose their security clearance may lose their employment or may remain employed and be reassigned, based on their supervisor's discretion. Moreover, without accurate data about the number of current DOD military and federal civilian employees eligible to access classified information, DOD is not well positioned to provide the information Congress has requested. DOD also will be hindered in implementing recommendations to reduce the total population of clearance holders in order to minimize risk and reduce cost.
We recommend that the Secretaries of Defense and Homeland Security, and the Director of National Intelligence, take the following 13 actions.

To help ensure that the respective DHS and DOD data systems contain sufficiently complete and accurate information to facilitate effective oversight of the personnel security clearance revocation and appeal process, we recommend that
the Secretary of Homeland Security direct the Chief Security Officer to assess the benefits and associated costs of tracking additional revocation and appeals information, and take any steps necessary to modify the Integrated Security Management System (ISMS) to track such information as is deemed beneficial; and
the Secretary of Defense direct the Under Secretary of Defense for Intelligence to take steps to ensure that data are recorded and updated in the Joint Personnel Adjudication System (JPAS) and the department's new systems, so that the relevant fields are filled.

To help ensure that all employees within DHS receive the same protections during their personal appearance, we recommend that the Secretary of Homeland Security direct the Chief Security Officer to revise and finalize the DHS instruction regarding the personnel security program to clarify whether or not employees are allowed to cross-examine witnesses during personal appearances.

To help ensure independence and the efficient use of resources, we recommend that the Secretary of Defense direct the DOD General Counsel to take the following two actions:
first, resolve the disagreement about the legal authority to consolidate the PSABs and, in collaboration with the PSABs and the Under Secretary of Defense for Intelligence, address any other obstacles to consolidating DOD's PSABs; and
second, if the General Counsel determines that there are no legal impediments and that other obstacles to consolidation can be addressed, we recommend that the Secretary of Defense direct the Defense Legal Services Agency to take steps to implement the Secretary of Defense's direction to consolidate DOD's PSABs.

To help ensure that all employees within DOD receive the same rights during the revocation process, we recommend that the Secretary of Defense
direct the Secretary of the Navy to revise Secretary of the Navy Manual M-5510.30 to specify that any information collected by the Navy PSAB from the employee's command will be shared with the employee, who will also be given the opportunity to respond to any such information provided; and
direct the Secretary of the Army to revise Army Regulation 380-67 to specify that any information collected by the Army PSAB from the employee's command or by the Army PSAB itself will be shared with the employee, who will also be given the opportunity to respond to any such information provided.

To help ensure that all employees are treated fairly and receive the protections established in the executive order, we recommend that the Secretary of Homeland Security direct the Commandant, U.S. Coast Guard, to revise the Coast Guard instruction for military personnel to specify that military personnel may be represented by counsel or other representatives at their own expense.

To facilitate department-wide review and assessment of the quality of the personnel security clearance revocation process, we recommend that the DNI, in consultation with the Secretaries of Defense and Homeland Security, develop performance measures to better enable them to identify and resolve problems, and direct the collection of related revocation and appeals information.
To help ensure that similarly situated individuals are treated consistently, and to facilitate oversight and help ensure the quality of the security clearance revocation process, we recommend that the DNI review whether the existing security clearance revocation process is the most efficient and effective approach. In this review, the DNI should consider whether there should be a single personnel security clearance revocation process used across all executive-branch agencies and workforces, including in areas such as the timing of the personal appearance in the revocation process and the ability to cross-examine witnesses. Further, to the extent that a single process or changes to the existing parallel processes are warranted, the DNI should consider whether there is a need to establish any policies and procedures to facilitate a more consistent process, and recommend as needed any revisions to existing executive orders or other executive-branch guidance. To facilitate coordination between personnel security and human capital offices regarding how a security clearance revocation should affect an employee's employment status, and to help ensure that individuals are treated in a fair and consistent manner, we recommend that the Secretary of Homeland Security direct the Under Secretary for Management to review and revise policy regarding coordination between the personnel security and human capital offices to clarify what information can and should be communicated between human capital and personnel security officials at specified decision points in the revocation process, and when that information should be communicated; and that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in consultation with the Under Secretary of Defense for Intelligence, to review and revise policy regarding coordination between the personnel security and human capital offices to clarify what information can and should be communicated between human capital and personnel security officials at specified decision points in the revocation process, and when that information should be communicated. To help ensure that the DNI report to Congress contains accurate data about the number of current DOD military and federal civilian employees eligible to access classified information, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Intelligence and the Under Secretary of Defense for Personnel and Readiness to review and analyze the discrepancies in the total number of employees and the number of employees eligible to access classified information, and take immediate steps to address the problems. We provided a draft of this report to DHS, DOD, and ODNI for review and comment. Written comments from DHS, DOD, and ODNI are reprinted in their entirety in appendices II, III, and IV, respectively. All three agencies generally concurred with our recommendations and provided additional technical comments, which we incorporated in the report where appropriate. In its written comments, DHS concurred with our four recommendations directed to it, and stated it has already taken steps to implement two of our recommendations.
First, regarding our recommendation to assess the benefits and associated costs of tracking additional revocation and appeals information, DHS concurred, stating that the Office of the Chief Security Officer has established an estimated completion date of December 2014 for a review to consider what additional data would be valuable to collect. Second, with respect to our recommendation to revise and finalize the DHS instruction regarding cross-examination of witnesses, DHS concurred, commenting that the Office of the Chief Security Officer has revised its personnel security instruction with unambiguous language on cross-examination of witnesses, and intends to issue the revised instruction by the end of the year. Third, for our recommendation to revise the Coast Guard instruction to specify that military personnel may be represented by counsel, DHS concurred, stating that the Coast Guard, pending the update of the Commandant Instruction on Personnel Security, issued an interim memorandum in May 2014 advising that individuals may have counsel or other representatives present at the second-level review at their own expense. DHS also stated that it believes the Coast Guard's actions to implement our recommendation regarding the revision of its instruction to specify that military personnel may be represented by counsel fulfill the intent of the recommendation, and requested that this recommendation be closed as implemented. While we are encouraged by the actions the Coast Guard has already taken, we continue to believe that it is important that the change be formalized in the updated Commandant instruction before we close out our recommendation. Moreover, the revision made by the Coast Guard in its interim memorandum appears to extend the right to counsel only to the personal appearance, and it does not make clear how employees will be informed of their right to counsel. Under Executive Order 12968, however, the right to counsel is not limited to one specific stage of the revocation process, and the order requires that employees be informed of this right. Finally, regarding our recommendation to review and revise policy regarding coordination between the personnel security and human capital offices, DHS concurred, commenting that the DHS Office of the Chief Human Capital Officer concurs with the concept of facilitating coordination between the personnel security and human capital offices, and will assess the process to determine appropriate communication points and provide appropriate guidance. DHS established an estimated completion date of March 2015 for this action. Further, in its technical comments, DHS noted that this recommendation would be more appropriately directed to the DHS Under Secretary for Management, who oversees both the Office of the Chief Human Capital Officer and the Office of the Chief Security Officer. As a result, we have modified the recipient of this recommendation as suggested. In its written comments, DOD fully concurred with six of the seven recommendations directed to it and partially concurred with the remaining one. First, with respect to our recommendation to ensure data are recorded and updated in JPAS and DOD's new systems, DOD concurred, stating that the Office of the Under Secretary of Defense for Intelligence will incorporate monitoring of data fields pertaining to the personnel security clearance revocation and appeal process into its quarterly oversight of DOD Personnel Security Program metrics.
Regarding our two recommendations to revise Navy and Army guidance, respectively, about sharing information collected by the respective PSABs with the employee, DOD concurred with both recommendations. DOD commented that the Navy plans to issue interim guidance by October 1, 2014, and issue the final revised Navy Manual by October 1, 2015. DOD further stated that the Army Regulation is under revision and will specify that the PSAB will provide any documents it obtains to the subject and allow a period of time for response. With respect to our recommendation to review and revise policy regarding coordination between the personnel security and human capital offices, DOD concurred, stating that the Office of the Under Secretary of Defense for Personnel and Readiness, with support from the Office of the Under Secretary of Defense for Intelligence, will identify the way forward to review and revise policy and procedures regarding coordination between the personnel security and human capital offices as appropriate. Finally, regarding our recommendation to review and analyze the discrepancies in the total number of employees and the number of employees eligible to access classified information, DOD concurred, commenting that within 30 days of the release of the final report, the Office of the Under Secretary of Defense for Intelligence will convene a meeting of action officers and analysts to identify strategies for reviewing, analyzing, and resolving the discrepancies in the total number of employees and the number of employees eligible to access classified information. DOD partially concurred with our draft recommendation for the DOD General Counsel to resolve the disagreement about the legal authority to consolidate the PSABs and address any other obstacles to consolidation, and to implement the Secretary of Defense's direction to consolidate DOD's PSABs if there are no legal or other impediments to consolidation. DOD agreed to review legal and other impediments to consolidation, and stated that the DOD Office of General Counsel will address any unresolved disagreements about legal authority for consolidation of the PSABs. DOD further commented that the DOD Office of General Counsel will work closely with the Office of the Under Secretary of Defense for Intelligence to address other issues concerning consolidation of the PSABs. However, DOD commented that some DOD components disagreed with PSAB consolidation. Specifically, DOD stated that of the eleven components that provided responses to the draft report, eight concurred or had no issues or comments, while the remaining three components noted that the PSABs should remain at the component level and not be consolidated. One of these three components also commented that the perceived efficiencies from consolidation described in our report should be validated and that all models for consolidation should be evaluated before a decision is made that would consolidate the PSABs. DOD's comments reflect internal disagreement, which corroborates our finding that there is disagreement within DOD on the legal authority, risks, and benefits of consolidating the department's multiple appeals boards. As we also note in our report, the Secretary of Defense has already directed this consolidation.
However, in light of statements from some DOD officials that DOD needs to study the implications of moving to a consolidated appeal board to make an informed decision, we clarified our recommendation to clearly separate the two actions to be taken by the DOD General Counsel: first to resolve the disagreement about the legal authority for consolidation and address other obstacles, and second to take steps to implement the consolidation if there are no legal impediments and the other obstacles to consolidation can be addressed. We believe this language addresses the need for DOD to fully consider and resolve the components’ concerns about consolidation. In its written comments, ODNI concurred with our two recommendations directed to it, for ODNI to develop performance measures and direct the collection of related revocation and appeals information, and to review whether the existing security clearance revocation process is the most efficient and effective approach. ODNI stated it established the Security Executive Agent National Assessment Program in April 2014 to conduct oversight of personnel security processes across the Executive Branch. ODNI said that this program includes gathering and analyzing data to establish standard processes as appropriate and developing performance measures against those standards. ODNI further commented that DHS and DOD have implemented revocation processes in different ways, which warrant additional ODNI oversight of agency revocation policies. DOD also concurred with our recommendation directed to ODNI regarding development of performance measures and collection of related revocation and appeals information, stating that the Office of the Under Secretary of Defense for Intelligence would ensure that ODNI receives a copy and is made aware of this recommendation. We are sending copies of this report to appropriate congressional committees, the Secretaries of Homeland Security and Defense, and the DNI. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report assesses the policies and practices that the Department of Homeland Security (DHS) and Department of Defense (DOD) use when revoking personnel security clearances. The scope of our work focused on the revocation of personnel security clearances for federal civilian employees and military personnel within DHS and DOD, as well as federal government contractors. Known intelligence community military and civilian personnel and contractors were excluded from our scope, because they follow different processes and guidance than other DOD personnel. Table 5 provides a complete list of the agencies we contacted for our review. To examine the extent to which DHS and DOD track data to oversee their revocation processes, and what these data show, we analyzed relevant executive orders and DHS and DOD personnel security clearance revocation policies to identify the extent to which they are required to maintain or report data and documentation on their security clearance revocation and appeals processes. 
We compared those requirements to leading practices, and assessed the extent that the policy requirements comply with these leading practices. In addition, we requested and obtained DHS and DOD personnel security clearance revocation and appeal data. Revocation data for DHS were provided by DHS's Office of the Chief Security Officer using its system for managing and standardizing personnel security processes and data, the Integrated Security Management System (ISMS). Personnel security records maintained in ISMS include suitability and security clearance investigations, which contain information related to background checks, investigations, and access determinations. The reported DHS security clearance revocation and appeals data include DHS military personnel and federal civilian employees within DHS Headquarters and the DHS operational components. DHS revocation cases for its contractor employees are processed by DOD and were not included in the DHS data. Although we requested data from fiscal years 2009 through 2013, DHS officials from the Office of the Chief Security Officer could only provide revocation data for fiscal years 2011 through 2013, because not all of the DHS operational components had been using ISMS to manage personnel and administrative security case records until recently. DHS Headquarters migrated data from its legacy system into ISMS and began using it in May 2008; Federal Emergency Management Agency migrated to ISMS in May 2009; U.S. Customs and Border Protection migrated in October 2009; U.S. Immigration and Customs Enforcement migrated in December 2009; U.S. Citizenship and Immigration Services migrated in December 2009; U.S. Coast Guard migrated in July 2011; Transportation Security Administration migrated in December 2012; and U.S. Secret Service migrated in May 2013. To provide the revocation data we requested regarding the number of revocations and the reasons for the revocation under the adjudicative guidelines, DHS queried ISMS and then validated those results with each of its operational components. The components made changes to the ISMS data when they determined that the data entered into ISMS did not track with what they had tracked elsewhere. DHS officials said that the differences were likely based on data entry and system use issues. The total number of revocation cases and the total number of revocation cases that went to the Security Appeals Board represent DHS military and civilian employees' revocation cases that were closed in that particular fiscal year. We found that the total number of cases where a revocation proceeding was initiated could be higher, because ISMS does not track cases where a person separated from the agency before a final decision was made on a proposal to revoke a personnel security clearance. In addition, the total number of military and civilian employees eligible to access classified information represents a current snapshot in time, as ISMS does not track historical security clearance numbers. To corroborate the accuracy of the ISMS total number of DHS employees eligible to access classified information at each component, we compared this information to the total number of employees at six DHS components (U.S. Coast Guard, Transportation Security Administration, Federal Emergency Management Agency, U.S. Immigration and Customs Enforcement, U.S. Citizenship and Immigration Services, and U.S. Customs and Border Protection).
We found that in all six components, the total number of employees was greater than the number of employees eligible to access classified information, as not all DHS employees need eligibility for access. Furthermore, while we requested DHS data on the number of employees that filed an initial appeal and the average amount of time it takes to complete a revocation case, these data were not available. Officials from the Office of the Chief Security Officer told us that ISMS has a module, called the Appeals Case, that could provide information about the number of initial appeals, but because use of this module is not required, only a few DHS components use it. Furthermore, officials from the Office of the Chief Security Officer told us that ISMS cannot track case timeliness data as a whole across the DHS components, because each appeal level would be saved as a different appeals case module entry, but the officials explained that they could determine this information for a particular case by looking at the individual ISMS records. We analyzed the DHS revocation data and supporting documentation, and discussed their reliability with DHS officials, and found the data to be sufficiently reliable to report on the number of employees whose personnel security clearance was revoked in DHS, and the reasons for the revocations. Revocation data for DOD military and federal civilian personnel and for industry or contractor personnel government-wide were provided by the Defense Manpower Data Center (DMDC) from DOD's Joint Personnel Adjudication System (JPAS), which is DOD's system of record for personnel security management, used to record and document personnel security actions. DOD security clearance revocation and appeals data include military personnel and federal civilian employees within the military services (Army, Navy, Air Force, and Marine Corps) and the defense agencies (referred to as Washington Headquarters Services). Data for government-wide contractors (also referred to as industry personnel) are collectively grouped as one entity because DMDC officials informed us that data on contractor personnel do not indicate the agency with which an individual's contract is associated. We met with officials from the Office of the Under Secretary of Defense for Intelligence and DMDC (the administrator of JPAS) to discuss the approach for our data request and to get their feedback. We requested JPAS data extracts showing the total number of persons eligible to access classified information, the number of security clearance revocations, the reasons for a revocation decision, the number of appeals, the number of favorable and unfavorable appeal decisions, the type of appeal selected by the individual (personal appearance or in writing), and the time values at different intervals of the revocation and appeal process. We requested that all of these data be broken out by each DOD component for DOD military personnel, DOD federal civilian employees, and government-wide contractor employees for fiscal years 2009 through 2013. Furthermore, while we requested DOD data on the number of employees that filed an appeal, appeal outcomes, and the average amount of time it takes to complete a revocation case, these data were not available. Although there are fields in JPAS where this information can be recorded, we found that these fields were not consistently being used in JPAS.
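The fill-rate idea that comes up in the next paragraph is simple to compute: for each field in a data extract, the share of records with a non-blank value. The following is a minimal sketch in Python using pandas; the field names and records are hypothetical illustrations, since JPAS's actual schema is not described in this report.

```python
# Minimal sketch: per-field fill rates for a personnel security data extract,
# in the spirit of the fill-rate worksheet discussed below.
# All field names and records here are hypothetical.
import pandas as pd

def field_fill_rates(df: pd.DataFrame) -> pd.Series:
    """Return the percentage of non-missing values in each column."""
    return (df.notna().mean() * 100).round(1)

extract = pd.DataFrame({
    "case_id": ["A1", "A2", "A3", "A4"],
    "revocation_reason": ["Guideline F", "Guideline H", None, "Guideline E"],
    "appeal_filed": [True, False, None, None],
    "appeal_outcome": [None, None, None, "Favorable"],
})

print(field_fill_rates(extract))
# case_id              100.0
# revocation_reason     75.0
# appeal_filed          50.0
# appeal_outcome        25.0
# dtype: float64
```

On this toy extract, the appeal fields fall at or below a 50 percent fill rate, the kind of threshold at which, as described below, DMDC would ordinarily decline to report a field.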
DMDC officials initially provided mock-ups of the data request that excluded these data fields or left them blank. When we asked about this, DMDC officials stated that it is their agency's practice not to provide information from data fields with less than 50 percent fill rates. We asked DMDC to provide all the requested data along with an additional worksheet showing the data fill rate percentage, so we could report on the extent that these data fields had not been used. To corroborate the accuracy of the JPAS revocation data for DOD military and civilian employees, we asked DOD officials from the DOD Consolidated Adjudications Facility (CAF) to provide us with the number of revocations processed by their adjudicators for the military departments' military and civilian employees for fiscal years 2009 through 2013. We compared the JPAS data received from DMDC with the data provided by the DOD CAF and found that the data did not match. We determined that the discrepancy with the DOD CAF data was likely caused by a difference in the periods and populations included in the counts. To corroborate the accuracy of the JPAS revocation data for contractor personnel, and data regarding the personal appearance for DOD military and civilian employees, we asked DOD officials from the Defense Office of Hearings and Appeals (DOHA) to provide information on the number of contractor hearings and appeals performed, and their outcomes, as well as the number of personal appearances for DOD military and civilian employees and their outcomes, for fiscal years 2009 through 2013. We compared the JPAS data received from DMDC with the data provided by DOHA and found that the data did not match. We determined that the discrepancy with the contractor data from DOHA was a result of the inclusion of clearance denials, which the DOHA database was unable to separate from clearance revocations. Security clearance denials were not part of the scope of this review. We analyzed the DOD revocation data and supporting documentation, and discussed their reliability with DOD officials, and found the data to be sufficiently reliable to report on the number of military personnel and federal civilian employees and contractors whose personnel security clearances were revoked in DOD, and the reasons for the revocations. To examine the extent to which DHS and DOD consistently implemented government-wide requirements in their revocation processes, we obtained and reviewed the policies and procedures DHS, DOD, and their components use when revoking an employee's access to classified information, interviewed DHS and DOD officials about whether these processes are being uniformly applied within each department and across the departments, and discussed the officials' suggestions for improving the revocation process. In addition, we reviewed Executive Orders 12968 and 10865, which establish the overall process for revoking an employee's security clearance, to identify agency and employee rights and responsibilities during the clearance revocation process. We then analyzed DHS and DOD template or redacted sample communication letters sent to employees during the revocation and appeal process by each component within DHS and DOD to determine whether they provide employees notice of their security clearance revocation rights and responsibilities under Executive Orders 12968 and 10865.
Two analysts independently reviewed and assessed the DHS and DOD communication letters to determine whether they contain the 14 key rights and responsibilities for military, civilian, and contractor employees provided by Executive Order 12968 and the three additional rights for contractor employees provided by Executive Order 10865. For DHS military and civilian employees, we reviewed the Notice of Determination, the Notice of Review, and the Security Appeals Board decision letter. For DOD military and civilian employees, we reviewed the Statement of Reasons, the Letter of Revocation, and the PSAB decision letter. For contractor employees government-wide, we reviewed a Statement of Reasons, the administrative judge's decision letter, and the DOHA Appeal Board decision letter. The analysts then compared their results to identify any disagreements and reached agreement on all items through discussion. We reviewed processes for civilian and military personnel within DHS and DOD, excluding the intelligence community, and for industry or contractor personnel that are part of the 23 executive-branch agencies that follow the DOD guidance and process. Additionally, we interviewed officials from DHS, DOD, and their respective components to discuss (1) how they are following their policies, (2) how employee rights and responsibilities factor into the security clearance revocation process, and (3) how and under what circumstances they communicate with employees who are subject to the security clearance revocation process. When we identified discrepancies in following policies or communicating with employees, we contacted appropriate DHS and DOD officials to determine the reasons for such discrepancies and their potential effect. We also met with DHS, DOD, and ODNI officials to discuss the oversight they provide over executive-branch agencies' personnel security revocation processes, their suggestions for building quality into the revocation process, and whether there are currently any metrics or reporting requirements related to personnel security clearance revocations. To examine the extent to which DHS's and DOD's respective human capital and personnel security clearance revocation policies enable the departments to determine the employment status of their federal civilian and military employees subject to revocation in a consistent manner, we analyzed department-level and component-level DHS and DOD human capital guidance (specifically, their respective guidance for misconduct, discipline, and adverse actions, such as a table of penalties) and personnel security guidance. We assessed the extent that this guidance could be used to systematically determine what actions the agencies should take regarding the employment status of individuals subject to the clearance revocation and appeals processes, and what employment actions, such as reassignment or separation, are typically taken if an employee's personnel security clearance is revoked. In addition, we assessed the extent to which the different sources of guidance are linked or are cross-referenced, and assessed what communication is required to take place between an agency's personnel security office and human capital office during the course of a clearance revocation proceeding.
We also interviewed human capital officials at DHS, DOD, and their components to obtain their perspectives on the extent that DHS's and DOD's human capital practices regarding the employment status of individuals subject to revocation are linked to and aligned with personnel security policies related to security clearance revocation, and the extent that there is communication between an agency's personnel security office and a human capital office during the course of a clearance revocation proceeding. In addition, we analyzed DHS's, DOD's, and the components' guidance to determine whether the departments required tracking of any data regarding the employment outcomes of individuals whose personnel security clearances were revoked. We also discussed with DHS and DOD officials what data regarding employment outcomes were available at the department and component level. For this objective, within DHS, we focused on the three DHS components that had the largest number of personnel security clearance revocations from fiscal years 2011 through 2013, which were the U.S. Coast Guard, U.S. Immigration and Customs Enforcement, and U.S. Secret Service. Within DOD, our review included the headquarters-level elements of the Departments of the Army, the Navy, and the Air Force; the Marine Corps; and the Washington Headquarters Services. Contractor personnel were not included in the scope of this objective, as the human capital policies applicable to contractors would be those of their private-sector employers. To assess whether DOD's personnel security management system accurately reports the total number of DOD employees eligible for access to classified information, we compared the total number of DOD employees eligible for access to classified information reported by DOD's personnel security management system to the total number of DOD employees in each component. To corroborate the accuracy of the JPAS total number of military and federal civilian employees eligible to access classified information, we compared this information with total military personnel end strength and civilian personnel full-time equivalents from the Under Secretary of Defense (Comptroller)'s National Defense Budget Estimates (Green Book). We assumed that the total number of military and civilian employees in each component should be higher than the total number of military and civilian employees who were eligible to access classified information, because not all DOD employees should be required to have clearance eligibility. However, we found that the total number of military and civilian employees eligible to access classified information in fiscal year 2013 as reported by JPAS was higher than the total number of military and civilian employees listed in the fiscal year 2013 military personnel end strength and civilian personnel full-time equivalent data found in the DOD Green Book. We met with officials from DMDC to discuss the discrepancies. Regarding this disparity, DMDC officials stated that they could not speak for the accuracy of the data derived from the Green Book, since full-time equivalents would undercount the total number of individuals employed, due to issues such as two part-time individuals occupying one full-time position. As a result, they believed that it would not be appropriate to compare these data against the total number of persons eligible to access classified information.
DMDC officials subsequently agreed to provide us with counts for the total numbers of DOD active-duty and reserve military personnel and federal civilian employees for fiscal year 2013. To determine the total number of DOD active-duty military personnel who were employed at any time in each active component during fiscal year 2013, data were taken from the Automated Extract of Active Duty Military Personnel Records. DMDC calculated the total number of active-duty military personnel by adding the totals from all 12 monthly files for fiscal year 2013 that were counted and reported as part of official active component strength. After combining the 12 files, duplicate personnel were dropped based on Social Security number and service. This methodology could potentially double-count individuals if someone transferred from one active service to another active service (e.g., if an individual transferred from active duty in the Army to active duty in the Navy). To determine the total number of DOD reserve personnel who were employed at any time in each component during fiscal year 2013, data for reserve personnel were taken from the Reserve Components Common Personnel Data System. Reserve personnel data include all Reserve categories in the Reserve and National Guard (Ready Reserve, Standby Reserve, and Retired Reserve). DMDC calculated the total number of reserve personnel by adding the totals of all members of the reserve components from all 12 monthly files for fiscal year 2013. After combining the 12 files, duplicate personnel were dropped based on Social Security number and service. This count excludes reserve personnel who were counted within the active end strengths of the components, generally those who serve on active duty for more than 180 days. To determine the total number of DOD federal civilian employees who were employed at any time in each component during fiscal year 2013, DMDC used monthly civilian personnel files, which include full-time permanent and non-full-time permanent employees. After combining the 12 files, duplicate personnel were dropped based on Social Security number and service. This methodology could potentially double-count individuals if someone transferred from one agency to another agency (e.g., if an individual transferred from an Army civilian position to a Navy civilian position). Using these total employee counts, we still found that the number of DOD employees who were eligible to access classified information in five components exceeded the actual number of DOD employees in those components. Regarding this disparity, DMDC officials stated that the completeness and accuracy of JPAS data depend on the users who enter the data. They further stated that information in JPAS may not reflect the loss of personnel in the different DOD agencies (due to changes such as retirements, employee job transfers, and deaths), because the department's personnel centers can only send in separation dates for their personnel for a limited period and the personnel centers may not enter or correct an employee's status during this period. As a result, we did not find the JPAS data on the number of current military personnel and federal civilian employees and contractors who are eligible to access classified information to be reliable. We conducted this performance audit from April 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Margaret A. Best (Assistant Director), Renee S. Brown, Grace Coleman, Sara Cradic, Randy DeLeon, Cynthia Grant, Mary Jo LaCasse, Amie Lesser, David E. Moser, Kelly Rubin, and Michael Willems made major contributions to this report. Personnel Security Clearances: Actions Needed to Ensure Quality of Background Investigations and Resulting Decisions. GAO-14-138T. Washington, D.C.: February 11, 2014. Personnel Security Clearances: Actions Needed to Help Ensure Correct Designations of National Security Positions. GAO-14-139T. Washington, D.C.: November 20, 2013. Personnel Security Clearances: Opportunities Exist to Improve Quality Throughout the Process. GAO-14-186T. Washington, D.C.: November 13, 2013. Personnel Security Clearances: Full Development and Implementation of Metrics Needed to Measure Quality of Process. GAO-14-157T. Washington, D.C.: October 31, 2013. Personnel Security Clearances: Further Actions Needed to Improve the Process and Realize Efficiencies. GAO-13-728T. Washington, D.C.: June 20, 2013. Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013. Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. Washington, D.C.: July 12, 2012. Personnel Security Clearances: Continuing Leadership and Attention Can Enhance Momentum Gained from Reform Effort. GAO-12-815T. Washington, D.C.: June 21, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012. GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010. Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010. DOD Personnel Clearances: Preliminary Observations on DOD’s Progress on Addressing Timeliness and Quality Issues. GAO-11-185T. Washington, D.C.: November 16, 2010. Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009. Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. 
Washington, D.C.: January 2009. Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD's Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD's Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO's High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004.
Personnel security clearances allow people access to classified information that, through unauthorized disclosure, can cause exceptionally grave damage to U.S. national security. In light of recent events, having a high-quality process to determine whether an individual's eligibility to access classified information should be revoked has become increasingly important. DOD and DHS grant the most clearances in the executive branch, and the Director of National Intelligence is responsible for, among other things, oversight of clearance eligibility determinations. GAO was asked to evaluate revocation processes at DHS and DOD. GAO evaluated the extent to which the agencies (1) track data on these processes; (2) consistently implement government-wide requirements and exercise oversight over these processes; and (3) determine outcomes for employees whose clearances were revoked. During this review, GAO identified possible inaccuracies in DOD's data on eligible personnel with access to classified information and is also reporting on that issue. GAO analyzed agency revocation data; reviewed executive orders, agency guidance, and documents; and interviewed officials from ODNI, DHS, DOD, and their components. The Department of Homeland Security (DHS) and the Department of Defense (DOD) both have systems that track varying levels of detail related to revocations of employees' security clearances. DHS's and DOD's data systems could provide data on the number of and reasons for revocations, but they could not provide some data, such as the number of individuals who received a proposal to revoke their eligibility for access to classified information, which means that the total number of employees affected by the revocation process is unknown. Inconsistent implementation of the requirements in the governing executive orders by DHS, DOD, and some of their components, and limited oversight over the revocation process, have resulted in some employees experiencing different protections and processes than other employees. Specifically, DHS and DOD have implemented the requirements for the revocation process contained in Executive Orders 12968 and 10865 in different ways for different groups of personnel. Although certain differences are permitted or required by the executive orders, GAO found that implementation by some components could potentially be inconsistent with the executive orders in two areas. As a result, some employees may not be provided with certain information upon which a revocation appeal determination is based, and may not be told that they have a right to counsel. These inconsistencies in implementation may exist in part because neither DHS nor DOD has evaluated the quality of its processes or developed department-wide performance measures to assess quality. Similarly, the Office of the Director of National Intelligence (ODNI) has exercised only limited oversight by reviewing policies and procedures within some agencies. ODNI has not established any metrics to measure the quality of the process government-wide and has not reviewed revocation processes across the federal government to determine the extent to which policies and procedures should be uniform. DHS and DOD employees whose clearances were revoked may not have consistent employment outcomes, such as reassignment or termination, because these outcomes are determined by several factors, such as the agency's mission and needs and the manager's discretion.
Further, most components could not readily ascertain employment outcomes of individuals with revoked clearances, because these data are not readily available, and communication between personnel security and human capital offices at the departments varies. GAO's comparison of the total number of DOD employees eligible to access classified information to the total number of DOD employees in fiscal year 2013 suggests that DOD's clearance eligibility totals may be inaccurate. Specifically, GAO found that the number of eligible employees exceeded the total number of employees in five DOD components. DOD officials said this discrepancy could exist because DOD's eligibility database is not consistently updated when an employee separates. As a result, the total number of government employees eligible to access classified information that ODNI reports to Congress likely overstates the number of eligible DOD employees. Inaccurate eligibility data hamper DOD's ability to reduce its number of clearance holders to minimize risk and reduce costs to the government. GAO recommends that DHS, DOD, and the DNI take several actions to improve data quality and oversight related to the personnel security revocation process. DHS, DOD, and ODNI generally agreed with GAO's recommendations.
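The comparison summarized above is essentially a per-component consistency check: a security system's count of clearance-eligible personnel should not exceed the component's actual headcount, because not every employee needs eligibility. The following is a minimal sketch of that check in Python; the component names and counts are illustrative assumptions, not GAO's actual figures.

```python
# Flag components whose clearance-eligibility counts exceed their headcounts,
# a sign that eligibility records were not updated after separations.
# All names and numbers below are hypothetical.
component_headcount = {"Component A": 50_000, "Component B": 120_000}
eligible_count = {"Component A": 61_000, "Component B": 90_000}

for component, total in component_headcount.items():
    eligible = eligible_count[component]
    if eligible > total:
        print(f"{component}: {eligible:,} eligible exceeds {total:,} employees; "
              "records may not reflect separations")
    else:
        print(f"{component}: eligibility count is consistent with headcount")
```

A check like this does not identify which records are stale, but it does flag components where the two counts cannot both be correct, which mirrors how the five problem components surfaced in the comparison described above.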
Crude oil prices are a major determinant of gasoline prices. As figure 1 shows, crude oil and gasoline prices have generally followed a similar path over the past three decades and have risen considerably over the past few years. Also, as is the case for most goods and services, changes in the demand for gasoline relative to changes in supply affect the price that consumers pay. In other words, if the demand for gasoline increases faster than the ability to supply it, the price of gasoline will most likely increase. In 2006, the United States consumed an average of 387 million gallons of gasoline per day. This consumption is 59 percent more than the 1970 average per-day consumption of 243 million gallons, an average increase of about 1.6 percent per year over the last 36 years. As we have shown in a previous GAO report, most of the increased U.S. gasoline consumption over the last two decades has been due to consumer preference for larger, less fuel-efficient vehicles such as vans, pickups, and SUVs, which have become a growing part of the automotive fleet. Refining capacity and utilization rates also play a role in determining gasoline prices. Refinery capacity in the United States has not expanded at the same pace as demand for gasoline and other petroleum products in recent years. U.S. refineries have been running at very high rates of utilization, averaging 92 percent since the 1990s, compared with an average of about 78 percent in the 1980s. Figure 2 shows that since 1970 utilization has been approaching the limits of U.S. refining capacity. Although the average capacity of existing refineries has increased, refiners have limited ability to increase production as demand increases. While the lack of spare refinery capacity may contribute to higher refinery margins, it also increases the vulnerability of gasoline markets to short-term supply disruptions that could result in price spikes for consumers at the pump. Although imported gasoline could mitigate short-term disruptions in domestic supply, imported gasoline comes from farther away than domestic supply, so when supply disruptions occur in the United States it might take longer to obtain replacement gasoline than it would if the United States had spare refining capacity. This could mean that gasoline prices remain high until the imported supplies can reach the market. Further, gasoline inventories maintained by refiners or marketers of gasoline can also have an impact on prices. Like a number of other industries, the petroleum industry has adopted so-called "just-in-time" delivery processes to reduce costs, leading to a downward trend in the level of gasoline inventories in the United States. For example, in the early 1980s U.S. oil companies held stocks of gasoline equal to about 40 days of average U.S. consumption, while by 2006 these stocks had decreased to 23 days of consumption. While lower costs of holding inventories may reduce gasoline prices, lower levels of inventories may also cause prices to be more volatile, because when a supply disruption occurs, there are fewer stocks of readily available gasoline to draw from, putting upward pressure on prices. Regulatory factors play a role as well.
For example, in order to meet national air quality standards under the Clean Air Act, as amended, many states have adopted the use of special gasoline blends, so-called "boutique fuels." As we reported in a recent study, there is a general consensus that higher costs associated with supplying special gasoline blends contribute to higher gasoline prices, either because of more frequent or more severe supply disruptions, or because higher costs are likely passed on, at least in part, to consumers. Furthermore, changes in regulatory standards generally make it difficult for firms to arbitrage across markets because gasoline produced according to one set of specifications may not meet another area's specifications. Finally, market consolidation in the U.S. petroleum industry through mergers can influence the prices of gasoline. Mergers raise concerns about potential anticompetitive effects because they could result in greater market power for the merged companies, either through unilateral actions of the merged companies or coordinated interaction with other companies, potentially allowing them to increase and maintain prices above competitive levels. On the other hand, mergers could also yield cost savings and efficiency gains, which could be passed on to consumers through lower prices. Ultimately, the impact depends on whether the market power or the efficiency effects dominate. During the 1990s, the U.S. petroleum industry experienced a wave of mergers, acquisitions, and joint ventures, several of them between large oil companies that had previously competed with each other for the sale of petroleum products. More than 2,600 merger transactions occurred from 1991 to 2000 involving all segments of the U.S. petroleum industry. These mergers contributed to increases in market concentration in the refining and marketing segments of the U.S. petroleum industry. Econometric modeling we performed of eight mergers involving major integrated oil companies that occurred in the 1990s showed that the majority resulted in small but significant increases in wholesale gasoline prices. The effects of some of the mergers were inconclusive, especially for boutique fuels sold in the East Coast and Gulf Coast regions and in California. While we have not performed modeling on mergers that occurred since 2000, and thus cannot comment on any potential effect on wholesale gasoline prices at this time, these mergers would further increase market concentration nationwide since there are now fewer oil companies. Some of the mergers involved large partially or fully vertically integrated companies that previously competed with each other. For example, as shown in figure 3, in 1998 British Petroleum (BP) and Amoco merged to form BPAmoco, which later merged with ARCO, and in 1999 Exxon, the largest U.S. oil company, merged with Mobil, the second largest. Since 2000, we found that at least 8 large mergers have occurred. Some of these mergers have involved major integrated oil companies, such as the Chevron-Texaco merger, announced in 2000, to form ChevronTexaco, which went on to acquire Unocal in 2005. In addition, Phillips and Tosco announced a merger in 2001 and the resulting company, Phillips, then merged with Conoco to become ConocoPhillips. To illustrate the extent of consolidation in the U.S. oil industry, figure 3 shows that there were 12 integrated and 9 non-integrated oil companies, but these companies have dwindled to only 8. Independent oil companies have also been involved in mergers.
For example, Devon Energy and Ocean Energy, two independent oil producers, announced a merger in 2003 to become the largest independent oil and gas producer in the United States at that time. Petroleum industry officials and experts we contacted cited several reasons for the industry's wave of mergers since the 1990s, including increasing growth, diversifying assets, and reducing costs. Economic literature indicates that enhancing market power is also sometimes a motive for mergers, which could reduce competition and lead to higher prices. Ultimately, these reasons mostly relate to companies' desire to maximize profits or stock values.

Notes to figure 3: (a) Marathon and Ashland formed a joint venture called Marathon Ashland Petroleum that was primarily owned by Marathon Oil (62 percent), which was a wholly owned affiliate of USX Corporation at the time the joint venture was created. Ashland sold its 38 percent ownership of the joint venture to Marathon on June 30, 2005. (b) Equilon Enterprises was a 56/44 joint venture between Shell Oil and Texaco, respectively, that sold motor gasoline and petroleum products under both the Shell and Texaco brand names. Although not depicted in the graphic, Motiva Enterprises was a joint venture between Star Enterprise and Shell Oil that sold gasoline and petroleum products under both the Shell and Texaco brand names. Motiva is now a 50/50 joint venture between Saudi Refining and Shell Oil after Texaco sold its ownership to its partners as a precondition of the U.S. Federal Trade Commission approving the merger of Chevron and Texaco. (c) El Paso Corporation sold its 16,700-barrels-per-day Chickasaw, Alabama, refinery to Trigeant EP Ltd. in August 2003. El Paso's remaining refineries were sold to publicly traded companies at the times indicated (Sun Company on 01/04 and Valero on 03/04). (d) Clark Refining divested its marketing operations (including the "Clark" brand name) and renamed itself Premcor in July 1999. (e) Williams Companies sold its Memphis, Tennessee, 180,000-barrels-per-day refinery to Premcor in March 2003.

When market concentration increases, the market is less competitive and it is more likely that firms can exert control over prices. DOJ and FTC have jointly issued guidelines to measure market concentration. The scale is divided into three separate categories: unconcentrated, moderately concentrated, and highly concentrated (a minimal worked computation of this index appears below, following the discussion of our modeling results). The index of market concentration in refining increased all over the country during the 1990s, and changed from moderately to highly concentrated on the East Coast. In wholesale gasoline markets, market concentration increased throughout the United States between 1994 and 2002. Specifically, 46 states and the District of Columbia had moderately or highly concentrated markets by 2002, compared to 27 in 1994. Evidence from various sources indicates that, in addition to increasing market concentration, mergers also contributed to changes in other aspects of market structure in the U.S. petroleum industry that affect competition: specifically, vertical integration and barriers to entry. However, we could not quantify the extent of these changes because of a lack of relevant data and lack of consensus on how to appropriately measure them. Vertical integration can conceptually have both pro- and anticompetitive effects. Based on anecdotal evidence and economic analyses by some industry experts, we determined that a number of mergers that have occurred since the 1990s have led to greater vertical integration in the U.S. petroleum industry, especially in the refining and marketing segment.
For example, we identified eight mergers that occurred between 1995 and 2001 that might have enhanced the degree of vertical integration, particularly in the downstream segment. Furthermore, mergers involving integrated companies are likely to result in increased vertical integration because FTC review, which is based on horizontal merger guidelines, does not focus on vertical integration. Concerning barriers to entry, our interviews with petroleum industry officials and experts at the time we did our study provided evidence that mergers had some impact on barriers to entry in the U.S. petroleum industry. Barriers to entry could have implications for market competition because companies that operate in concentrated industries with high barriers to entry are more likely to possess market power. Industry officials pointed out that large capital requirements and environmental regulations constitute barriers for potential new entrants into the U.S. refining business. For example, the officials indicated that a typical refinery could cost billions of dollars to build and that it may be difficult to obtain the necessary permits from the relevant state or local authorities. Furthermore, the FTC has recently indicated that barriers to entry in the form of high sunk costs and environmental regulations have become more formidable since the 1980s, as refineries have become more capital-intensive and the regulations more restrictive. According to FTC, no new refinery still in operation has been built in the U.S. since 1976. To estimate the effect of mergers on wholesale gasoline prices, we performed econometric modeling on eight mergers that occurred during the 1990s: Ultramar Diamond Shamrock (UDS)-Total, Tosco-Unocal, Marathon-Ashland, Shell-Texaco I (Equilon), Shell-Texaco II (Motiva), BP-Amoco, Exxon-Mobil, and Marathon Ashland Petroleum (MAP)-UDS. Of the seven mergers that we modeled for conventional gasoline, five led to increased prices, especially the MAP-UDS and Exxon-Mobil mergers, where the increases generally exceeded 2 cents per gallon, on average. Of the four mergers that we modeled for reformulated gasoline, two, Exxon-Mobil and Marathon-Ashland, led to increased prices of about 1 cent per gallon, on average. In contrast, the Shell-Texaco II (Motiva) merger led to price decreases of less than one-half cent per gallon, on average, for branded gasoline only. We also modeled two mergers, Tosco-Unocal and Shell-Texaco I (Equilon), for gasoline used in California, known as California Air Resources Board (CARB) gasoline; of these, only the Tosco-Unocal merger led to price increases. The increases were for branded gasoline only and were about 7 cents per gallon, on average. Our analysis shows that wholesale gasoline prices were also affected by other factors included in the econometric models, including gasoline inventories relative to demand, supply disruptions in some parts of the Midwest and the West Coast, and refinery capacity utilization rates. Our past work has shown that the price of crude oil is a major determinant of gasoline prices, along with changes in demand for gasoline. Limited refinery capacity and the lack of spare capacity due to high refinery capacity utilization rates, decreasing gasoline inventory levels, and the high cost of, and changes in, regulatory standards also play important roles. In addition, merger activity can influence gasoline prices.
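The concentration measure in the DOJ and FTC horizontal merger guidelines referenced earlier is the Herfindahl-Hirschman Index (HHI), the sum of the squared market shares of the firms in a market. The following is a minimal sketch of the computation in Python, using the thresholds from the guidelines in effect at the time of this testimony (below 1,000 unconcentrated; 1,000 to 1,800 moderately concentrated; above 1,800 highly concentrated); the market shares shown are illustrative assumptions, not actual industry data.

```python
# Minimal sketch of the Herfindahl-Hirschman Index (HHI) and the
# concentration categories used in the DOJ/FTC merger guidelines of the
# period. Market shares are hypothetical.

def hhi(shares_percent: list[float]) -> float:
    """Sum of squared market shares, with shares expressed in percent."""
    return sum(s ** 2 for s in shares_percent)

def category(index: float) -> str:
    if index < 1000:
        return "unconcentrated"
    if index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# Ten equal firms: HHI = 10 * 10^2 = 1,000.
before = [10.0] * 10
# Two of those firms merge: HHI rises to 20^2 + 8 * 10^2 = 1,200.
after = [20.0] + [10.0] * 8

for label, shares in (("before merger", before), ("after merger", after)):
    index = hhi(shares)
    print(f"{label}: HHI = {index:,.0f} ({category(index)})")
```

The example also shows why mergers mechanically raise the index: combining two firms replaces two squared terms with a single larger one, so concentration can rise, and even cross a category boundary, with no change among the remaining firms.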
During the 1990s, mergers decreased the number of oil companies and refiners, and our findings suggest that these changes in the state of competition in the industry caused wholesale prices to rise. The impact of more recent mergers is unknown. While we have not performed modeling on mergers that occurred since 2000, and thus cannot comment on any potential effect on wholesale gasoline prices at this time, these mergers would further increase market concentration nationwide since there are now fewer oil companies. We are currently studying the effects of the mergers that have occurred since 2000 on gasoline prices as a follow-up to our previous report on mergers in the 1990s. Also, we are working on a separate study on issues related to petroleum inventories, refining, and fuel prices. With this and other related work, we will continue to provide Congress the information needed to make informed decisions on gasoline prices, which have far-reaching effects on our economy and our way of life. Our analysis of mergers during the 1990s differs from the approach taken by the FTC in reviewing potential mergers because our analysis was retrospective in nature—looking at actual prices and estimating the impacts of individual mergers on those prices—while FTC's review necessarily takes place before a merger is consummated and is therefore prospective. Going forward, we believe that, in light of our findings, both prospective and retrospective analyses of the effects of mergers on gasoline prices are necessary to ensure that consumers are protected from anticompetitive forces. In addition, we welcome this hearing as an opportunity for continuing public scrutiny and discourse on this and the other issues that we have raised here today. We encourage future independent analysis by the FTC or other parties, and see value in oversight of the regulatory agencies in carrying out their responsibilities. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the Committee may have at this time. For further information about this testimony, please contact me at (202) 512-2642 ([email protected]) or Mark Gaffigan at (202) 512-3841 ([email protected]). Godwin Agbara, John Karikari, Robert Marek, and Mark Metcalfe made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Few issues generate more attention and anxiety among American consumers than the price of gasoline. The most recent upsurge in prices is no exception. According to data from the Energy Information Administration (EIA), the average retail price of regular unleaded gasoline in the United States has increased almost every week this year since January 29th and reached an all-time high of $3.21 the week of May 21st. Over this time period, the price has increased by $1.05 per gallon, adding about $23 billion to consumers' total gasoline bill, or about $167 for each passenger car in the United States. Given the importance of gasoline for the nation's economy, it is essential to understand the market for gasoline and the factors that influence gasoline prices. In this context, this testimony addresses the following questions: (1) what key factors affect the prices of gasoline and (2) what effects have mergers had on market concentration and wholesale gasoline prices? The price of crude oil is a major determinant of gasoline prices. However, a number of other factors also affect gasoline prices, including (1) increasing demand for gasoline; (2) refinery capacity in the United States that has not expanded at the same pace as the demand for gasoline; (3) a declining trend in gasoline inventories; and (4) regulatory factors, such as national air quality standards, that have induced some states to switch to special gasoline blends. Petroleum industry consolidation also plays a role in determining gasoline prices. The 1990s saw a wave of merger activity in which over 2,600 mergers occurred in all segments of the U.S. petroleum industry. This wave of mergers contributed to increased market concentration in the U.S. refining and marketing segments. Econometric modeling GAO performed on eight of these mergers showed that, after controlling for other factors including crude oil prices, the majority resulted in higher wholesale gasoline prices--generally between 1 and 7 cents per gallon. While these price increases seem small, they are not trivial--according to FTC's standards for merger review in the petroleum industry, a 1-cent increase is considered significant. Additional mergers occurring since 2000 are expected to increase the level of industry concentration further, and because GAO has not yet performed modeling on these mergers, we cannot comment on any potential price effects at this time. We are currently studying the effects of the mergers that have occurred since 2000 as a follow-up to our previous work on mergers in the 1990s. Also, we are working on a separate study on issues related to petroleum inventories, refining, and fuel prices.
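The consumer-cost figures in the summary above can be cross-checked with simple arithmetic. The sketch below, in Python, uses only the numbers cited in the testimony; the derived car count and per-car gallons are ours and are rough, since the price increase accrued gradually over the period rather than all at once.

price_rise = 1.05   # increase in dollars per gallon since late January
added_bill = 23e9   # added dollars on consumers' total gasoline bill
per_car = 167       # added dollars per passenger car

cars = added_bill / per_car     # implied number of passenger cars
gallons = per_car / price_rise  # rough implied gallons purchased per car

print(f"implied passenger cars: {cars / 1e6:.0f} million")      # about 138 million
print(f"rough gallons per car over the period: {gallons:.0f}")  # about 159

The implied count of roughly 138 million passenger cars is broadly consistent with the registered U.S. passenger-car fleet of the period, which suggests the cited figures are internally consistent.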
Although VA is required by law to assist claimants in obtaining the evidence necessary to substantiate a claim for benefits, accreditation helps ensure that claimants have access to qualified representation. By law, only individuals accredited by VA can represent claimants in the VA claims process. Table 1 below describes the three types of individuals that VA recognizes as accredited representatives. To implement accreditation, the law and VA regulations set forth a number of requirements representatives must meet. For example, representatives must: Be of good character: Although what constitutes good character is not specifically defined, VA regulations provide, with respect to agents and attorneys, that evidence showing a lack of good character and reputation may include such things as: conviction of a felony or other crimes related to fraud, theft, or deceit; or suspension or disbarment from a court, bar, or government agency on ethical grounds. In addition, all representatives are required to be truthful in their dealings with claimants and VA. Provide competent representation: Representatives must provide competent representation, which includes the knowledge, skills, thoroughness, and preparation necessary for representation, as well as an understanding of the issues of fact and law relevant to the claim. Provide prompt representation: Representatives must act with reasonable diligence and promptness in representing claimants. This includes responding promptly to VA requests for information or assisting a claimant in responding promptly to VA requests for information. As of May 2013, VA had on its rolls approximately 20,000 individuals who are accredited to represent claimants. Specifically, VA had accredited 8,207 VSO representatives, 11,568 attorneys, and 345 claim agents. Available data demonstrate the growing role and importance of accreditation. Since current program rules were adopted in mid-2008, the number of applications VA received has grown from 2,696 in 2008 to over 5,000 in each year since. Additionally, almost 80 percent of claims that were open as of November 2012 used the services of a representative, with VSOs accounting for the bulk of those claims (see fig. 1). VA’s Office of General Counsel (OGC) oversees the accreditation program. To this end, OGC staff review accreditation applications and make approval decisions, monitor whether accredited representatives meet ongoing program requirements, and investigate issues and complaints that could lead to a representative having his or her accreditation cancelled or suspended. Table 2 describes the initial and ongoing requirements for these representatives. Additionally, OGC staff receive and review fee agreements—contracts between claimants and representatives outlining how claimants will be charged for services. Within VA, the Veterans Benefits Administration (VBA) also plays a limited role in enforcing accreditation rules—checking that individuals are accredited when claimants designate them as their representative. In cases where an individual is not accredited, VA policy is to inform the would-be representative of accreditation program rules and prohibit the individual from serving as the representative for that claim. VA rules also govern the fees that each type of representative can charge claimants. VSO representatives are required to provide their services free of charge. 
Attorneys and claim agents may not charge claimants for services related to the initial preparation and filing of their claims, but can charge fees for any services rendered after VA makes an initial decision on a claim and the claimant initiates an appeal of VA's decision. For services rendered after an initial decision is made and an appeal is initiated, VA rules generally allow attorneys and agents to charge a reasonable fee based on retroactive benefits that are awarded. Fees that do not exceed 20 percent of any retroactive benefits are presumed to be reasonable; for example, a $10,000 retroactive award would support a presumptively reasonable fee of up to $2,000. VA's OGC may cancel accreditation if the representative fails to meet any of the requirements for accreditation, knowingly presents a fraudulent or frivolous claim, or demands or accepts unlawful compensation. OGC may also suspend and reinstate the individual if he or she meets conditions for reinstatement. Additionally, a VSO can request that VA suspend or cancel accreditation for one of its representatives based on misconduct or lack of competence. OGC is required to inform representatives of the nature of their alleged violation, and representatives may request a hearing on the matter. An OGC decision can be appealed to the Board of Veterans' Appeals. Our independent background checks identified issues, such as bankruptcies or liens, for several of the individuals we examined. For four of these individuals, VA received complaints alleging these same individuals charged non-allowable fees for filing claims. Of the 21 individuals we conducted background checks on, 10 had complaints against them on file with VA. The remaining 11 individuals were selected from our sample of 92 attorneys and agents accredited in 2012. We were able to obtain information on 20 of the 21 individuals in our sample. VA officials acknowledged that additional background information, particularly on agents who are also financial planners, would be useful in informing VA's judgment of their character. Subsequent to our May 2013 exit briefing with VA, OGC officials informed us that they recently gained access to VBA's system to conduct background searches and are developing plans to conduct comprehensive background checks on all claim agent applicants and on attorney applicants as necessary. The official who oversees accreditation also stated that VA is considering requiring applicants to supply information on any professional certifications they hold and confirming this information with agencies like the Financial Industry Regulatory Authority, which regulates brokers' activities. Not consistently following up on references: VA may be missing opportunities to obtain additional information about applicants by not consistently following up on character references. While attorneys and agents are required to provide references in their accreditation applications, the official who oversees accreditation told us that VA contacts references only for agents, which we verified in our review of applications. Further, the value of the reference letters VA receives is questionable, as VA did not use a standard set of questions or guidance to obtain specific information that should be included in reference letters, such as requesting information on the agents' criminal or employment history. Several reference letters we reviewed did not provide substantial information on applicants' ability to assist veterans. In one instance, VA had to request additional reference letters for an applicant because two of the applicant's reference letters were from members of the same church group and had identical language.
Additionally, we found several instances where references listed in applications were family members or lived at the same address as the applicant, calling into question the impartiality of the information received. At our May 2013 exit briefing with VA, officials announced that VA has revised and will begin using a standardized letter to references to specifically request information on agents' criminal and employment histories, as well as their interest in serving veterans. Reliance on VSOs and state bars: OGC officials told us that they rely heavily on the judgment of VSOs when deciding whether to accredit their prospective representatives. OGC officials told us that they believe VSOs do a good job of screening their applicants and that it is in the best interests of VSOs to maintain a positive reputation regarding the quality of representation they provide, as VSOs depend on contributions from veterans to fund their operations. That said, VA does not actively review VSO certification plans and therefore cannot know whether there is variability in procedures and standards among organizations. Regarding attorneys, VA generally presumes good character and fitness to represent claimants if they have a state bar membership in good standing. However, our work shows that an attorney's standing with a state bar may not always be a sufficient proxy for good character. In one example, an attorney was in good standing with his state bar, but he had several previous suspensions from the bar and multiple felonies in his criminal record involving theft or misappropriation of property or funds. In this instance, VA chose not to accredit the individual based on his self-reported criminal record, but it is not clear what the outcome would have been had VA relied on his bar membership status in the absence of such self-reported information. Limited ongoing monitoring: Once representatives become accredited, VA does little to ensure that they retain good character. VA requires attorneys and agents to annually certify that they are in good standing with any court, bar, or federal or state agency to which they are admitted to practice or authorized to appear. VA also requires VSOs to recertify their representatives every 5 years. However, VA officials told us that they currently face a backlog in processing these annual certifications and therefore have not consistently monitored whether these re-certifications have occurred. For example, the official who oversees accreditation told us that in one instance, an attorney self-reported in a letter to VA that he was disbarred and that his accreditation should be cancelled. Since VA does not consistently monitor whether attorneys annually certify their standing with the bar, the agency would not have known that this individual was disbarred had he not voluntarily communicated this information. After our May 2013 exit briefing with VA, OGC informed us that it is developing plans to annually audit the certifications of good standing that attorneys and agents file. VA's initial knowledge requirements for attorneys and agents are limited and do not ensure that they are knowledgeable about VA benefits. To become accredited, agents must pass an exam comprising 25 multiple-choice and true-false questions. However, organizations that represent or help train agents told us the exam covers a wide array of subjects concerning veterans' benefits law and procedure without covering any particular topic in depth.
Further, they said that the exam alone is not sufficient to determine whether agents have enough knowledge to represent veterans. For attorneys, VA presumes that any attorney in good standing with the bar is qualified and knowledgeable enough to assist veterans. As such, attorneys are not required to take an initial exam to demonstrate their knowledge of veterans' benefits law. However, officials from two organizations that provide training for accreditation told us that membership with the bar does not guarantee that an individual is knowledgeable about VA benefits law. One attorney noted that it can take years to understand VA benefits issues and provide knowledgeable assistance in this area. In fact, representatives from one VSO told us that attorneys and agents often contact VSOs with questions about representing their claimants. In addition, VA's initial and ongoing training requirements do not ensure accredited attorneys and agents are knowledgeable, and VA does not consistently enforce existing requirements. In addition to requiring that attorneys and agents complete 3 hours of qualifying continuing legal education (CLE) within 12 months as an initial condition of accreditation, VA requires that accredited attorneys and agents complete 3 hours of training every 2 years, and that this training cover certain topics such as representation before VA, claims procedures, basic eligibility for benefits, and appeal rights. Officials from two organizations that provide training for accreditation told us that this amount of training is not sufficient to ensure that attorneys and agents are knowledgeable. Additionally, officials from these organizations told us that VA does not review, or provide guidance on, course content. OGC officials told us that they rely on each state bar association to approve its own training, which can introduce variability across states. Moreover, VA does not consistently ensure that attorneys and agents complete required training. OGC officials told us that individuals who do not certify their training requirements could have their accreditation suspended. Despite this, OGC has fallen behind on its monitoring of this requirement, and it is likely that individuals who should not be accredited continue to assist claimants. After our May 2013 exit briefing with VA, OGC officials informed us that they are developing plans to annually audit the training certifications which attorneys and agents must file with OGC. VA relies on VSOs to train their representatives and ensure that VSO representatives can provide knowledgeable assistance to veterans with relatively little oversight from VA. We spoke to three national VSOs who noted that they provide numerous training opportunities for their representatives, which may include on-the-job training, seminars, and regular conferences. Two VSOs we spoke with also said that they monitor whether their representatives are meeting knowledge requirements. However, the official who oversees accreditation said VA relies on VSOs to ensure their staff have appropriate training and that VA does not review VSO training programs. While VA did not express concerns about VSO representatives meeting knowledge requirements, GAO's standards for internal controls state that information about a program's operations should be communicated to management, in order to determine whether the agency is achieving compliance requirements under relevant laws and regulations.
Absent better oversight of VSO training, VA cannot ensure the knowledge of representatives who represent a majority of claimants. Representatives’ knowledge is critical to meeting another VA requirement—ensuring that they provide prompt representation. Officials at one regional office noted that some representatives are less knowledgeable than others and that they might forget or overlook certain items in a claim. At the same time, they stated that it is VA’s responsibility to review claims to make sure they are complete and to notify claimants when information is missing. However, officials from an organization representing attorneys and agents told us that VA does not consistently follow up with veterans to make sure that their paperwork is complete and there have been instances where mistakes on initial claims resulted in veterans losing the ability to claim benefits that they were entitled to receive. Similarly, the officials told us that a representative who is not knowledgeable enough to use the appropriate language for appealing a decision may result in VA not recognizing the communication as a formal disagreement with VA’s decision, in turn causing the veteran to miss the deadline for appealing their claims case. VA has dedicated only a few staff to administer its accreditation program, which has resulted in limited monitoring efforts and workload backlogs. VA officials told us that approximately four staff positions in OGC are dedicated to accreditation. These staff are responsible for reviewing thousands of applications each year, and ensuring that the approximately 20,000 individuals already accredited meet continuing requirements. Officials told us this level of staffing is insufficient to carry out all these responsibilities and that VA has chosen to prioritize screening initial accreditation applications over monitoring ongoing requirements. Even so, VA has a significant backlog of accreditation applications to review. VA estimates that it may take 60 to 120 days to review an application after it is received. Because by law only accredited individuals may represent claimants, this backlog may cause delays for claimants who need assistance with their claims. VA currently has no plans to permanently increase the number of staff dedicated to accreditation. OGC officials told us that they have been seeking to increase the number of staff working on accreditation, but have been unsuccessful in obtaining additional permanent staff. An official noted that in the fall of 2012, several staff were assigned to accreditation on a temporary basis and, with their help, OGC was able to eliminate its backlog of attorney and VSO representative applications. However, OGC stated that a considerable backlog of agent applications remains and it is likely the backlog of attorney and VSO representative applications will return since this temporary initiative has ended. Moreover, as of May 2013, one of the four positions was not filled because of a resignation. OGC is in the process of replacing staff lost to attrition as well as obtaining an additional temporary staff person. Still, OGC officials stated that they would need several additional staff beyond the four dedicated positions to function more effectively. It is questionable if this will happen in the near future as VA’s proposed fiscal year 2014 budget calls for fewer staff in VA’s OGC. 
This may also affect OGC's plans to increase its oversight of accredited representatives because those plans are contingent upon eliminating the backlog of initial applications. VA's implementation of its accreditation process is also hampered by limited information technology (IT) support. Officials told us that the database system used by OGC cannot automatically inform individuals whether they are meeting program requirements, and OGC staff must do this manually. Further, an official noted that a significant amount of data entry is required when applicants submit information for accreditation. Officials noted that other IT improvements, such as the ability for applicants to electronically submit applications, or for accredited representatives to submit certifications of good standing and training certifications, would help OGC manage its responsibilities more efficiently. However, no steps have been taken to date toward developing these capabilities. VA's ability to identify and address abuses by representatives is limited because VA has missed opportunities to educate claimants about their rights and protections against potential abuses. In prior work, we reported that targeted communication with a specific message is a best practice for outreach to veterans. Individuals who do not yet have representation receive a letter from VA after submitting a claim containing some information on representation—such as explaining what VSOs are and that they provide assistance at no charge. However, the letter does not discuss attorneys and agents, nor does it note that a claimant should not have to pay for services associated with filing an initial claim. Similarly, the form that claimants use to designate an attorney or agent refers individuals to the section of the law governing fees, but does not explain that claimants should not pay for filing an initial claim. Beyond these forms, VBA officials told us that VA does not actively conduct outreach to claimants regarding representation, what to expect from their representative, or their right to not pay filing fees for initial claims. As a result, several VSOs we interviewed stated that veterans are often unaware of their rights or what to expect during the claims process. One VSO service officer told us that nearly all veterans he encounters are unaware that they should not pay to file initial claims. He added that if a veteran is told that he or she must pay a fee, the veteran will usually just assume this is how business is done. Further, many of the complaints to VA we reviewed regarded improper fees. VA's ability to learn about and address potential abuses also may be hampered by a complaint process that is not well-communicated to claimants. GAO standards for internal controls state that effective communication—such as with external stakeholders—is critical for agencies to ensure they receive information that may significantly affect whether they achieve their objectives. While VA regulations establish a complaint process, VA may be missing opportunities to serve and protect claimants as it has not clearly communicated to claimants or others how to report concerns about representatives. For example, VA's accreditation website does not explicitly state how to report concerns about representatives. While the website provides a link to an e-mail address used for general inquiries, an official noted that OGC receives a large volume of emails at this address—including complaints—and is behind in responding to inquiries.
Additionally, the materials provided to individuals when filing a claim also do not clearly state how to report complaints. When we interviewed veterans at two VSOs in the D.C. area, we generally found that they were unfamiliar with program requirements and did not know where or how to file a complaint. Further, the process of responding to and addressing complaints—which can be difficult and lengthy—is understaffed, thereby limiting its effectiveness. VA officials told us they require clear and convincing evidence in order to cancel a representative's accreditation. One official noted that the process of monitoring representatives who received complaints is difficult given competing demands for resources. Additionally, collecting evidence can be difficult because claimants may be reluctant to reveal their identities when making complaints. They added that some cancellation actions may take years to resolve when representatives exercise their right to appeal decisions. An OGC official told us that allocated resources were currently inadequate to effectively monitor representatives about whom complaints had been submitted and that information about complaints is not shared with other parts of VA. For instance, OGC does not share information with VBA that could help identify or monitor the activities of representatives with complaints. OGC estimated that only two attorneys or agents had their accreditation cancelled over the last 5 years for violating the rules of the program and that none were suspended. VA faces challenges with unaccredited individuals helping veterans file claims and charging claimants for assistance. While federal regulations require representatives to be accredited, we found a number of complaints about unaccredited individuals filing claims for veterans. Of the 24 complaints filed against attorneys and agents in 2012, 7 concerned unaccredited individuals. Because VA is not aware of the extent to which these individuals interact with claimants, VA cannot take action or ensure they provide quality services. An OGC official told us that when it learns of these individuals, OGC is limited in the actions it can take beyond instructing the individual to stop. Additionally, he said he has written to state attorneys general offices regarding potential wrongful actions a few times in the last year, but does not know if the states took action. In our review, we found a few instances in which OGC sent letters concerning unaccredited individuals to state attorneys general only to continue to receive complaints about these individuals. An OGC official added that, beyond cancelling or suspending accreditation, there are no penalties for individuals who violate the requirements of accreditation and that cases generally are not referred to VA's Inspector General. Accreditation also does not address whether individuals should sell financial products to veterans. Our prior work has shown that some accredited individuals were selling financial products to veterans in order to shelter assets and allow them to qualify for VA pension benefits. Some of these cases involved vulnerable populations, such as veterans in assisted living facilities, or involved individuals selling products that resulted in veterans losing control of their assets without qualifying for VA benefits. VA and some VSO officials told us that financial planners continue to be an area of concern. VA officials told us that an increasing number of financial planners are applying for accreditation as agents.
In fact, all six of the agents in our file review appeared to have a financial planning background. VA also told us that when an individual with a financial planning background applies for accreditation, it asks for additional information about their business plans, reviews any business websites, and reminds applicants that the purpose of accreditation is to provide assistance to veterans and that they should not use accreditation to promote financial products. That said, the official who oversees accreditation told us that they often lack a sufficient basis to deny accreditation to these individuals because being a financial planner in and of itself does not violate VA’s accreditation rules. He added that it might be helpful to collect additional information on these individuals, such as from financial regulators, when deciding whether to accredit them. It is also difficult for VA to ensure that claimants are being charged appropriate fees. Attorneys and agents are not allowed to charge or receive a fee for the preparation or initial filing of a claim, but are allowed to charge a fee for services provided after VA has decided the claim and a notice of disagreement has been filed initiating an appeal of that decision. The allowable fee is often 20 percent of retroactive benefits awarded if the claim is granted. However, there is no restriction on fees charged for services before an individual files a claim. VA’s OGC issued a letter in 2004 noting that attorneys may charge claimants for services that are rendered before the individual begins the process of filing a claim, such as consulting with the individual about the range of VA and other federal benefits he or she may qualify for. Some VSOs and other experts expressed some concern regarding pre-filing consultation fees. The head of one VSO noted that these fees may serve as a mechanism to hide the fact that attorneys and agents are charging claimants for preparing claims. Ambiguity regarding these fees makes it difficult for claimants and VA to know whether they are being charged allowable fees, and may result in attorneys inappropriately billing for work related to the claim as if it was for a general consultation. Because fees for pre-filing activities are outside the claims process, VA also has no way of knowing the extent to which they occur or are properly charged. More than half of the complaints against attorneys and agents (15 of 24) that we reviewed were related to fees. Hundreds of thousands of veterans and their families rely on accredited representatives to guide them through the process of applying for VA benefits. However, current program implementation and requirements do not sufficiently ensure that veterans and their families are protected against potential abuses or that VA has the ability to identify and address situations where representatives are not acting in the best interests of clients. While recent plans to collect more information on applicants and increase oversight of existing representatives are promising, it is unclear how OGC will implement and sustain these improvements given the current level of resources VA has allocated to this program. Additionally, without providing better information to claimants about how to report issues or concerns about their representation, claimants may not know where to turn to report an abuse or not even recognize that their representative is engaged in prohibited practices. 
Lastly, claimants may be vulnerable to emerging threats—such as unaccredited representatives—in the absence of VA tools to provide protection. We recognize that in considering program enhancements, VA will need to balance the effort of instituting changes and the additional burdens they may place on program staff and representatives with ensuring that claimants continue to have ready access to representation. However, representatives with ill-intent or poor knowledge can cause real harm to claimants and a weak accreditation process will negatively affect VA’s ability to provide veterans the benefits to which they are entitled. To improve VA’s ability to ensure that claimants are represented by qualified and responsible individuals, the Secretary of Veterans Affairs should explore options and take steps to: 1. Ensure an appropriate level of staff and IT resources are in place to implement the requirements of the accreditation program. This should include exploring options for utilizing other VA components and resources outside of OGC. 2. Strengthen initial and continuing knowledge requirements for accreditation for all types of representatives. 3. Enhance communications with claimants, including how they can report complaints related to their representation. This could include exploring options for incorporating information about representation and veterans’ rights into existing communications and outreach efforts. 4. Address potentially abusive practices by representatives who lack accreditation, charge inappropriate fees, or sell financial products to claimants that are not in their best interest. If necessary, VA should consider seeking additional legislative authority to address such practices and enforce program rules. We provided a draft of this report to the Secretary of Veterans Affairs for review and comment. In its comments (see app. II), VA generally agreed with our conclusions, and either concurred or concurred in principle or in part with our recommendations, as discussed more fully below. VA concurred in principle with our first recommendation to ensure accreditation has appropriate staffing and IT resources, noting that efforts to increase staff and obtain IT resources must be considered within the existing OGC budget. We agree and fully support VA’s plans to identify available resources within and outside of OGC. VA concurred in principle with our second recommendation that it explore strengthening initial and continuing knowledge requirements. VA stated that it believes that existing initial knowledge requirements for attorneys and agents adequately ensure that VA claimants have qualified representation. Additionally, VA expressed concerns that additional knowledge or testing requirements could have a chilling effect on attorney representation for claimants. Nevertheless, VA stated that it will consider ways in which it can equip newly accredited attorneys and agents with information regarding veterans benefits law and procedures. Additionally, VA stated it plans to revise and update examinations for prospective agents to ensure they have adequate knowledge of veterans law and procedures. Regarding VSO representatives, VA reiterated that it believes it is in each organization’s best interest to ensure their representatives are competent and qualified. Nevertheless, VA plans to request and review training curricula for up to about 10 percent of recognized organizations each year—an effort which we commend. 
We support these efforts, but continue to believe VA should consider ways to better equip all accredited attorneys and agents with relevant information and not limit efforts to just newly accredited attorneys and agents, for example, by improving the quality of required continuing legal education. Regarding our recommendation to enhance communications with claimants, VA concurred and plans to include information on how to report complaints on OGC’s accreditation Web site, and will work with VBA to identify potential outreach activities. We agree with VA’s stated efforts to improve communication with claimants. VA also concurred in principle with our recommendation to explore options for addressing potentially abusive practices by representatives and stated it would consider seeking additional legislative authority to address these practices and enforce program rules. VA noted that imposing penalties on unaccredited individuals, individuals who inappropriately charge claimants, or sell financial products to claimants could help curb inappropriate practices, but in some cases may have a chilling effect on the legitimate activities of others. We acknowledge that penalties may be an appropriate deterrent in some but not all circumstances and agree with VA’s desire to balance any changes with maintaining access for claimants to valuable assistance. We also urge VA to further explore other remedies that would not require legislative action, such as closer cooperation with state and local law enforcement regarding individuals who may commit unlawful acts. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, this document will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in Appendix III. In conducting our review of how the Department of Veterans Affairs (VA) accredits and oversees veterans’ representatives, our objectives were to examine (1) the extent to which VA’s procedures adequately ensure representatives meet program requirements, and (2) any obstacles that may impede VA’s effort to adequately implement its accreditation process. We conducted this performance audit from September 2012 to August 2013, in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with investigation standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. To determine the extent to which VA’s procedures are adequate, we reviewed pertinent federal laws and regulations and interviewed officials in VA’s Office of General Counsel (OGC) and Veterans Benefits Administration. 
To assess the extent to which VA carries out its procedures, we reviewed a random, representative sample of 92 case files for attorneys and agents who were granted accreditation in 2012. We examined these files to determine whether individuals provided complete information on their personal histories, whether individuals had the appropriate qualifications, and whether VA took steps to collect additional information when necessary. We determined whether the evidence in each file indicated that VA carried out the procedures that VA officials stated they follow when reviewing files. We also reviewed all 24 complaints that OGC received in 2012 regarding attorneys and claim agents in order to understand the actions that VA takes in response to concerns. Additionally, we selected a random, judgmental sample of 21 attorneys and agents to determine whether an independent background check would uncover issues that could call their character into question. These 21 individuals consisted of 5 attorneys and 6 agents selected in our random sample of 2012 accreditation decisions, and 5 attorneys and 5 agents whose complaint files we reviewed. We used Accurint—a commercial database of public records—to determine whether these individuals had (1) a criminal history, (2) bankruptcies, (3) liens, or (4) professional licenses revoked. In the instances in which Accurint delivered a positive result, we confirmed the result by obtaining court records. To provide further context on VA's procedures and to determine obstacles that impede VA's efforts to adequately implement accreditation, we interviewed a number of veterans service organizations (VSO)—which both assist veterans in filing claims and advocate for their interests—and organizations that represent attorneys and claim agents (see table 3). Additionally, we conducted a site visit to the Philadelphia VA Regional Office, where we interviewed regional managers, veterans service representatives, and staff who review fee agreements, as well as local VSO representatives. Finally, we informally met with groups of veterans who were present at two VSOs in the Washington, D.C. area on the days of our visits, to obtain views on their experiences with representation. Daniel Bertoni, (202) 512-7215 or [email protected]. In addition to the contact named above, Michele Grgich and Lori Rectanus (Assistant Directors), Daniel Concepcion and Aimee Elivert made key contributions to this report. David Chrisinger, Paul Desaulniers, Sheila McCoy, Wayne McElrath, Dae Park, Almeta Spencer, Roger Thomas, and Walter Vance provided support.
Representatives accredited by VA serve a critical role in helping veterans or their family members file claims for VA benefits. By law, accredited individuals must demonstrate good moral character and program knowledge and VA's OGC is tasked to ensure they do so by reviewing initial applications and monitoring ongoing requirements, such as training. GAO examined (1) the extent to which VA's procedures adequately ensure representatives meet program requirements, and (2) any obstacles that may impede VA's efforts to adequately implement its accreditation process. GAO reviewed relevant federal laws, regulations and procedures, and interviewed VA officials and organizations of accredited representatives. GAO also reviewed a representative sample of accreditation decisions made in 2012 as well as complaints received by VA in 2012. GAO also conducted additional checks on a random but small and non-representative sample of accredited individuals. The Department of Veterans Affairs' (VA) Office of General Counsel (OGC) procedures do not sufficiently ensure that accredited representatives have good character and knowledge. While GAO's analysis shows that VA follows its procedures for reviewing initial accreditation applications, VA relies on limited self-reported information to determine whether applicants have a criminal history or their character could be called into question, which in turn leaves VA vulnerable to accrediting individuals who may not provide responsible assistance. For example, when GAO conducted additional checks on a non-representative sample of accredited individuals, GAO found that some individuals had histories of bankruptcies or liens, information which could help develop a more complete picture of applicants' character and prompt further inquiry by VA into their background. VA's procedures also do not ensure that representatives have adequate program knowledge. For example, VA's initial training requirements are minimal and VA does not consistently monitor whether representatives meet additional continuing education requirements. As a result, some accredited representatives may not have adequate program knowledge to effectively assist clients with their claims. After being briefed on GAO's findings in May 2013, VA's OGC announced plans to take additional steps toward conducting background checks on applicants and auditing ongoing character and training requirements. VA efforts to administer accreditation are hindered by an inadequate allocation of resources and unclear communication with claimants. For example, OGC has only four staff dedicated to overseeing thousands of accreditation applications each year, in addition to monitoring approximately 20,000 accredited representatives. As a result, OGC has not kept pace with pending accreditation applications, and has not consistently monitored continuing requirements. OGC's reliance on manual data entry results in resource-intensive program administration. For instance, OGC lacks information technology systems and tools that would help it proactively and efficiently identify representatives who are not meeting ongoing training requirements. Moreover, VA does not clearly solicit feedback from claimants about accredited representatives. For example, neither VA's accreditation web page nor information VA sends to claimants clearly communicates their rights or how to report abuses. Absent such outreach, claimants may not be aware that some representatives may be engaging in prohibited practices. 
Lastly, VA's current accreditation program does not address some emerging threats to claimants. For instance, VA has received complaints regarding unaccredited individuals inappropriately charging claimants to apply for benefits. By law, only accredited individuals can assist claimants. However, VA is not aware of the extent to which these unaccredited individuals operate, and it is limited in the actions it can take to prevent them from assisting claimants. To improve the integrity of accreditation, GAO recommends that VA explore options for strengthening knowledge requirements and addressing emerging threats, improve its outreach, and determine the resources needed to adequately carry out accreditation. VA concurred or concurred in principle with GAO's recommendations and cautioned that imposing additional requirements to address concerns with representative knowledge or emerging threats could have a chilling effect on representation.
DOD is one of the largest and most complex organizations in the world. For fiscal year 2012, the budget requested for the department was approximately $671 billion—$553 billion in discretionary budget authority and $118 billion to support overseas contingency operations. The department is currently facing near- and long-term internal fiscal pressures as it attempts to balance competing demands to support ongoing operations, rebuild readiness following extended military operations, and manage increasing personnel and health care costs and significant cost growth in its weapons systems programs. For more than a decade, DOD has dominated GAO's list of federal programs and operations at high risk of fraud, waste, abuse, and mismanagement. In fact, all of the DOD programs on GAO's High-Risk List relate to business operations, including systems and processes related to management of contracts, finances, the supply chain, and support infrastructure, as well as weapon systems acquisition. Long-standing and pervasive weaknesses in DOD's financial management and related business processes and systems have (1) resulted in a lack of reliable information needed to make decisions and report on the financial status and cost of DOD activities to Congress and DOD decision makers, (2) adversely affected its operational efficiency in business areas, such as major weapon systems acquisition and support and logistics, and (3) left the department vulnerable to fraud, waste, and abuse. In support of its military operations, DOD performs an assortment of interrelated and interdependent business functions, such as logistics management, procurement, health care management, and financial management. The DOD systems environment that supports these business functions has been overly complex and error prone, characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. The department has stated that the following ERPs are critical to transforming the department's business operations and addressing some of its long-standing weaknesses. A brief description of each of the ERPs is presented below.
• The General Fund Enterprise Business System (GFEBS) was initiated in October 2004 and is intended to support the Army's standardized financial management and accounting practices for the Army's general fund, with the exception of that related to the Army Corps of Engineers, which will continue to use its existing financial system, the Corps of Engineers Financial Management System. GFEBS is intended to allow the Army to share financial, asset, and accounting data across the active Army, the Army National Guard, and the Army Reserve. The Army estimates that when fully implemented, GFEBS will be used to control and account for about $140 billion in annual spending.
• The Global Combat Support System-Army (GCSS-Army) was initiated in December 2003 and is expected to integrate multiple logistics functions by replacing numerous legacy systems and interfaces. The system is intended to provide tactical units with a common authoritative source for financial and related nonfinancial data, such as information related to maintenance and transportation of equipment. The system is also intended to provide asset visibility for accountable items. GCSS-Army will manage over $49 billion in annual spending by the active Army, National Guard, and Army Reserve.
• The Logistics Modernization Program (LMP) was initiated in December 1999 and is intended to provide order fulfillment, demand and supply planning, procurement, asset management, material maintenance, and financial management capabilities for the Army's working capital fund. The third and final deployment of LMP occurred in October 2010.
• The Navy Enterprise Resource Planning System (Navy ERP) was initiated in July 2003 and is intended to standardize the acquisition, financial, program management, maintenance, plant and wholesale supply, and workforce management capabilities at Navy commands.
• The Global Combat Support System–Marine Corps (GCSS-MC) was initiated in September 2003 and is intended to provide the deployed warfighter with enhanced capabilities in the areas of warehousing, distribution, logistical planning, depot maintenance, and improved asset visibility.
• The Defense Enterprise Accounting and Management System (DEAMS) was initiated in August 2003 and is intended to provide the Air Force the entire spectrum of financial management capabilities, including collections, commitments and obligations, cost accounting, general ledger, funds control, receipts and acceptance, accounts payable and disbursement, billing, and financial reporting for the general fund. According to Air Force officials, when DEAMS is fully operational, it is expected to maintain control and accountability for about $160 billion in spending.
• The Expeditionary Combat Support System (ECSS) was initiated in January 2004 and is intended to provide the Air Force a single, integrated logistics system—including transportation, supply, maintenance and repair, engineering and acquisition—for both the Air Force's general and working capital funds. Additionally, ECSS is intended to provide the financial management and accounting functions for the Air Force's working capital fund operations. When fully implemented, ECSS is expected to control and account for about $36 billion of inventory.
• Each of the military departments is in the process of developing its own Service Specific Integrated Personnel and Pay System. The military departments' integrated personnel and pay systems replace the Defense Integrated Military Human Resources System, which was initiated in February 1998 and intended to provide a joint, integrated, standardized personnel and pay system for all military personnel.
• The Defense Agencies Initiative (DAI) was initiated in January 2007 and is intended to modernize the defense agencies' financial management processes by streamlining financial management capabilities and transforming the budget, finance, and accounting operations. When DAI is fully implemented, it is expected to have the capability to control and account for all appropriated, working capital, and revolving funds at the defense agencies implementing the system.
• The Enterprise Business System (EBS) is the second phase of the Defense Logistics Agency's (DLA) Business System Modernization (BSM) effort, which was initiated in November 1999 and implemented in July 2007. BSM focused on DLA's operations in five core business processes: order fulfillment, demand and supply planning, procurement, technical/quality assurance, and financial management. In September 2007, the name of the program was changed to Enterprise Business System as it entered the second phase, and according to the agency, EBS will further enhance DLA's supply chain management of nearly 6 million hardware and troop support items.
Implementation of the ERPs is intended to standardize and streamline DOD's financial management and accounting systems, integrate multiple logistics systems and finance processes, and provide asset visibility for accountable items. Effective implementation of the ERPs is also critical to DOD's auditability efforts and goals. However, to date, DOD's ERP implementations have been negatively affected by schedule delays, cost increases, failures in delivering the necessary functionality, and a lack of compliance with required standards. Delays in the implementation of ERPs increase costs because of the additional time and rework needed on the new systems. These delays have also continued the funding of legacy systems longer than anticipated, further eroding the estimated savings that were to accrue to DOD as a result of modernization. If the ERPs do not provide the intended capabilities, DOD's goal of modernizing and streamlining its business processes and strengthening its financial management capabilities leading to auditable financial statements could be jeopardized. The following are examples of weaknesses in DOD's implementation efforts. Accurate and reliable schedule and cost estimates are essential for DOD management to make good decisions regarding ERP implementation and for overseeing progress of the project. The success of any program depends on having a reliable schedule of the program's work activities: what will occur, how long each activity will take, and how the activities are related to one another. As such, the schedule not only provides a road map for systematic execution of a program, but also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. As highlighted below, we and the DOD IG have questioned the accuracy and reliability of the ERPs' schedule and cost estimates. In October 2010, we reported that based upon the data provided by DOD, 6 of the 10 ERPs DOD had identified as critical to transforming its business operations had experienced schedule delays ranging from 2 to 12 years, and five had incurred cost increases totaling an estimated $6.9 billion. DOD told us that the ERPs will replace hundreds of legacy systems that cost hundreds of millions of dollars to operate annually. According to the program management officers, while there had been schedule slippages and cost increases for several of the ERP efforts, the functionality that was envisioned and planned when each program was initiated remained the same. While the original intent of each program remained the same, the anticipated savings that were to accrue to the department may not be fully realized. Our October 2010 report also noted that our analysis of the schedule and cost estimates for four ERP programs—DEAMS, ECSS, GFEBS, and GCSS-Army—found that none of the programs were fully following best practices for developing reliable schedule and cost estimates. More specifically, none of the programs had developed a fully integrated master schedule that reflected all activities, including both government and contractor activities. In addition, none of the programs established a valid critical path or conducted a schedule risk analysis.
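To illustrate what a schedule risk analysis adds beyond a deterministic schedule, the following is a minimal sketch in Python on an invented four-activity network; the activities, precedence links, and three-point duration estimates are hypothetical and are not drawn from any ERP program. It first computes the finish date implied by most-likely durations, then simulates duration uncertainty to estimate a confidence level for the finish date.

import random

# Hypothetical network: A precedes B and C; D requires both B and C.
# Durations are (optimistic, most likely, pessimistic) in weeks.
activities = {
    "A": ((2, 4, 8), []),
    "B": ((3, 6, 12), ["A"]),
    "C": ((1, 2, 9), ["A"]),
    "D": ((2, 3, 5), ["B", "C"]),
}

def project_finish(duration):
    # Walk the network in topological order, tracking each activity's finish time
    finish = {}
    for name in ("A", "B", "C", "D"):
        (opt, likely, pess), preds = activities[name]
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + duration(opt, likely, pess)
    return finish["D"]

# Deterministic schedule: most-likely durations only (finishes in 13 weeks)
print(f"deterministic finish: {project_finish(lambda o, m, p: m):.1f} weeks")

# Schedule risk analysis: simulate uncertainty in every activity duration
random.seed(1)
runs = sorted(project_finish(lambda o, m, p: random.triangular(o, p, m))
              for _ in range(10_000))
print(f"80th-percentile finish: {runs[int(0.8 * len(runs))]:.1f} weeks")

The gap between the deterministic date and the 80th-percentile date is the point of the best practice: a program that baselines to the deterministic date without a risk analysis is, in effect, planning to the optimistic end of its own uncertainty, which is one way the slippages described above arise.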
The report also noted that in July and September 2008, we reported that the schedules for the GCSS-MC and the Navy ERP were developed using some of these best practices, but several key practices that are fundamental to having a schedule that provides a sufficiently reliable basis for estimating costs, measuring progress, and forecasting slippages were not fully employed. Furthermore, our analysis of the four ERP programs' cost estimates found that ECSS, GFEBS, and GCSS-Army did not include a sensitivity analysis, while cost estimates for GFEBS did not include a risk and uncertainty analysis. GAO, Office of Management and Budget (OMB), and DOD guidance stipulate that risk and uncertainty analysis should be performed to determine the level of risk associated with the dollar estimate. A sensitivity analysis would assist decision makers in determining how changes to assumptions or key cost drivers (such as labor or equipment) could affect the cost estimate. We also previously reported similar concerns regarding the GCSS-MC and the Navy ERP. A reliable cost estimate that includes sensitivity analysis and information about the degree of uncertainty provides the basis for realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction, and accountability for results.

In a June 2011 report, the DOD IG reported that the Army estimated it will spend $2.4 billion on the implementation of GFEBS. However, the report noted that the Army had not identified all of the requirements and costs associated with the project. In addition, the Army used unsupported and incomplete life-cycle cost estimates to determine $1.4 billion in cost savings and used an inappropriate methodology to determine the estimated $3.9 billion in benefits for implementing GFEBS.

To support its business functions, DOD has reported that it relies on about 2,200 business systems, including accounting, acquisition, logistics, and personnel systems. DOD has stated that its ERPs will replace over 500 legacy systems that cost hundreds of millions of dollars to operate annually. However, some ERPs we reviewed did not deliver the functionality they were intended to provide, thereby requiring continued operation of the existing systems.

In November 2010, we reported that after two deployments of its LMP system, the Army had improved its implementation strategy but continued to face problems that might prevent the system from fully providing its intended functionality at sites planned for the third and final deployment. While the Army improved its data-testing strategy for the third deployment, data quality problems continued at previous deployment sites and prevented staff at the sites from using LMP as intended. Also, new testing activities to support the third deployment were designed to assess how well the software functions but did not evaluate whether the data loaded into LMP were of sufficient quality to support the system's processes. We found that the Army had yet to fully develop the software capabilities that LMP needed to achieve its intended functionality for some third-deployment sites. Without this functionality, LMP might limit the ability of staff at these sites to perform certain tasks, such as maintaining accountability of ammunition. For example, the Joint Munitions and Lethality Life Cycle Management Command conducts operations related to the production, management, and maintenance of ammunition.
Officials at the command's sites told us that LMP—unlike the systems that will be replaced once LMP is deployed—did not enable them to ship, receive, inventory, or perform stock movements for ammunition. LMP program management officials told us that the omission of ammunition-specific functionality was identified in 2009 and that its development began in January 2010. The Army planned to deliver the functionality and interfaces in phases through March 2011. The Army also has mitigation plans to address this functionality gap. For example, the command planned to hire 172 additional personnel to perform manual data entry until the software can perform the required functions. We recommended that the Army report to Congress on the extent to which the third deployment sites were able to use LMP as intended, the benefits that LMP was providing, an assessment of the Army's progress in ensuring that data used in LMP can support the LMP processes, timelines for the delivery of software and additional capabilities necessary to achieve the full benefits of LMP, and the costs and time frames of the mitigation strategies.

Our preliminary results from an ongoing ERP review identified problems related to GFEBS and DEAMS providing Defense Finance and Accounting Service (DFAS) users with the expected capabilities in accounting, management information, and decision support. To compensate for the deficiencies, DFAS users have devised manual workarounds and applications to obtain the information they need to perform their day-to-day tasks. GFEBS is expected to be fully deployed during fiscal year 2012, is currently operational at 154 locations, including DFAS, and is being used by approximately 35,000 users. DEAMS is expected to be fully deployed during fiscal year 2016, is currently operational at Scott Air Force Base and DFAS, and is being used by about 1,100 individuals. Examples of the problems in these systems that DFAS users have identified include the following:

• The backlog of unresolved GFEBS trouble tickets increased from about 250 in September 2010 to approximately 400 in May 2011. According to Army officials, this increase was not unexpected because the number of users and the number of transactions being processed by the system have increased, and the Army and DFAS are taking steps to address problems raised by DFAS.

• Approximately two-thirds of invoice and receipt data must be manually entered into GFEBS from the invoicing and receiving system (i.e., Wide Area Work Flow) due to interface problems. DFAS personnel told us that manual data entry will eventually become infeasible because of the increased quantities of data that will have to be manually entered as GFEBS is deployed to additional locations. Army officials acknowledged that there is a problem with the interface between Wide Area Work Flow and GFEBS, that this problem has reduced the effectiveness of GFEBS, and that they are working with DOD to resolve it.

• GFEBS lacks the ability to run ad hoc queries or to research data to resolve problems or answer questions. The Army has recognized this limitation and is currently developing a system enhancement that Army officials expect will better support users' needs.

• Manual workarounds are needed to process certain accounts receivable transactions, such as travel debts. DFAS personnel told us that this problem is the result of the improper conversion of data transferred from the legacy systems to DEAMS.
• DFAS officials indicated that they were experiencing difficulty with some DEAMS system interfaces. For example, the interface problem with the Standard Procurement System has become so severe that the interface has been turned off, and the data must be manually entered into DEAMS.

• DFAS officials told us that DEAMS does not provide the capability—which existed in the legacy systems—to produce ad hoc query reports that can be used to perform the data analysis needed for daily operations. They also noted that when some reports are produced, the accuracy of those reports is questionable.

Army and Air Force officials told us that they have plans to address these issues, and the Army has plans to validate the audit readiness of GFEBS in a series of independent auditor examinations over the next several fiscal years. For DEAMS, the DOD Milestone Decision Authority has directed that the system not be deployed beyond Scott Air Force Base until the known system weaknesses have been corrected and the system has been independently tested to ensure that it is operating as intended.

To be efficient and effective as accounting, financial, and business information tools, DOD's ERPs must be able to process information according to accounting and financial reporting standards. However, this has not always been the case. In a November 2010 report, the DOD IG stated that after more than 10 years in development and a cost of $1.1 billion, the Army's LMP system was not compliant with the U.S. Government Standard General Ledger, which supports the consistent recording of financial information and the preparation of standard reports required by OMB and the Department of the Treasury. Agencies are required by law to maintain financial management systems that "comply substantially" with the Standard General Ledger, which contains two series of accounts—budgetary accounts used to recognize and track budget approval and execution, and proprietary accounts used to recognize and track assets, liabilities, revenues, and expenses. Specifically, the DOD IG found that LMP did not contain 42 general ledger account codes necessary to record Army working capital fund financial transactions. As a result, LMP cannot record all working capital fund transactions correctly and will therefore continue to inaccurately report financial data for the Army's working capital fund operations. The DOD IG report further noted that the Army and DOD financial communities had not established the appropriate senior-level governance needed to develop, test, and implement the financial management requirements and processes needed in LMP to record Army Working Capital Fund financial data at the transaction level. As a result, LMP was not substantially compliant with the Federal Financial Management Improvement Act of 1996. The DOD IG also reported that the system did not resolve any of the Army Working Capital Fund internal control weaknesses. The report concluded that the Army will need to spend additional funds to comply with U.S. Government Standard General Ledger requirements and achieve an unqualified audit opinion on its Army Working Capital Fund financial statements.

GAO will continue to monitor the department's progress and provide feedback on the status of its financial management improvement efforts. More specifically, we are in the process of finalizing our work related to GFEBS and DEAMS.
DOD has invested billions of dollars and will invest billions more to implement the modern business systems it will rely on for timely, accurate, and reliable information in managing its financial and other business operations, preparing auditable financial statements, and maintaining accountability for its stewardship of public funds. Too often, though, costs exceed estimates by millions as system-development programs run years behind schedule. Even with extended periods of development, we have found new systems that are missing interfaces needed to integrate them with existing systems while others, slated to replace legacy systems, are delivered without some of the functionalities performed by the systems they are expected to replace. Meanwhile, the department continues to operate largely in the duplicative, stovepiped environment of its legacy systems. The continued deficiencies in the development and implementation of its ERPs also erode savings DOD has expected to accrue as a result of more-efficient business systems. While the implementation of the ERPs is a complex, demanding endeavor, the success of these systems is critical if DOD is to reach its auditability goals. Effective planning and implementation and the best efforts of a committed leadership, management, and staff will be critical. Mr. Chairman and members of the Panel, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Panel may have at this time. For further information regarding this testimony, please contact Asif A. Khan, (202) 512-9869 or [email protected]. Key contributors to this testimony include J. Christopher Martin, Senior-Level Technologist; Karen Richey, Assistant Director; Darby Smith, Assistant Director; Beatrice Alff; Maxine Hattery; Jeffrey Isaacs; Jason Lee; and Brian Paige. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As one of the largest and most complex organizations in the world, the Department of Defense (DOD) faces many challenges in resolving its long-standing financial and related business operations and system problems. DOD is in the process of implementing modern multifunction enterprise resource planning (ERP) systems to replace many of its outdated legacy systems. The ERPs are intended to perform business-related tasks such as general ledger accounting and supply chain management. Modernizing DOD's business systems is a critical part of transforming the department's business operations, addressing high-risk areas, and providing more-accurate and reliable financial information to Congress on DOD's operations. The Panel requested that GAO provide its perspective on DOD's ERP implementation efforts and the impact implementation problems could have on DOD's efforts to improve financial management and be audit ready by fiscal year 2017. This statement is based on GAO's prior work, reports issued by the Department of Defense Inspector General (DOD IG), and GAO's ongoing oversight of selected DOD ERP efforts. Over the years, GAO has made numerous recommendations to improve the department's financial management operations. DOD has invested billions of dollars and will invest billions more to develop and implement 10 ERPs that it has estimated will replace over 500 legacy systems that reportedly cost hundreds of millions of dollars to operate annually. DOD considers implementation of the ERPs as critical not only for addressing weaknesses in financial management, but also for resolving weaknesses in other high-risk areas such as business systems modernization and supply chain management. The ERPs are also important for DOD's goal of departmentwide audit readiness by fiscal year 2017. Furthermore, in light of the Secretary of Defense's recent decision that the Statement of Budgetary Resources is to be audit ready by fiscal year 2014, it is critical that the department have such systems in place to support its auditability goals. To date, however, DOD's ERP implementation has been impaired by delays, cost increases, failures in delivering the necessary functionality, and a lack of compliance with required standards. Delays in implementation have extended the use of existing duplicative, stovepiped systems, and the need to fund them. More specifically, (1) GAO has reported that, based upon the data provided by DOD, 6 of the 10 ERPs DOD had identified as critical to transforming its business operations experienced schedule delays ranging from 2 to 12 years, and five had incurred cost increases totaling an estimated $6.9 billion. (2) GAO's review of 6 ERPs found that none of the programs had developed a fully integrated master schedule, a best practice and tool in the management of business-system development that is crucial to estimating the overall schedule and cost of a program. (3) DOD IG has reported that the Army's Logistics Modernization Program, which is intended to provide financial management capabilities for the Army Working Capital Fund, was not compliant with the U.S. Government Standard General Ledger, which supports the consistent recording of financial information and the preparation of standard reports required by the Office of Management and Budget and the Department of the Treasury. 
Further, GAO's preliminary results from an ongoing audit of two ERPs--the Army's General Fund Enterprise Business System and the Air Force's Defense Enterprise Accounting and Management System--found that the systems did not provide Defense Finance and Accounting Service users with the expected capabilities in accounting, management information, and decision support. System problems identified include interface issues between legacy systems and the new ERPs, lack of ad hoc query reporting capabilities, and reduced visibility for tracing transactions to resolve accounting differences. To compensate for these operational deficiencies, users were relying on manual workarounds to perform day-to-day operations. Such performance deficiencies, delays, and other problems in ERP implementation can negatively impact DOD's auditability goals.
The federal government began with a public debt of about $78 million in 1789. Since then, the Congress has attempted to control the size of the debt by imposing ceilings on the amount of Treasury securities that can be outstanding. In February 1941, an overall ceiling of $65 billion was set on all types of Treasury securities that could be outstanding at any one time. The debt ceiling was raised several times between February 1941 and June 1946, when a ceiling of $275 billion was set that remained in effect until August 1954. At that time, the first temporary debt ceiling, which added $6 billion to the $275 billion permanent ceiling, was imposed. Since then, numerous temporary and permanent increases in the debt ceiling have been enacted. Total debt subject to the debt ceiling, as of June 30, 2002, was about $6.1 trillion. About 44 percent, or $2.7 trillion, was held by federal trust funds, such as the Social Security trust funds and the Civil Service Retirement and Disability Trust Fund (Civil Service fund), and by the Government Securities Investment Fund of the Federal Employees’ Retirement System (G-Fund), hereafter collectively referred to as Funds. The Secretary of the Treasury has several responsibilities related to the federal government’s financial management operations. These include paying the government’s obligations and investing Funds’ receipts not needed for current benefits and expenses. The Secretary has generally been provided with the ability to issue the necessary securities to the Funds for investment purposes and to borrow the necessary funds from the public to pay government obligations. Under normal circumstances, the debt ceiling is not an impediment to carrying out these responsibilities. Treasury is notified by the appropriate agency (such as the Office of Personnel Management for the Civil Service fund) of the amount that should be invested (or reinvested), and Treasury makes the investment. In some cases, the agency may also specify the security that Treasury should purchase. These securities count against the debt ceiling. Consequently, if Funds’ receipts are not invested, an increase in the debt subject to the debt ceiling does not occur. When Treasury is unable to borrow because the debt ceiling has been reached, the Secretary is unable to fully discharge his financial management responsibilities using the normal methods. In 1985, the government experienced a debt ceiling crisis from September 3 through December 11. During that period, Treasury took several actions that were similar to those discussed in this report. For example, Treasury redeemed Treasury securities held by the Civil Service fund earlier than normal in order to borrow sufficient cash from the public to meet the fund’s benefit payments and did not invest some trust fund receipts. In 1986 and 1987, after Treasury’s experiences during prior debt ceiling crises, the following statutory authorities were provided to the Secretary of the Treasury to use the Civil Service fund and the G-Fund to assist Treasury in managing its financial operations during a debt ceiling crisis: 1. Redemption of securities held by the Civil Service fund. Subsection (k) of 5 U.S.C. 8348 provides authority to the Secretary of the Treasury to redeem securities or other invested assets of the Civil Service fund before maturity to prevent the amount of public debt from exceeding the debt ceiling. Subsection (k) of 5 U.S.C. 
8348 also provides that, before exercising the authority to redeem securities of the Civil Service fund, the Secretary must first determine that a "debt issuance suspension period" exists. Subsection (j) of 5 U.S.C. 8348 defines a debt issuance suspension period as any period for which the Secretary has determined that obligations of the United States may not be issued without exceeding the debt ceiling. The statute authorizing the debt issuance suspension period and its legislative history are silent as to how the Secretary should determine the length of a debt issuance suspension period. Specifically, subsection (j)(5) of 5 U.S.C. 8348 states that "the term 'debt issuance suspension period' means any period for which the Secretary of the Treasury determines for purposes of this subsection that the issuance of obligations of the United States may not be made without exceeding the public debt limit."

2. Suspension of Civil Service fund investments. Subsection (j) of 5 U.S.C. 8348 provides authority to the Secretary of the Treasury to suspend additional investment of amounts in the Civil Service fund if the investment cannot be made without causing the amount of public debt to exceed the debt ceiling. This subsection of the statute also authorizes the Secretary to make the Civil Service fund whole after the debt issuance suspension period has ended.

3. Suspension of G-Fund investments. Subsection (g) of 5 U.S.C. 8438 provides authority to the Secretary of the Treasury to suspend the issuance of additional amounts of obligations of the United States to the G-Fund if issuance cannot occur without causing the amount of public debt to exceed the debt ceiling. The subsection authorizes the Secretary to make the G-Fund whole after the debt issuance suspension period has ended.

We have previously reported on aspects of Treasury's actions during the 1995/1996 debt issuance suspension period and earlier debt ceiling crises (see Related GAO Products). Our objectives were to (1) develop a chronology of significant events related to the debt issuance suspension periods during April 2002 and May/June 2002, (2) analyze the financial aspects of Treasury's actions taken during the 2002 debt issuance suspension periods and assess the legal basis of these actions, and (3) analyze the impact of the policies and procedures used by Treasury to manage the debt during the 2002 debt issuance suspension periods.

To develop a chronology of the significant events related to the 2002 debt issuance suspension periods, we obtained and reviewed applicable documents. We also discussed Treasury's actions during the debt issuance suspension periods with senior Treasury officials. To analyze the financial aspects of Treasury's actions taken, we (1) reviewed the methodologies Treasury developed to minimize the impact of its departures from normal policies and procedures on the Civil Service fund and the G-Fund, (2) quantified the impact of the departures, (3) assessed whether any principal and interest losses were fully restored, and (4) assessed whether any losses were incurred that could not be restored under Treasury's current statutory authority. To assess the legal basis of Treasury's departures from its normal policies and procedures, we identified the applicable legal authorities and determined how Treasury applied them during the 2002 debt issuance suspension periods. Our evaluation included those authorities related to issuing and redeeming Treasury securities during a debt issuance suspension period and restoring losses after such a period has ended.
To analyze the impact of the policies and procedures used by Treasury to manage the debt during debt issuance suspension periods, we reviewed the actions taken and the stated policies and procedures used during debt issuance suspension periods. To determine the stated policies and procedures used during the 2002 debt issuance suspension periods, we discussed with Treasury officials and examined the support for actions taken during these periods. We also compiled and analyzed source documents relating to previous debt issuance suspension periods, including executive branch legal opinions, memorandums, and correspondence. We performed our work from April 4 through July 31, 2002, in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of the Treasury or his designee. The written response from the Fiscal Assistant Secretary of Treasury is reprinted in appendix I. In December 2001, Treasury analysts concluded that the debt ceiling of $5.95 trillion might be reached in February 2002. Table 1 shows the significant actions the Congress and the executive branch took from December 2001 through June 2002 to address the debt ceiling. During the second 2002 debt issuance suspension period, the Secretary of the Treasury redeemed Treasury securities held by the Civil Service fund earlier than normal and suspended the investment of Civil Service fund receipts. Subsection (k) of 5 U.S.C. 8348 authorizes the Secretary of the Treasury to redeem securities or other invested assets of the Civil Service fund before maturity to prevent the amount of public debt from exceeding the debt ceiling. The statute does not require that early redemptions be made only for the purpose of making Civil Service fund payments. Further, the statute permits early redemptions even if the Civil Service fund has adequate cash balances to cover such payments. Before redeeming Civil Service fund securities earlier than normal, the Secretary must determine that a debt issuance suspension period exists. The statute authorizing the debt issuance suspension period and its legislative history are silent as to how to determine the length of a debt issuance suspension period. On May 14, 2002, the Secretary of the Treasury declared that a debt issuance suspension period would begin no later than May 16 and would last until June 28, 2002. On May 16, 2002, Treasury redeemed about $4 billion of the Civil Service fund’s Treasury securities using this authority. The $4 billion of redemptions was determined based on (1) the length of the debt issuance suspension period (May 16 through June 28, 2002) and (2) the estimated monthly Civil Service fund benefit payments that would occur during that period. These were appropriate factors to use in determining the amount of Treasury securities to redeem early. Since Treasury had redeemed the securities associated with the June 3, 2002, payments in May, it redeemed only the difference between the amount that had been redeemed early (less any reinvestments) and the actual amount of benefit payments made on June 3. In this case, Treasury redeemed about $728 million associated with reinvestments and about $8 million that represented the difference between the estimated payments and the actual payments made on June 3, 2002. Subsection (j) of 5 U.S.C. 
8348 authorizes the Secretary of the Treasury to suspend additional investment of amounts in the Civil Service fund if the investment cannot be made without causing the amount of public debt to exceed the debt ceiling. From May 17 to June 28, 2002, the Civil Service fund had about $2 billion in receipts that were not invested. On June 28, 2002, after the debt ceiling was raised, these receipts were invested. Subsection (g) of 5 U.S.C. 8438 authorizes the Secretary of the Treasury to suspend the issuance of additional amounts of obligations of the United States to the G-Fund if the issuance cannot be made without causing the amount of public debt to exceed the debt ceiling. Each day from April 4 to April 16, 2002, and from May 16 to June 28, 2002, Treasury determined the amount of funds that the G-Fund would be allowed to invest in Treasury securities and, when necessary, suspended some investments and reinvestments of the G-Fund receipts and maturing securities that would have caused the debt ceiling to be exceeded. On April 4, 2002, when the Secretary determined that the first debt issuance suspension period had begun, the G-Fund held about $41 billion of Treasury securities that would mature that day. To ensure that it did not exceed the statutory debt limit, Treasury did not reinvest about $13.7 billion of these securities on this date. On April 16, 2002, the debt issuance suspension period ended, and Treasury fully invested the G-Fund and compensated the G-Fund for its interest losses. The G-Fund remained fully invested until the start of the second debt issuance suspension period on May 16, 2002. On that date, the G-Fund held about $41 billion of maturing Treasury securities. To ensure that it did not exceed the statutory debt limit, Treasury did not reinvest about $9.2 billion of these securities. During both debt issuance suspension periods, the amount of the G-Fund’s receipts that Treasury invested changed daily, depending on the amount of the government’s outstanding debt. Although Treasury can accurately predict the outcome of some events that affect the outstanding debt, it cannot precisely determine the outcome of others until they occur. For example, the amount of securities that Treasury will issue to the public from an auction can be determined some days in advance because Treasury can control the amount that will be issued. On the other hand, the amount of savings bonds that will be issued and redeemed and of securities that will be issued to, or redeemed by, various government Funds is difficult to precisely predict. Because of these difficulties, Treasury needed a way to ensure that the government’s Funds activities did not cause the debt ceiling to be exceeded and also to maintain normal investment and redemption policies for the majority of the government Funds. To do this, each day during a debt issuance suspension period, Treasury calculated the amount of public debt subject to the debt ceiling, excluding the receipts that the G-Fund would normally invest; determined the amount of G-Fund receipts that could safely be invested without exceeding the debt ceiling and invested this amount in Treasury securities; and suspended investment, when necessary, of the G-Fund’s remaining receipts. For example, on May 23, 2002, excluding G-Fund transactions, Treasury issued about $32.2 billion and redeemed about $29.1 billion of other Funds’ securities that counted against the debt ceiling. 
Treasury also issued about $66.4 billion and redeemed about $56.1 billion of other securities. Since Treasury had been at the debt ceiling the previous day, Treasury could not invest the entire amount that the G-Fund had requested ($41 billion) without exceeding the debt ceiling. As a result, the $13.4 billion difference between the $98.6 billion of securities issued and the $85.2 billion of securities redeemed was added to the amount of uninvested G-Fund receipts. This raised the amount of uninvested funds for the G-Fund from about $900 million to about $14 billion on that date. Interest on the uninvested funds was not paid until the debt issuance suspension period ended. Treasury used the same policies and procedures for calculating the interest losses for both the 1995/1996 and 2002 debt issuance suspension periods. On June 28, 2002, the statutory debt limit was raised to $6.4 trillion. By June 30, 2002, Treasury restored all losses to the Civil Service fund and the G-Fund. The Civil Service fund incurred about $15.4 million in principal and interest losses during the second 2002 debt issuance suspension period. When 5 U.S.C. 8348 was amended to expressly authorize the Secretary of the Treasury to redeem securities earlier than normal or to refrain from promptly investing Civil Service fund receipts because of debt ceiling limitations, it was also amended to ensure that such actions would not result in long-term losses to the Civil Service fund. Thus, the Secretary of the Treasury was authorized to immediately restore, to the maximum extent practicable, the Civil Service fund’s security holdings to the proper balances when a debt issuance suspension period ends and to restore lost interest on the subsequent first normal interest payment date. Under this statute, Treasury took the following actions once the debt issuance suspension period had ended: Treasury invested about $2 billion of uninvested receipts on June 28, 2002. Treasury paid the Civil Service fund about $15.4 million as compensation for losses incurred because of the actions it had taken. Treasury made payment on June 30, 2002, because this was the next semiannual interest payment date. We verified that after these transactions the Civil Service fund’s security holdings were, in effect, the same as they would have been had the debt issuance suspension period not occurred. For the two periods from April 4 to April 16, 2002, and from May 16 to June 28, 2002, the G-Fund lost about $27.7 million and $139.6 million in interest, respectively, because its excess funds were not fully invested. As discussed above, the amount of funds invested for the G-Fund fluctuated daily during the debt issuance suspension period, with the investment of some funds being suspended. When 5 U.S.C. 8438 was amended to expressly authorize the Secretary of the Treasury to suspend G-Fund investments because of debt ceiling limitations, it was also amended to ensure that such actions would not result in long-term losses to the G-Fund. Thus, the Secretary of the Treasury was authorized to make the G-Fund whole by restoring any losses once the debt issuance suspension period ended. On April 16, 2002, when the first debt issuance suspension period was terminated by the Secretary of the Treasury, and on June 28, 2002, when the debt ceiling was raised, Treasury restored the lost interest on the G-Fund’s uninvested funds. Consequently, the G-Fund was fully compensated for its interest losses during the 2002 debt issuance suspension periods. 
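To make the daily mechanics concrete, the following sketch reproduces the May 23, 2002, example in code. The rounded dollar figures are taken from this report; the function itself is only a simplified illustration of the calculation Treasury performed each day, not a description of Treasury's actual systems.

```python
# Simplified sketch of Treasury's daily G-Fund investment decision during a
# debt issuance suspension period, using the rounded May 23, 2002, figures
# cited above (all amounts in billions of dollars). Illustration only.

DEBT_CEILING = 5_950  # statutory debt limit then in effect: $5.95 trillion

def gfund_invest(debt_excluding_gfund, gfund_request):
    """Invest only as much of the G-Fund's request as fits under the ceiling."""
    room = max(0.0, DEBT_CEILING - debt_excluding_gfund)
    invested = min(gfund_request, room)
    return invested, gfund_request - invested  # (invested, suspended)

gfund_request = 41.0    # ~$41B of maturing G-Fund securities to reinvest
prior_uninvested = 0.9  # ~$0.9B of G-Fund receipts already uninvested

# Treasury was at the ceiling the prior day with all but $0.9B of the
# G-Fund invested, so debt excluding the G-Fund stood at:
debt_excluding_gfund = DEBT_CEILING - (gfund_request - prior_uninvested)

# Non-G-Fund activity on May 23 issued $13.4B more than it redeemed:
# ($32.2B + $66.4B) issued, less ($29.1B + $56.1B) redeemed.
debt_excluding_gfund += (32.2 + 66.4) - (29.1 + 56.1)

invested, suspended = gfund_invest(debt_excluding_gfund, gfund_request)
print(f"invested ~${invested:.1f}B, suspended ~${suspended:.1f}B")
# Prints invested ~$26.7B, suspended ~$14.3B: uninvested G-Fund receipts
# rose from about $0.9B to about $14B, matching the example above.
```

Treasury ran this calculation anew each day because, as noted above, issuances and redemptions outside its control made the room available under the ceiling unpredictable from one day to the next.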
The basic actions taken during the 2002 and the 1995/1996 debt issuance suspension periods were similar: G-Fund and Civil Service fund receipts were not invested, and Civil Service fund securities were redeemed earlier than needed to pay fund benefits and expenses. However, Treasury had not documented the policies and procedures that should be used to implement these actions. Further, the stated policies and procedures used to implement the actions taken on the Civil Service fund differed between the 2002 and the 1995/1996 debt issuance suspension periods. Accordingly, some confusion existed about how to implement these actions, and some errors were made that had to be corrected. More importantly, documented policies and procedures would allow Treasury to better determine the potential impacts of the policies and procedures it implements in managing the amount of debt subject to the limit.

The stated policies and procedures Treasury used to implement its actions related to the Civil Service fund during the second 2002 debt issuance suspension period differed from those used in the 1995/1996 debt issuance suspension period in the following ways:

• Current-year securities were redeemed earlier than normal during the second 2002 debt issuance suspension period, while long-term securities were redeemed earlier than normal during the 1995/1996 debt issuance suspension period.

• Accrued interest was used in the calculation of the securities that were eligible to be redeemed earlier than normal during the second 2002 debt issuance suspension period, while accrued interest was not considered in the calculation of securities redeemed during the 1995/1996 debt issuance suspension period.

As discussed below, the policies and procedures used in 2002 and 1995/1996 have different impacts on Treasury's flexibility to manage the amount of debt subject to the statutory debt limit. Two basic policies and procedures can be used to redeem Civil Service fund securities earlier than normal. The normal redemption policy, which involves redeeming current-year securities first, was used during the second 2002 debt issuance suspension period. For example, when Treasury redeemed about $4 billion earlier than normal on May 16, 2002, the securities selected were those that matured on June 30, 2002. During the 1995/1996 debt issuance suspension period, the early redemptions were made from long-term securities that matured about 14 years later.

The difference between the two approaches in their effect on Treasury's ability to manage the amount of outstanding debt can be significant when a debt issuance suspension period also includes a date on which securities mature. This could have occurred during the second 2002 debt issuance suspension period, as the Civil Service fund had more than $45 billion of Treasury securities scheduled to mature on June 30, 2002. Should a debt issuance suspension period cover a June 30 rollover date, the securities selected for early redemption can have a significant impact on the amount of maturing securities, as shown in tables 2 and 3. The amount of maturing securities to be reinvested is important because, as in the case of the G-Fund, Treasury does not have to reinvest the maturing Civil Service fund securities during a debt issuance suspension period. This, in turn, allows Treasury to take other actions, such as investing other Funds' receipts or issuing securities to the public to raise cash.
As illustrated in tables 2 and 3, the amount of maturing securities to be reinvested can have a significant impact on Treasury's debt management options. For example, (1) if the Civil Service fund had $48 billion of maturing Treasury securities and (2) Treasury needed to invest $52 billion of other Funds' receipts that could not legally remain uninvested on June 30, then by not reinvesting the Civil Service fund's maturing current-year securities Treasury could invest all but $4 billion of these receipts (see table 2). Treasury would then need to find some other method of generating room under the debt ceiling in order to invest the remaining $4 billion. On the other hand, if Treasury had redeemed long-term securities, then the $52 billion of other Funds' receipts could have been invested by simply not reinvesting any of the Civil Service fund's maturing securities (see table 3).

During the second 2002 debt issuance suspension period, Treasury expected to make about $50 billion in interest payments to the Funds, excluding the Civil Service fund and the G-Fund, on June 30, 2002. Had Treasury redeemed the long-term securities rather than the current-year securities, the resulting $52 billion of maturing Civil Service fund securities would have been adequate to fully invest the $50 billion of interest payments. This assumes that Treasury would have decided to suspend the reinvestment of these maturing securities and use the resulting room under the debt ceiling to invest the interest payments to the other Funds. On the other hand, by redeeming short-term securities, the $48 billion of maturing Civil Service fund securities available would not have been adequate to fully invest the interest payments, and Treasury would have had to obtain $2 billion of room under the debt limit from other sources, such as the G-Fund.

Treasury's normal redemption policy is to include the accrued interest on the security that is being redeemed when determining the amount of principal that should be redeemed. For example, if Treasury needed to redeem securities to make a $4 billion payment and $3,950 million of securities had earned $50 million of interest, then Treasury would need to redeem only $3,950 million of securities because the accrued interest would make up the difference between the payment to be made and the securities redeemed. During the second 2002 debt issuance suspension period, Treasury used the accrued interest when it redeemed Civil Service fund securities early and when it redeemed funds associated with one of the early redemptions that had been reinvested. The interest payments associated with these redemptions totaled about $84 million. However, during the 1995/1996 debt issuance suspension period, which lasted 14 months, Treasury did not use the accrued interest in determining the amount of securities that should be redeemed. Including accrued interest in the calculation, as noted below, can have a significant impact on the amount of securities that are redeemed. This in turn affects the amount of securities Treasury can issue to the public for cash or issue to other Funds that have receipts that need to be invested. Table 4 provides a hypothetical example showing that the reduction in outstanding debt can be significantly lower when accrued interest is used in the computation of securities redemptions. For purposes of this table, we assumed a 14-month debt issuance suspension period.
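The arithmetic of the normal redemption policy can be stated directly: the principal redeemed equals the payment due less the accrued interest paid out at redemption. The sketch below works through the hypothetical $4 billion payment described above; the figures are the report's own, and the code is only an illustration.

```python
# Illustration of Treasury's normal redemption policy: accrued interest on
# the securities counts toward the payment, so less principal is redeemed.
# Figures (in millions of dollars) are the hypothetical ones used above.

def principal_redeemed(payment_due, accrued_interest):
    """Principal to redeem = payment due minus accrued interest paid out."""
    return payment_due - accrued_interest

payment = 4_000   # $4 billion payment to be made
accrued = 50      # $50 million of interest earned on the securities

counting_interest = principal_redeemed(payment, accrued)  # 3,950
ignoring_interest = principal_redeemed(payment, 0)        # 4,000 (1995/1996 practice)

print(f"Principal redeemed, counting accrued interest: ${counting_interest:,}M")
print(f"Principal redeemed, ignoring accrued interest: ${ignoring_interest:,}M")
# Counting accrued interest leaves $50M more principal outstanding, which
# is why table 4 shows a smaller reduction in outstanding debt when accrued
# interest is included in the computation.
```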
A number of factors affect the amount of interest that is associated with a given redemption. For example, the length of the debt issuance suspension period affects the amount of funds subject to early withdrawal—the more funds withdrawn, the greater the interest calculation. Another important factor is the time of year that the redemption is made. Since December 31 and June 30 are semiannual interest payment dates, securities redeemed in January and July will have significantly less interest associated with them than similar securities redeemed in May and November.

Treasury has not documented the policies and procedures it used to implement the actions that it takes during a debt issuance suspension period. Although the actions that are allowed are well defined in law (e.g., suspending Civil Service fund and G-Fund investments and redeeming Civil Service fund securities earlier than normal), the policies and procedures needed to implement them are not documented. Our review disclosed some cases in which the lack of documented policies and procedures contributed to confusion and errors that subsequently had to be corrected. As stated in Standards for Internal Control in the Federal Government, all transactions and other significant events need to be clearly documented, and documentation should be readily available. The limited number of people involved in, and the complex nature of, managing the debt during a debt issuance suspension period further support the need to document the policies and procedures to be implemented. As noted above, policies and procedures can have an impact on managing the debt during a debt issuance suspension period. Furthermore, the policies and procedures developed should identify which office is authorized to approve any modifications to them.

Treasury officials noted that the changes to the stated policies and procedures used during the 2002 debt issuance suspension periods made the operations more consistent with those Treasury uses during its normal operations. They also noted that since the 1995/1996 debt issuance suspension period, Treasury has implemented a new financial management system that allows it to use a more sophisticated approach to ensuring that the Civil Service fund is adequately compensated for any losses incurred. Therefore, the Treasury officials believe that the current stated policies and procedures are an improvement over those used in the 1995/1996 debt issuance suspension period. As discussed earlier in this report, the approaches used during the 2002 debt issuance suspension periods allowed Treasury to restore the fund balances. At the same time, due to the limited number of people involved and the complex nature of managing debt during a debt issuance suspension period, Treasury would benefit from documenting the necessary policies and procedures to be used in such situations. We noted that the lack of documented policies and procedures contributed to some confusion and some errors that were subsequently corrected, as necessary. The following errors occurred during the second 2002 debt issuance suspension period:

• When Treasury decided to redeem Civil Service fund securities earlier than normal, it initially redeemed long-term securities. It subsequently reversed this transaction and redeemed current-year securities.
• When Treasury decided to reinvest funds associated with some of the early Civil Service fund redemptions, it did not include the accrued interest associated with those funds when they were subsequently redeemed to pay the June 3, 2002, Civil Service fund benefit payments. This was inconsistent with a similar reinvestment made on May 17, 2002, that was redeemed on May 20, 2002, in which Treasury included the accrued interest in its calculations.

• When Treasury restored the losses incurred by the Civil Service fund, it misclassified about $1.2 million of principal losses as interest losses.

Treasury's practice of keeping a dual set of accounts in its new financial management system—one to track actual debt issuance suspension period transactions and one to track transactions that would have occurred had there not been a debt issuance suspension period—is a good first step toward ensuring that losses caused by Treasury's actions can be restored. However, as a result of the restoration policies and procedures Treasury used during the 2002 debt issuance suspension period, according to Treasury's new financial management system, the amount of the Civil Service fund's security holdings was about $1.2 million less on June 28, 2002, than it would have been had the debt issuance suspension period not occurred. Nevertheless, as previously noted, the restoration made on June 30, 2002, fully compensated the Civil Service fund for all losses. Although these errors were not significant and were subsequently corrected as necessary, we believe that had Treasury established documented policies and procedures and effectively implemented them, the likelihood of these errors would have been greatly reduced.

During the 2002 debt issuance suspension periods, Treasury acted in accordance with its statutory authorities when it (1) suspended some investments of the Civil Service fund and G-Fund and (2) redeemed securities earlier than normal from the Civil Service fund. These and other actions discussed in this report allowed the government to avoid default on its obligations and to stay within the debt ceiling. Although some of the stated policies and procedures Treasury used to implement the actions it took on the Civil Service fund during the second 2002 debt issuance suspension period differed from those used in the 1995/1996 debt issuance suspension period, they were adequate to ensure that the Civil Service fund did not incur any losses after the debt issuance suspension period had ended, and Treasury was able to take the necessary restoration actions. However, Treasury's stated policies and procedures to be used for the Civil Service fund and G-Fund during a debt issuance suspension period have not been documented. Properly documenting the policies and procedures will (1) allow Treasury management to ascertain the impacts of these policies and procedures on Treasury's ability to manage the outstanding debt during a debt issuance suspension period and (2) if effectively implemented, reduce the chance for confusion and risk of errors should Treasury need to use the policies and procedures in the future.

We recommend that the Secretary of the Treasury direct the Under Secretary for Domestic Finance to document the necessary policies and procedures that should be used during any future debt issuance suspension period. Further, the document developed should clearly state which office is responsible for approving any modifications to the documented policies and procedures.
In written comments on a draft of this report, Treasury agreed that accurate documentation of its policies and procedures is a valuable objective and said that it believed it was desirable to maintain the preexisting policies and procedures for the redemption of securities and crediting of interest to the maximum extent possible. Treasury said that maintaining these standards makes the operations transparent and reduces confusion to the stakeholders of the funds affected by early redemption activities. Because it was unclear whether Treasury’s proposed development and documentation of guidelines for debt issuance suspension periods would address our recommendation to document the necessary policies and procedures that should be used during any future debt issuance suspension period, we held subsequent discussions with Treasury officials to clarify the department’s intentions. Treasury officials were concerned that developing detailed policies and procedures would limit their flexibility to manage the debt during debt issuance suspension periods because they believed such situations may have unique characteristics with distinct circumstances that need to be addressed. We explained that our recommendation did not call for documenting the circumstances under which the Secretary should invoke specific actions. For example, we did not call for stipulating (1) how to determine the length of a debt issuance suspension period, (2) which funds should be used by Treasury to help manage its operations, (3) when to exchange securities held by the Federal Financing Bank for securities held by the Civil Service fund, (4) when to recall compensating balances, or (5) when to suspend fund investments. On the other hand, we did envision that such policies and procedures would document how to implement the actions directed by the Secretary, including (1) how to implement a given course of action, such as redeeming Civil Service fund securities earlier than normal, and (2) how to fully compensate a fund for its losses. Taken from this perspective, Treasury officials generally agreed with the need to document the necessary policies and procedures relating to implementing actions determined by the Secretary. They did note, however, that such procedures might need to contain options in order to maintain the flexibility needed. For example, the policies and procedures might have two or more options on how to handle the redemption of Civil Service fund securities earlier than normal. Documenting policies and procedures that contain options would meet the intent of our recommendation. As we noted in our report, properly documenting the policies and procedures will (1) allow Treasury management to better ascertain the impact of these policies and procedures on Treasury’s ability to manage the outstanding debt during a debt issuance suspension period and (2) if effectively implemented, reduce the chance for confusion and risk of errors should Treasury need to use the policies and procedures in the future. Regarding three instances where the lack of documented policies and procedures contributed to what we characterized as some confusion and errors, Treasury did not agree that these instances were errors. As discussed below, we continue to believe that errors occurred. As a backdrop for this discussion, the recurring theme of our report is that Treasury did not have documented policies and procedures that should be used during a debt issuance suspension period. 
Based on discussions with cognizant Treasury officials, it was our understanding that Treasury intended to apply what it referred to as its standard redemption policies and procedures—those used in normal daily operations. In commenting on this report, however, Treasury stated that it initially modeled its actions during the 2002 debt issuance suspension period on actions it had taken during the 1995/1996 debt issuance suspension period but that, after further analysis, it decided to instead use its standard redemption policies and procedures. The 1995/1996 procedures for redeeming securities earlier than normal used long-term securities and did not consider accrued interest in determining the amount to be redeemed. In contrast, Treasury's standard redemption policies and procedures use current-year securities and consider accrued interest. Regardless of which approach Treasury opted to follow for the debt issuance suspension period transactions discussed in our report, Treasury did not consistently adhere to either approach and consequently made the following errors:

• When Treasury first redeemed securities earlier than normal, it redeemed long-term securities and included the accrued interest on the securities when determining the amount of principal that should be redeemed. Although the choice of long-term securities for early redemption was consistent with the practices used during the 1995/1996 debt issuance suspension period, including accrued interest in calculating the amount of principal to be redeemed was a departure from Treasury's 1995/1996 practices.

• For subsequent redemptions of reinvested securities, although Treasury used current-year securities, it was inconsistent in considering accrued interest when determining the amount of principal that should be redeemed. When Treasury redeemed the May 17, 2002, reinvestment on May 20, 2002, it redeemed current-year securities and included accrued interest in this calculation. This was consistent with its standard redemption policies and procedures. However, on June 3, 2002, when Treasury redeemed 10 reinvestments, it did not consider accrued interest. Instead, the June 3, 2002, redemption followed the practices used in the 1995/1996 debt issuance suspension period.

Regarding the third instance, the classification of $1.2 million of losses incurred, Treasury did not agree that its classification of this amount as interest losses was in error. As discussed in our report, the dual set of accounts maintained by Treasury's new financial management system—one that tracks actual debt issuance suspension period transactions and one that tracks transactions that would have occurred had there not been a debt issuance suspension period—clearly showed that the principal balances in the Civil Service fund differed by $1.2 million on June 28, 2002. As such, we concluded that when Treasury restored the losses incurred by the Civil Service fund, it misclassified about $1.2 million of principal losses as interest losses. As stated in our report, these errors were not significant and were subsequently corrected as necessary; however, we believe that had Treasury established documented policies and procedures and effectively implemented them, the likelihood of these errors would have been greatly reduced. Specific technical comments provided orally by Treasury were incorporated in this report as appropriate.
We are sending copies of this report to the chairmen and ranking minority members of the Senate Committee on Appropriations; the Senate Committee on Governmental Affairs; the Senate Committee on the Budget; the Subcommittee on Treasury and General Government, Senate Committee on Appropriations; the Senate Committee on Finance; the House Committee on Appropriations; the House Committee on Government Reform; the House Committee on the Budget; the House Committee on Ways and Means; and the Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations. We are also sending copies of this report to the Under Secretary for Domestic Finance, the Inspector General of the Department of the Treasury, the Director of the Office of Management and Budget, and other agency officials. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.

The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on this recommendation to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report.

If I can be of further assistance, please call me at (202) 512-3406. Should you or members of your staff have any questions concerning this report, please contact Mr. Chris Martin, Senior-Level Technologist, at (202) 512-9481 or Ms. Louise DiBenedetto, Assistant Director, at (202) 512-6921.

We have previously reported on aspects of Treasury's actions during the 1995/1996 debt issuance suspension period and earlier debt ceiling crises in the following reports:

Debt Ceiling: Analysis of Actions during the 1995-1996 Crisis. GAO/AIMD-96-130. Washington, D.C.: August 30, 1996.

Information on Debt Ceiling Limitations and Increases. GAO/AIMD-96-49R. Washington, D.C.: February 23, 1996.

Debt Ceiling Limitations and Treasury Actions. GAO/AIMD-96-38R. Washington, D.C.: January 26, 1996.

Social Security Trust Funds. GAO/AIMD-96-30R. Washington, D.C.: December 12, 1995.

Debt Ceiling Options. GAO/AIMD-96-20R. Washington, D.C.: December 7, 1995.

Civil Service Fund: Improved Controls Needed over Investments. GAO/AFMD-87-17. Washington, D.C.: May 7, 1987.

Opinion on the legality of the plan of the Secretary of the Treasury to disinvest the Social Security and other trust funds on November 1, 1985, to permit payments to beneficiaries of these funds. B-221077.2. Washington, D.C.: December 5, 1985.

A New Approach to the Public Debt Legislation Should Be Considered. FGMSD-79-58. Washington, D.C.: September 7, 1979.
In connection with fulfilling our requirement to audit the financial statements of the U.S. government, we audit the Schedules of Federal Debt Managed by the Bureau of the Public Debt, which includes testing compliance with the debt ceiling. To assist us in this testing and because of the nature of and sensitivity towards actions taken during a debt issuance suspension period, we (1) developed a chronology of significant events, (2) analyzed the financial aspects of Treasury's actions taken during the debt issuance suspension periods and assessed the legal basis of these actions, and (3) analyzed the impact of the policies and procedures used by Treasury to manage the debt during the debt issuance suspension periods.

In April and May 2002, the Department of the Treasury announced two debt issuance suspension periods because certain receipts could not be invested without exceeding the statutory debt ceiling of $5.95 trillion. The first debt issuance suspension period occurred from April 4 to April 16, 2002, and involved use of the Government Securities Investment Fund (G-Fund). The second debt issuance suspension period occurred from May 16 to June 28, 2002, and involved the use of the Civil Service Retirement and Disability Trust Fund (Civil Service fund) and the G-Fund. During both debt issuance suspension periods, Treasury suspended some investments and reinvestments of the G-Fund's receipts and maturing securities. During the second debt issuance suspension period, Treasury also took the following actions related to the Civil Service fund: (1) it redeemed about $4 billion in Treasury securities held by the Civil Service fund before they were needed to pay benefits and expenses, and (2) it suspended the investment of about $2 billion of trust fund receipts. These actions were consistent with legal authorities provided to the Secretary of the Treasury.

Although the actions that are allowed during a debt issuance suspension period are well defined in law, the policies and procedures needed to implement such actions are not documented. Our review disclosed some cases where the lack of documented policies and procedures contributed to confusion and errors that had to be corrected.
Nurse staffing is a critical part of health care because of the effects it can have on patient outcomes and nurse job satisfaction. According to VHA, its staffing methodology aims to maximize nurses' productivity and efficiency, while providing safe patient care by ensuring appropriate nurse staffing levels and skill mix. VHA's nurse workforce is primarily composed of RNs, licensed practical nurses (LPN), and nursing assistants (NA). These nurses provide care—ranging from primary care to complex specialty care—in inpatient, outpatient, and residential care settings at 151 VAMCs across the country.

In addition to the size of the nursing workforce, the nursing skill mix—i.e., the share of each type of nurse (RNs, LPNs, or NAs) of the total—is an important component of nurse staffing. Units vary in their nursing skill mix, depending on the needs of their patients. For example, intensive care units require higher intensity nursing and may have a skill mix that is primarily composed of RNs compared to other types of nursing units that may provide less complex care. (See table 1 for a general description of the types of nursing staff positions, responsibilities, and educational requirements.)

Although the number of nurses at VAMCs increased from FY 2009 to FY 2013, VHA ranked nurses as the second most challenging occupation to recruit and retain. Specifically, the total number of nurses at VAMCs increased 13 percent from 72,542 in FY 2009 to 81,940 in FY 2013, with similarly proportionate increases within each position type—RN, LPN, and NA. During the same time period, the annual nurse turnover rate at VAMCs—the percentage of nurses who left VHA through retirement, death, termination, or voluntary separation—increased from 6.6 percent to 8.0 percent. Although RNs had the lowest turnover rate among nurses, VHA noted particular difficulty recruiting and retaining RNs with advanced professional skills, knowledge, and experience, such as RNs that provide services in medical and surgical care units. VHA projects that approximately 40,000 new nurses will be needed through FY 2018 to maintain current staffing levels and to meet the needs of veterans. See Department of Veterans Affairs, Veterans Health Administration, 2013 Workforce Succession Strategic Plan (Washington, D.C.: 2013).

To help ensure adequate and qualified nurse staffing at VAMCs, in July 2010, VHA issued VHA Directive 2010-034: Staffing Methodology for VHA Nursing Personnel. ONS, the VHA office responsible for providing national policies and guidelines for all VHA nursing personnel, led the development of the nurse staffing methodology, which began in 2007. (See fig. 1.) To implement the methodology, each VAMC is required to (1) develop a VAMC-wide staffing plan for its nurse workforce, comprised of individual unit-level staffing plans, and (2) execute that plan. (See fig. 2 for an outline of the process for implementing VHA's nurse staffing methodology.)

Each VAMC unit is to develop a staffing plan outlining recommendations on the appropriate nurse staffing levels and skill mix needed in that unit to support high-quality patient care in the most effective manner possible. Specifically, staffing plans are to be developed using expert panels and a data-driven analysis of nursing hours per patient day (NHPPD). VAMC nurse executives—members of senior management within each VAMC—are responsible for implementing the staffing methodology in their respective VAMCs.
Expert panels: advisory groups—at the unit and facility level—of VAMC staff with in-depth knowledge of nurse staffing needs. The use of expert panels is intended to apply principles of shared governance, which allows nurses to have influence over the delivery of patient care and involves stakeholders from across the VAMC. VAMC nurse executives are responsible for ensuring that the unit-based expert panels represent all nursing types (RN, LPN, NA) and for developing the VAMC's facility expert panel.

Data-driven analysis of NHPPD: involves determining the number and skill mix of nurses needed for each unit by calculating the number of direct patient care nursing hours provided for all patients on that unit during a 24-hour period. (A simplified illustration of this calculation appears below.) The use of NHPPD represents a move away from the more traditional nurse-to-patient ratios that assign a certain number of patients to each nurse. Some research suggests that NHPPD can better capture changes in nurses' workloads and case mix resulting from admissions and discharges, as well as patient acuity levels, which can impact the amount of time nurses spend with each patient.

After developing the staffing plan, each unit-based expert panel presents its plan, which includes staffing recommendations, to the VAMC's facility expert panel. Those staffing recommendations may include, for example, initiatives to change the number and skill mix of nurses needed for each shift; change the number of nurses required for coverage during predicted absences, such as annual and sick leave; and develop support services for nurses, such as designated individuals to transport patients to other areas of the facility as needed. The facility expert panel—comprised of staff from across the VAMC—reviews each unit-based panel's staffing plan and aggregates all of the unit plans into one VAMC-wide staffing plan. The VAMC nurse executive reviews the VAMC-wide staffing plan and forwards it to the VAMC director for review and approval. Once approved, the VAMC then begins execution of the initiatives outlined in the VAMC-wide staffing plan. The directive requires each VAMC to conduct an ongoing staffing analysis to evaluate staffing plans annually, at a minimum, and for VAMC directors to incorporate projected staffing needs into their annual budget review.

The staffing methodology is being implemented in three phases. In Phase I, VAMCs were to implement the staffing methodology in all inpatient units no later than September 30, 2011. In Phase II, VAMCs are to implement the staffing methodology for all other units, including the operating room, emergency department, and spinal cord injury units. ONS has completed the Phase II pilot for operating room units, and VAMCs are expected to implement the methodology in their operating room units by October 1, 2014. Deadlines for the implementation in other Phase II units have not been set. In Phase III, VAMCs are to use an automated system developed by VHA that (1) merges VHA staffing data used in the staffing methodology and other VHA data, such as human resource data, into one data system, and (2) incorporates the data into staffing-related reports, such as quality-of-care reports. A deadline for Phase III implementation has not been set.

In May 2014, the VA OIG found that VAMCs in its review varied in their implementation of the staffing methodology.
Specifically, the VA OIG reported that 8 of the 28 VAMCs reviewed had not fully implemented all components of the staffing methodology by September 2013, 2 years past the implementation date required by the VHA directive. As these findings were similar to those of its April 2013 report, the VA OIG stated in its 2014 report, “We re-emphasize the need for all facilities to fully implement the methodology and accurately address patient needs with safe and adequate staffing.”

Adequate and qualified nurse staffing at VAMCs is required to provide effective and continuous patient care and to maintain a stable and engaged workplace. The importance of nurse staffing to patient outcomes and nurse job satisfaction has been emphasized by various entities, including The Joint Commission, American Nurses Association, Institute of Medicine, and Agency for Healthcare Research and Quality. Additionally, research has linked the adequacy and qualifications of nurse staffing to patient outcomes and nurse job satisfaction. For example, studies have shown:

A link between the adequacy of nurse staffing and patient outcomes, particularly in inpatient units, such as intensive care and surgical units. For example, medication errors, pressure ulcers, hospital-acquired infections, pneumonia, longer-than-expected stays, and higher mortality rates each have been associated with inadequate nurse staffing.

A link between the qualifications of nursing staff and patient outcomes. For example, one study found that patients cared for in units utilizing more licensed and experienced nursing staff (RNs and LPNs) and fewer unlicensed aides (NAs) had shorter lengths of stay. Other studies linked baccalaureate-prepared nurses to lower mortality rates.

A link between nurse staffing and job satisfaction. For example, some studies have linked low job satisfaction to heavy workloads and an inability to ensure patient safety. Other studies found that improving nurse staffing and working conditions may simultaneously reduce nurses' burnout, risk of turnover, and the likelihood of medical errors, while increasing patients' satisfaction with their care.

Non-VA health care organizations use various approaches to ensure effective nurse staffing. For example, some use fixed nurse-to-patient ratios while others use adjustable, unit-specific minimum staffing levels, and there have been several efforts to address nurse staffing using these different approaches. For example, some states have passed legislation or adopted regulations that mandate specific nurse-to-patient ratios, limiting the number of patients cared for by an individual nurse. Other states have passed legislation or adopted regulations addressing nurse staffing without mandating specific ratios or staffing levels. For example, some states require hospitals to have committees responsible for developing unit staffing plans or require public reporting of staffing.

All seven VAMCs in our review developed staffing plans using VHA's nurse staffing methodology and have taken steps to execute them. However, VAMCs experienced problems in both the development and execution of their staffing plans. Improvements in nurse staffing were reported by some of the VAMCs that had taken steps to execute the staffing plans.

The seven VAMCs in our review have implemented VHA's nurse staffing methodology; specifically, each of these VAMCs has developed a facility-wide staffing plan, comprised of unit-level staffing plans for inpatient units, and has taken steps to execute it.
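As the simplified illustration referenced earlier, the sketch below computes a unit's NHPPD and skill mix from hypothetical shift data. The staffing figures, shift lengths, and the use of a midnight census as the patient-day count are illustrative assumptions, not VHA's actual data elements or business rules.

```python
# A minimal sketch of the NHPPD calculation described in the directive:
# direct patient care nursing hours divided by patient days over a
# 24-hour period. All inputs below are hypothetical.

def nhppd(direct_care_hours, patient_days):
    """Nursing hours per patient day for one unit and one 24-hour period."""
    return direct_care_hours / patient_days

# Hypothetical 24-hour period on one inpatient unit:
shifts = [
    # (staff type, number on shift, direct-care hours each)
    ("RN", 6, 12.0),
    ("RN", 5, 12.0),
    ("LPN", 2, 12.0),
    ("NA", 3, 12.0),
]
total_hours = sum(count * hours for _, count, hours in shifts)
midnight_census = 24  # patients on the unit, used here as the patient-day count

print(f"Direct care hours: {total_hours}")
print(f"NHPPD: {nhppd(total_hours, midnight_census):.1f}")

# Skill mix: each staff type's share of total direct-care hours.
for stype in ("RN", "LPN", "NA"):
    share = sum(c * h for t, c, h in shifts if t == stype) / total_hours
    print(f"{stype} share of hours: {share:.0%}")
```

In practice, the directive calls for a much wider range of inputs, such as admissions, transfers, discharges, and human resources data, which is one reason VAMCs without automated staffing systems found the calculation time-consuming, as discussed below.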
Although each of the seven VAMCs in our review developed a staffing plan for FY 2013, only one had developed a plan per VHA's directive—that is, used both expert panels and analysis of NHPPD—by September 30, 2011, the deadline specified in the directive. (See table 2.) Across all 151 VAMCs, according to ONS officials, implementation of the nurse staffing methodology varied; for example, some VAMCs completed the development of their staffing plans during FY 2013, while others were only beginning the development process.

In addition to developing staffing plans, all seven VAMCs in our review had taken steps to execute their respective staffing plans. For example, VAMCs had taken steps to execute initiatives to increase the number of unit nurses or change the skill mix of nurses to address patient care needs. (See table 3 for examples of VAMCs' staffing plan initiatives.) VAMC officials told us there are many factors that could affect the execution of staffing plan initiatives, such as available resources, the amount of time needed, and other strategic priorities.

Officials and nursing staff from the seven VAMCs in our review told us they experienced problems developing and executing staffing plans. (See table 4 for examples of problems.) Some VAMCs were able to devise solutions; however, in many cases, the problems have persisted.

Problems Developing Staffing Plans. Staff and officials from each of the seven VAMCs in our review reported facing problems developing staffing plans.

Lack of necessary data resources. Staff and officials at six of the seven VAMCs in our review said they did not have the appropriate data resources to effectively calculate NHPPD as required by VHA's staffing methodology directive. Specifically, the directive instructs VAMC staff to calculate NHPPD using a wide range of data, such as number of admissions, transfers, and discharges; hours used for planning and treatment; and human resources data. We found that staff and officials needed to use multiple sources to collect the necessary data, in some cases manually, a process they said was time-consuming and potentially error-prone and required data expertise they did not always have. For example, at one VAMC, the staffing methodology coordinator—a VAMC official who assists with the administrative tasks associated with the implementation process—told us she struggled with some data analysis techniques, such as creating a spreadsheet to help track staffing data, but the VAMC did not have the financial resources to hire additional data analysts to support the methodology. In contrast, officials from two VAMCs in our review told us staffing methodology coordinators were assigned in part based on their data analysis expertise.

Difficulty completing and understanding training. Staff from six of the seven VAMCs in our review said the ONS training on the methodology was time-consuming to complete and difficult to understand. In 2011, ONS switched from instructor-led, group training to individual, computer-based PowerPoint training. Many unit staff reported that because the computer-based training took many hours to complete, it was difficult to find the time to complete it while also carrying out their patient care responsibilities. They told us they often had to start and stop the training to attend to patients, which diminished its effectiveness.
Further, the course's complex material was hard to absorb through an individual, computer-based course, with many staff suggesting their understanding would have been greatly improved with an instructor-led, group course where they could ask questions, ensure consistency of learning, and build camaraderie among unit expert panel members. To address the difficulties in completing and understanding the training, one VAMC developed its own instructor-led, group training provided to all its units.

Time required. Staff and officials at all seven of the VAMCs in our review reported that developing staffing plans required a lot of staff time due to the complexity of the process. In particular, they said gaining an understanding of the methodology, collecting the necessary data, convening the unit expert panels, and preparing presentations for the facility expert panel were time-intensive tasks that, in some circumstances, took time away from patient care. For example, members from one unit expert panel estimated they spent, in total, about 160 hours (4 weeks) developing the unit staffing plan during the first year the staffing methodology was implemented in their unit. Some VAMCs' staffing methodology coordinators developed specific processes designed to decrease the burden on nursing staff and improve efficiency. For example, they created templates for unit panel members to use in staffing plan development; such templates improved efficiency because unit panel members did not have to independently develop their presentation format. Further, facility expert panel members had to orient themselves to only one template, and were therefore able to more easily make facility-level comparisons and decisions.

Lack of communication within VAMC. Unit expert panel members at four of the seven VAMCs in our review said there was a lack of communication between nurses and VAMC leadership regarding the status of the staffing plans, including plans for execution of the staffing plan initiatives. Staff at one of these VAMCs said they had not received any feedback on their FY 2012 or FY 2013 unit staffing plans; they added that developing the FY 2013 staffing plan without getting any feedback on the prior year's plan felt “frustrating.” In contrast, at another VAMC, officials told us that all unit staff—not just staff involved in the unit panel—received regular updates on the nurse staffing process at their monthly unit staff meetings.

Difficulty integrating unit staff into expert panels. Staff and officials at three VAMCs described challenges in integrating unit staff into expert panels. Some unit panel members told us that although they were considered members of their respective unit panels, they were not significantly involved in the development of their units' staffing plans. For example, a unit panel member said the VAMC's staffing methodology coordinator calculated the unit's NHPPD, developed the corresponding unit staffing plan, and presented the unit staffing plan to the respective facility expert panel almost entirely without her unit's input. As a result, there was limited involvement of the unit panel members in the expert panel and, consequently, limited shared governance. Officials at these VAMCs said that from their perspectives, there was interest in the methodology among unit panel members, but sometimes it was difficult for these staff to attend relevant meetings because of patient needs.
In contrast, unit panel members at other VAMCs in our review described how they were fully integrated into the unit panels. They described in detail the data analyses they prepared, the meetings they participated in, and their experiences presenting their unit staffing plans to the facility expert panel. Members from one unit panel told us it was helpful to be able to use data to validate the unit's staffing and share this data with the facility expert panel—VAMC staff “beyond the typical chain of command.” Officials at this VAMC noted that unit panel members felt “empowered” to present their work to the facility expert panels.

Problems Executing Staffing Plans. Staff and officials from six of the seven VAMCs in our review noted problems executing staffing plans once approved by the VAMC director.

Hiring delays. Staff and officials from six of the seven VAMCs in our review said they often faced hiring delays that impacted their ability to execute staffing plan initiatives. Some VAMC staff noted it could take more than 6 months to fill unit vacancies. Although staff from one VAMC said hiring was slowed by the dearth of qualified nurses in their community, staff from other VAMCs in our review said the supply of nurses was not the problem; rather, the problem was the VHA hiring process, which took months to complete for each candidate. Additionally, VAMC staff noted that new hires also needed to complete necessary internal trainings before joining a unit full time, which added to the delays, and that some new hires were hurried through this training process because their units were so desperate to have them on staff.

Budget constraints. Staff and officials from five of the seven VAMCs in our review said their VAMCs were not able to fully execute their staffing plans due to budget constraints. For example, at one VAMC, one of the approved staffing plan initiatives was the hiring of a large number of nurses for its units, in part to address the VAMC's inability to increase its nursing staff over a period of years. An official told us that, due to budget constraints, the VAMC was going to phase in this hiring initiative over the next few years.

Some VAMC staff reported improvements in the adequacy and qualifications of their units' nursing staff when nurse staffing plan initiatives were executed. For example, at two VAMCs at which the number of nurses was increased or support services for nurses, such as patient transporters or sitters, were put in place, unit staff said the adequacy of the nursing staff had improved. Furthermore, improvements in the qualifications of unit nursing staff were noted by staff in VAMC units where, for example, skill-mix changes were made or the amount of floating of nurses from their home unit to an unfamiliar unit was decreased. Both VAMC officials and unit staff noted improvements in staffing when nurses' qualifications were more appropriately matched to the right level of work (for example, having RNs rather than LPNs available to provide more complex patient care) and to the right units (for example, the units for which they were hired and trained).

Some VAMC staff said they also had seen improvements in patient outcomes and nurse job satisfaction. For example, nursing staff at one VAMC said that after creating sitter positions—as indicated by their VAMC's staffing plan—they saw a decrease in patient falls. The staff said sitters were able to monitor patients more closely, and as a result, patients were less likely to fall during walks to the bathroom, for example.
Similarly, nursing staff in a mental health unit at another VAMC said that by having more staff they had decreased their restraint use because there were more staff available to meet veterans' needs. Additionally, nursing staff we interviewed at one VAMC that had made staffing changes based on staffing plans said they were better able to provide the type of nursing care “veterans deserve,” and this made them feel more positive about their work. Some nurses at this VAMC also said the shared governance aspect of the methodology was empowering, which, combined with their enhanced understanding of staffing at their VAMC, helped improve their overall job satisfaction.

However, some VAMC unit staff reported that unit nurse staffing continued to be inadequate and that nurse unit assignments and job duties were not always appropriate for their qualifications. For six of the VAMCs in our review, staff from at least one of the units we interviewed said their unit staffing levels were inadequate. Staff said ensuring adequate staffing was particularly challenging when there were unplanned staff absences and they had to “scramble” to provide coverage. Some unit staff noted that this situation often resulted in units forcing nurses to work overtime or nurses floating to other units where they did not always have the qualifications to provide care. At some VAMCs, staff said there were increased staff injuries due to inadequate staffing. Furthermore, staff at one VAMC reported that where there had not been any changes made based on the unit staffing plans, their units continued to be understaffed to the detriment of both patient care and their job satisfaction.

Our review of VHA's oversight of its nurse staffing methodology found that some internal controls—those related to environmental assessment, a plan for monitoring compliance, evaluation, timeliness of communication, and organizational accountability—are limited. The implementation of internal controls is necessary for ensuring initiatives achieve intended outcomes and for minimizing operational problems. Without these internal controls in place, VHA cannot ensure that its methodology meets department goals, such as establishing a standardized methodology for determining adequate and qualified nurse staffing at all VAMCs, and ultimately, having nurse staffing that is adequate to meet veterans' health care needs.

Environmental Assessment. VHA did not comprehensively assess each VAMC to ensure preparedness for implementing its methodology, including having the necessary technical support and resources, prior to the issuance of the methodology directive in 2010. Furthermore, as of August 2014, VHA did not have a plan for assessing whether VAMCs have the necessary resources to execute their approved nurse staffing plans. Under federal internal control standards, successful organizations monitor their internal and external environments continuously and systematically, and by building environmental assessments into the strategic planning process, are able to stay focused on long-term goals even as they make changes to achieve them. VHA did not assess VAMCs' technical resources to determine if all VAMCs would be able to successfully implement the methodology. For example, the directive recommended that VAMCs use comparative data from external sources, such as the National Database of Nursing Quality Indicators (NDNQI), when analyzing unit-level staffing data.
According to some VAMC officials, due to the costs and complexity of contracting, not all VAMCs had access to this data source. Each VAMC was responsible for establishing its own contract to purchase access to NDNQI data, which some VAMC officials said was expensive and time-consuming to set up, noting that it would have been helpful to have assistance in coordinating the contracting process. Officials from ONS reported that they are discussing the possibility of having a VHA-wide contract so that all VAMCs would have access to NDNQI data.

In addition to access to comparative data, according to the directive, VAMCs need appropriate data system capabilities—in particular an automated staffing system for information such as patient admission, transfer, and discharge data, and human resources data—to facilitate implementation of the data-driven methodology and calculation of NHPPD. However, not all VAMCs in our review had an automated staffing system in place even 3 years after the release of the directive. Officials at a VAMC without an automated staffing system told us staff were collecting and inputting data, in many cases manually, into a spreadsheet to calculate NHPPD, and that this process was extremely time-consuming and potentially error-prone. ONS officials said they knew VAMCs needed automated staffing systems when the directive was published in 2010. However, they thought Phase III—a national automated staffing system—would be forthcoming, and did not fully review whether VAMCs had alternative data capabilities to assist them in the interim.

When we asked how they assessed the readiness of VAMCs for implementation of the methodology, ONS officials told us that they did not do this as well as they should have for Phase I implementation in inpatient units, despite ONS's 2009 Phase I pilot evaluation to better understand the potential capabilities and weaknesses of VAMCs. According to ONS, it still has not conducted such an assessment of all VAMCs even though it has moved forward with planning the national rollout of Phase II in operating room, emergency department, and spinal cord injury units. ONS, however, has assessed some of the available resources of the sites that have participated in the pilots for Phase II in spinal cord injury units. For example, ONS officials told us that they asked these participating sites questions about their access to data and nurse turnover within the pilot units to determine their ability to fully and successfully participate in the pilot. According to ONS officials, all sites reported that they were able to fully participate in the pilot. By not comprehensively assessing the VAMCs' technical support and resources to determine if they were prepared to implement the methodology, VHA had no assurance that the VAMCs would be successful.

Plan for Monitoring Compliance. ONS did not develop a plan for monitoring VAMCs to ensure they were in compliance with the implementation and ongoing administration of Phase I of the methodology. Under federal internal control standards, plans should be designed to ensure that ongoing monitoring occurs in the course of normal program operations, and managers should identify performance gaps in compliance with program policies and procedures. ONS reported implementing two mechanisms for obtaining information from VAMCs—a 2013 questionnaire sent to all VAMCs and monthly methodology conference calls with VAMCs—but neither was an adequate mechanism for comprehensively assessing the compliance of each VAMC.
The questionnaire, sent nearly 2 years after the deadline for implementation of Phase I of the methodology, asked VAMCs to report their status of staffing plan development, but because of lack of clarity in the questions asked, inconsistency in medical center responses, and lack of validation of the self-reported responses, it was not reliable for determining the extent to which VAMCs had developed staffing plans. ONS officials reported that they have no plans to survey VAMCs again on their status of developing staffing plans. Furthermore, the monthly methodology conference calls that started when the directive was published in 2010 did not provide an adequate mechanism for monitoring compliance because they too relied on VAMCs to self-report problems. A VAMC official told us that participants were reluctant to raise problems, such as not developing staffing plans on time, during these monthly calls. In addition, the directive requires VAMCs to evaluate their staffing plans for Phase I annually, or more frequently if needed, but ONS officials told us that they did not have a systematic plan for monitoring compliance with this evaluation beyond the 2013 questionnaire and the monthly methodology conference calls. Moving forward, ONS officials said they plan to review whether all VAMCs implemented both the unit and facility expert panels but, as of August 2014, had no detailed plan or timeline for conducting this review or for monitoring VAMCs' ongoing evaluation of their staffing plans. The lack of a plan for monitoring VAMCs' compliance with the implementation and ongoing administration of the methodology hinders VHA from being able to ensure that all VAMCs are staffing their nurses using the same, standardized methodology.

Evaluation. There have been limited evaluations of the methodology, and one of these evaluations has been significantly delayed. Under federal internal control standards, measuring performance allows organizations to track the progress they are making toward program goals and objectives and provides managers important information on which to make management decisions and resolve any problems or program weaknesses.

Evaluation of Phase I pilot (conducted in September 2009). ONS identified VAMC challenges with implementing the methodology, such as difficulties accessing data and staff nurses' overall lack of knowledge of the methodology process. The evaluation contained recommendations, such as developing a training guidebook and providing guidelines on the role of the expert panels, to improve the methodology process. According to ONS, most of the recommendations from this 2009 evaluation have been addressed; however, we found that weaknesses identified in the 2009 evaluation still existed for all of the seven VAMCs included in our review.

Evaluation of Phase I national implementation and training (began early 2014; preliminary results were expected August 2014). ONS did not begin an evaluation of the national implementation of the methodology until January 2014, more than 2 years after VAMCs were required to have implemented it, and, as of August 2014, the evaluation had still not been completed. According to ONS officials, the Phase I national evaluation was to review VAMCs' experiences during implementation, including a review of the training provided to VAMCs during that phase.
The lengthy delay in the evaluation of Phase I was potentially problematic because the ongoing difficulties that VAMCs have experienced during implementation may have been avoided or resolved more quickly if the evaluation results had been available and corrective actions put into place. VAMC staff we interviewed told us they have been struggling with components of the methodology since the directive was issued. For example, some VAMC staff expressed difficulty completing and understanding the data analysis process for calculating NHPPD. An earlier evaluation of the methodology could have helped identify this problem, as well as potential solutions to address it. Furthermore, the delay limited ONS's time to apply lessons learned from Phase I evaluations to the implementation of Phase II, portions of which are already nearly complete.

Phase II pilot training evaluation (began in early 2014; results expected November 2014). ONS is conducting an evaluation of the training that was provided to the VAMCs involved in the Phase II pilots in operating room, emergency department, and spinal cord injury units to determine if the training provided to these units needs to be changed in preparation for the national rollout. ONS officials told us that they have completed the operating room pilot; the national rollout of the methodology in operating room units in all VAMCs began in February 2014 and is expected to be completed by October 1, 2014. ONS officials also said that the pilot for the emergency department units has been completed but the pilot for spinal cord injury units has not; ONS has not scheduled deadlines for their national implementation. VHA's delays in completing evaluations of the methodology limit its ability to identify and resolve VAMC implementation and administration problems, and thus to help ensure successful rollouts of subsequent phases of the methodology.

Timeliness of Implementation and Communication. The long timeline for implementing the pilots and national rollouts of Phases I and II, as well as for evaluating Phase I of the staffing methodology—more than 7 years—and for communicating methodology-related information to VAMCs may have hindered the ability of VAMCs to develop their staffing plans and to execute the initiatives contained in those plans. Under federal internal control standards, timeliness in the development of a program or implementation of a policy is needed to maintain relevance and value in managing operations and making decisions. When information regarding a policy or program is not provided in a timely manner, there can be a loss of stakeholder support, which can affect how stakeholders make decisions. For example, staff from some VAMCs involved in the Phase II pilot stated that they believed the data and reports generated from the methodology were only a paper exercise because they had not gotten any feedback from ONS on next steps. ONS officials told us they have communicated information on the Phase II pilot, such as the status of the pilot and feedback obtained from the training sessions, through their monthly conference calls with VAMCs; however, based on our interviews, this information did not reach many staff at the VAMCs in our review that participated in the Phase II pilot. Furthermore, ONS officials have not adequately communicated to VAMCs the status of Phase III of the methodology—development of a national automated staffing system.
According to the directive, a national automated staffing system was to be developed to support VAMCs in the implementation of the methodology. Because this automated staffing system has yet to be developed as per the directive, officials from two VAMCs told us they bought their own systems, which helped them to effectively administer the methodology. ONS officials told us that at the time the directive was published in 2010, Phase III implementation was an aspirational goal. ONS officials said they had expected VHA data system teams to begin the process of developing a national automated system; however, it was not made a department goal and is not currently on the list of projects under consideration for funding. Having a variety of staffing systems, and thus inconsistent data variables across VAMCs, inhibits ONS's ability to adequately evaluate the effectiveness of the staffing methodology. If an automated staffing system is eventually developed under Phase III, VAMCs likely will have to dismantle the staffing systems they have created and restructure their data analysis processes, which will be time-consuming and costly. VHA's long timelines for the implementation and communication of methodology-related information put stakeholder support of the methodology at risk and increase the potential for duplication of efforts.

Organizational Accountability. VHA did not define areas of responsibility or establish the appropriate line of reporting within the framework of VA's management structure for the ongoing administration and oversight of the methodology. Under federal internal control standards, an agency's organizational structure should provide management with a framework for planning, directing, and controlling operations to achieve agency objectives; a good internal control environment requires that the agency clearly define key areas of authority and responsibility. VHA does not require VAMCs to submit any information or reports on the implementation and ongoing administration of the methodology to ONS or the VISNs. Such information, if it were shared, could be used to inform ONS of any systematic problems that necessitate changes to help ensure the continued viability of its methodology, as well as to identify any best practices that have been implemented by VAMCs across the country. ONS officials told us that they did not require the VAMCs to submit any such documentation to ONS because they made a conscious decision not to “micro-manage” the local process of nurse staffing.

Furthermore, VHA has not sufficiently utilized the VISN-level management structure in the implementation or ongoing administration of the methodology. While the methodology directive described a role for the VISNs, that role was limited to ensuring that resources are available to VAMCs as they try to staff their units; the directive did not mention a role in the implementation or ongoing administration of the actual methodology. As a result, VISNs have not been consistently aware of problems experienced by VAMCs in their region and have not provided support or education. In our interviews with VISN officials representing each of the seven VAMCs in our review, we found that three of the VISNs were not substantively involved in the implementation and ongoing administration of the methodology.
According to ONS, in many VISNs, discussions of staffing methodology implementation were minimal; rather than VISN leadership, it was the nurse executives who—in addition to their responsibilities within their individual VAMCs—had the responsibility of disseminating staffing methodology-related information to the VAMCs within the VISN. Staff from three VISNs that were more substantially involved in the implementation of the methodology provided oversight for the nurse staffing methodology and acted as liaisons for VAMC nurse executives on network-level issues. One VISN official we interviewed was developing oversight mechanisms for VAMCs in the region, including a requirement for nurse executives to submit a quarterly staffing report. According to the official, having such a reporting requirement at the VISN level would give the right amount of emphasis to the process and provide support to nurse executives implementing the methodology in the VAMCs. The quarterly report could also help inform VISN officials about issues with the methodology. This official was developing these mechanisms independently of ONS, but they could be considered potential best practices to be shared across all VISNs.

ONS officials told us they thought ideas or problems across VAMCs related to the methodology would be shared through the VAMC nurse executives. They also hoped that VISN leadership would be interested in the methodology and, as a result, schedule VISN-level briefings to aid in its implementation. VHA, however, did not specify either of these roles in the directive or take steps to ensure that they were occurring. Moving forward, ONS officials said they are considering developing a VISN-level staff position that would specifically focus on educating VAMCs within the region about the methodology and assisting them with implementing it. Without clearly defined roles and responsibilities within VA's organizational structure, VHA's ability to improve its oversight of the implementation and administration of the staffing methodology and provide VAMCs with additional resources to assist with problems is compromised.

As the number of veterans requiring care in VAMCs and the complexity of services needed by many of these veterans increase, the need for an adequate and qualified nurse workforce is increasingly critical. Although VHA's nurse staffing methodology was intended to provide a nationally standardized methodology for determining and ensuring adequate and qualified nurse staffing at VAMCs, its ability to do so across all 151 VAMCs is not likely to be realized unless existing weaknesses are addressed. Although some improvements in nurse staffing were reported with the implementation of the staffing methodology, the seven VAMCs in our review experienced problems developing and executing the related staffing plans, including problems pertaining to data resources, training, and communication. Many of these problems persist as the seven VAMCs continue to administer the methodology. We also found that VHA's oversight of the staffing methodology is limited and in many cases lacks sufficient internal controls, which could diminish VHA's ability to ensure an adequate and qualified nurse workforce.
In particular, VHA has not adequately assessed the needs or preparedness of VAMCs to effectively implement the methodology, does not have a formal mechanism to ensure VAMCs' ongoing compliance with the methodology, has not clearly defined a role in oversight for VISNs, and does not regularly communicate with VAMCs or VISNs to cull and share best practices system-wide. Furthermore, delays in VHA's evaluations of early phases of the staffing methodology have made them too late to be useful in designing future phases or helping VAMCs with implementation. Because the implementation and administration of the nurse staffing methodology is ongoing, it is critical that VHA improve its oversight to help ensure an adequate and qualified nurse workforce across all VAMCs.

To help ensure adequate and qualified nurse staffing at VAMCs, we recommend that the Secretary of Veterans Affairs direct the Interim Under Secretary for Health to enhance VHA's internal controls through the following five actions:

1. Provide support to all VAMCs to meet the objectives of the VHA staffing methodology, including (a) training that more clearly aligns with the needs of VAMC staff and (b) a systematic process for collecting and disseminating staffing best practices;

2. Conduct an environmental assessment of all VAMCs, including an assessment of their data analysis needs, to determine their preparedness to implement the remaining phases of the methodology, and use that information to help guide and provide the necessary support for the implementation of the remaining phases and for the ongoing administration of the methodology;

3. Develop and implement a documented process to assess VAMCs' ongoing compliance with the staffing methodology, including assessing VAMCs' execution of staffing plans and more clearly defining the role and responsibilities of all organizational components, including VISNs, in the oversight and administration of the methodology;

4. Complete evaluations of Phase I and Phase II and make any necessary changes to policies and procedures before national implementation of Phase II in all VAMCs; and

5. Improve the timeliness and regularity of communication with VAMCs, including unit-level staff, regarding the status of the various phases of the methodology.

We provided a draft of this report to VA for its review and comment. VA provided written comments, which are reprinted in appendix I. In its written comments, VA generally agreed with our conclusions and concurred with all five of the report's recommendations. To address the recommendations, VA indicated that VHA will take a number of actions, such as developing a written document specifying its process for assessing ongoing compliance with the staffing methodology and improving the timeliness and regularity of communication with VAMCs through face-to-face regional training sessions. VA indicated that target completion dates for implementing these recommendations range from September 2015 through September 2016. Regarding the recommendation that VA complete evaluations of Phase I and Phase II before national implementation of Phase II in all VAMCs, VA indicated that, by September 2016, it would complete its evaluations and determine what opportunities exist to modify policies and procedures, but did not explicitly state that the evaluations would be completed before national implementation. We continue to emphasize the importance of completing the evaluations before national implementation of Phase II in all VAMCs.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Janina Austin, Assistant Director; Jennie Apter; Kathryn Black; Jacquelyn Hamilton; Kelli Jones; Vikki L. Porter; and Karin Wallestad made key contributions to this report.
GAO and others have raised prior concerns about the adequacy and qualifications of VHA's nurse staffing. In part to address these concerns, VHA issued a directive in 2010 requiring all VAMCs to implement a standardized methodology for determining an adequate and qualified nurse workforce, which includes developing and executing nurse staffing plans. It also requires VAMCs to use the methodology on an ongoing basis to evaluate staffing plans.

GAO was asked to provide information on nurse staffing at VAMCs. This report reviews the extent to which (1) VAMCs have implemented VHA's nurse staffing methodology, and (2) VHA oversees VAMCs' implementation and ongoing administration of the methodology. GAO reviewed documents and interviewed officials from VHA, seven VAMCs selected to ensure variation in factors such as geographic location, and regional offices for these VAMCs. GAO used federal internal control standards to evaluate VHA's oversight. GAO also interviewed representatives of veterans service organizations, nursing organizations, and unions.

The seven Department of Veterans Affairs medical centers (VAMC) in GAO's review implemented the Veterans Health Administration's (VHA) nurse staffing methodology and experienced problems developing and executing the related nurse staffing plans; some reported improvements in nurse staffing. Specifically, GAO found that each of the seven VAMCs had developed a facility-wide staffing plan—which outlines initiatives needed to ensure appropriate unit-level nurse staffing and skill mix—and taken steps to execute it. However, VAMCs experienced problems—such as lack of data resources and difficulties with training—in both the development and execution of their staffing plans. Some VAMC staff reported improvements in the adequacy and qualifications of their units' nursing staff when nurse staffing plan initiatives were executed. For example, at two VAMCs where the number of nurses was increased or where support services for nurses were put in place, such as a designated group of staff to assist in transporting patients to and from appointments off the unit, unit staff said the adequacy of the nursing staff had improved. However, some VAMC unit staff reported that unit nurse staffing continued to be inadequate and that nurse unit assignments and job duties were not always appropriate for their qualifications.

VHA's oversight is limited for ensuring its nurse staffing methodology is implemented and administered appropriately. GAO found the following internal controls were limited in VHA's oversight process:

Environmental assessment. VHA did not comprehensively assess each VAMC to ensure preparedness for implementing the methodology, including having the necessary technical support and resources, prior to the issuance of the directive requiring each VAMC to implement the methodology.

Monitoring compliance. VHA does not have a plan for monitoring VAMCs to ensure compliance with the implementation and ongoing administration of the methodology.

Evaluation. VHA has conducted limited evaluations of the methodology, and at least one of these evaluations has been significantly delayed.

Timeliness of communication. VHA's protracted timeline for communicating methodology-related information may have hindered the ability of VAMCs to appropriately develop their staffing plans and to execute the initiatives contained in those plans.

Organizational accountability.
VHA did not define areas of responsibility or establish the appropriate line of reporting within VA's management structure for oversight of the implementation and ongoing administration of the methodology. Without these internal controls in place, VHA cannot ensure its methodology meets department goals, such as establishing a standardized methodology for determining an adequate and qualified nurse workforce at VAMCs, and ultimately, having nurse staffing that is adequate to meet veterans' growing and increasingly complex health care needs. GAO recommends VA: (1) assess VAMCs' ability to implement the methodology, (2) monitor VAMCs' ongoing compliance with the methodology, (3) complete timely evaluations, (4) improve the timeliness of communication with VAMCs, and (5) define areas of responsibility and reporting within VA's management structure. VA concurred with the recommendations.
The Division is responsible for promoting and maintaining competition in the American economy by enforcing the federal antitrust laws. To accomplish its mission, the Division has 16 sections in headquarters and 7 field offices located throughout the United States. As of July 21, 2000, the Division had 561 full-time staff and 237 part-time staff on board; about 29 percent of the full-time staff and about 21 percent of the part-time staff were assigned to the field offices (see app. II for a breakdown of staffing by sections and field offices). The Division's budget authority for fiscal year 2000 was $114.4 million, about a 16-percent increase, in nonconstant dollars, over fiscal year 1999 funding of $98.3 million. The Division's appropriations are offset by the fees companies are required to pay when they file premerger notifications under the HSR Act.

The Division is responsible for enforcing the antitrust laws throughout the economy, including in industries such as computers, health care, telecommunications, transportation, and agriculture. Overall, the Division handled over 4,900 matters during fiscal year 1999, 4,642 (about 95 percent) of which were proposed mergers for which notice was filed under the HSR Act. For fiscal year 1999, agricultural industry matters—both mergers and nonmergers—represented about 8 percent (395) of the Division's total workload.

As previously noted, the basic federal antitrust statutes are the Sherman Act and the Clayton Act. Section 1 of the Sherman Act makes illegal any contract, combination, or conspiracy that results in a “restraint of trade.” The courts have construed the term to cover a variety of horizontal and vertical trade-restraining agreements. Horizontal restraints are agreements among competitors at the same level of the production, distribution, or marketing process. Vertical restraints are arrangements among persons or firms operating at different levels of the manufacturing-distribution-marketing chain that restrict the conditions under which firms may purchase, sell, or resell. Section 2 of the Sherman Act prohibits monopolization, as well as attempts, combinations, or conspiracies to monopolize.

Although the Sherman Act is both a criminal and civil statute, it is the Division's policy to criminally prosecute only what the Division considers to be the most egregious per se Sherman Act violations. However, in some situations the Division may not deem criminal investigation or prosecution to be appropriate, even though the conduct may appear to be a per se violation of the law. As examples, the Antitrust Division manual, which outlines the Division's formal internal practices and procedures, cites situations in which (1) there is confusion in the law; (2) there are novel issues of law or fact presented; (3) confusion reasonably may have been caused by past prosecutorial decisions; or (4) there is clear evidence that the subjects of the investigation were not aware of, or did not appreciate, the consequences of their actions.

Section 7 of the Clayton Act prohibits mergers and acquisitions that may substantially lessen competition or tend to create a monopoly in any market. The HSR Act added section 7A to the Clayton Act, which requires premerger notification of proposed mergers to assist the Division and FTC in investigating whether they would be anticompetitive. The Division generally shares responsibility for enforcing federal antitrust laws with FTC and state attorneys general.
With the exception of criminal enforcement of the Sherman Act, for which the Division has sole authority, the “unfair methods of competition” clause of section 5 of the FTC Act generally allows FTC to reach the same conduct as prohibited by the Sherman Act. Both the Division and FTC are responsible for enforcing Sections 7 and 7A of the Clayton Act, with the exception of certain industries in which FTC's jurisdiction is limited by statute. For example, Section 5(a)(2) of the Federal Trade Commission Act of 1914 generally excludes from its coverage activities subject to the Packers and Stockyards Act. FTC also enforces the Robinson-Patman Act, which governs price discrimination in interstate commerce, and section 8 of the Clayton Act (15 U.S.C. 19), which governs interlocking directorates.

As previously noted, the Division is responsible for enforcing the antitrust laws in a broad range of industries. The antitrust laws apply generally throughout the economy, and the Division exercises prosecutorial discretion to determine which matters warrant investigation or enforcement action. According to Division officials, their principal expertise is in antitrust laws, not in specific industries. Some industries also are regulated by government agencies under statutes that go beyond the antitrust laws to establish additional, industry-specific regulatory requirements and standards. For example, USDA's Grain Inspection, Packers, and Stockyards Administration (GIPSA) regulates sales in the livestock and meat-packing industries. GIPSA generally has the authority to prohibit unfair trade practices, primarily within the livestock industry.

The Division's policies and procedures manual outlines the processes for investigating potential antitrust violations. Depending on the conduct alleged, the Division can initiate either civil or criminal investigations of these potential violations. The Division may identify possible antitrust violations or proposed mergers through a variety of sources. The Division is made aware of many proposed mergers through filings required of the merging parties by the HSR Act. The Division may learn of a possible antitrust violation from a confidential informant; individuals or corporations applying for amnesty; complaints and referrals from other government departments or agencies; anonymous tips; or reviews of newspapers, journals, and trade publications. It may develop information about potential criminal violations through grand jury proceedings. As previously noted, more detailed information and flow charts of the Division's enforcement procedures can be found in appendix III.

Our objectives were to (1) describe the Division's interaction with FTC and USDA with regard to antitrust matters in the agriculture industry, (2) provide information on the number of complaints and leads in the agriculture industry received by the Division for fiscal years 1997 through 1999, and (3) provide information on the number and type of agriculture-related matters closed by the Division for fiscal years 1997 through 1999. As agreed with Senator Grassley's office, we used SIC codes selected by the Division to define the agriculture industry (see app. I). The SIC codes selected included only those codes specifically related to plant and animal products that originate on land and are commercially cultivated or raised for human or animal consumption. To obtain information on the Division's interaction with FTC and USDA, we interviewed officials in the Division, FTC, and USDA.
We also reviewed the Division’s policies and procedures for interacting with other agencies, including relevant interagency agreements. The Division does not maintain divisionwide data on complaints and leads related to possible anticompetitive practices in the agriculture industry. To obtain an estimate of the number of agriculture-related complaints and leads received in fiscal years 1997 through 1999, we interviewed the section or field office chief, assistant section chief, and other knowledgeable Division officials for each of the five legal sections and seven field offices that potentially handled matters in the agriculture industry during this period. These officials provided information on the number of complaints and leads received and the methods they used to record and track them. To gather information on the number and type of matters closed in the agriculture industry in fiscal years 1997 through 1999, we obtained data on the characteristics of the matters for selected SIC codes and for the relevant time period from the Division’s MTS. However, some of the data in MTS were not accurate and reliable. For example, actions were shown as being taken on matters after the matters had been recorded as closed, or links had not been established between related matters. We also reviewed opening and closing memorandums for matters that were closed without the filing of a complaint to determine the reasons for closing the matters. We did not attempt to determine the appropriateness of the Division’s prosecutorial decisions. More detailed information about our scope and methodology can be found in appendix I. We performed our work in Washington, D.C., between October 1999 and February 2001, in accordance with generally accepted government auditing standards. As noted earlier, the basic federal antitrust statutes are the Sherman Act and the Clayton Act, and the Division generally shares responsibility for enforcing the antitrust laws with FTC. The interactions between the agencies are generally limited to their roles in enforcing antitrust laws. Officials from the Division, FTC, and USDA said that their agencies maintain a cooperative working relationship with regard to anticompetitive matters in the agriculture industry. According to Division and FTC officials, their agencies’ interactions with respect to specific matters generally occur during the clearance process established between them to determine which agency will investigate potential antitrust violations for which they have joint jurisdiction. According to agency officials, in recent years, the Division and USDA have worked together in a number of respects. For example:
The Division obtains useful information from USDA’s AMS and ERS.
With respect to livestock and meatpacking markets, the Division has obtained useful information from USDA’s GIPSA. The Division has also provided technical assistance to GIPSA on various economic studies and on GIPSA’s competition enforcement program.
During the course of any major antitrust investigation involving agriculture-related markets, the Division typically consults with USDA officials to obtain the benefit of their perspective.
USDA and Division officials have also participated in a number of interagency policy and public outreach efforts regarding competitive conditions in agriculture-related markets.
The Division, FTC, and USDA entered into a Memorandum of Understanding (MOU) on August 31, 1999, that sets forth general policy for interacting, exchanging information, and continuing to work together on competitive developments in the agriculture marketplace. Division, FTC, and USDA officials described the MOU as memorializing a long-standing, cooperative working relationship between the agencies. Under the MOU, each agency agreed to designate a primary contact person to facilitate communication among agency officials. The Division’s designated contact is an attorney-advisor in the Legal Policy Section. However, in practice, this responsibility is shared with the Division’s Special Counsel for Agriculture. FTC’s contact is the Deputy Director in the Bureau of Competition; and USDA’s contact is the Assistant General Counsel, Trade Practices Division. The officials in the three agencies could not determine whether there has been any change in the purpose or frequency of contact since the MOU was signed because they have not tracked, nor do they currently track, communications, coordination, or consultations between the agencies. To avoid duplication of effort in areas of antitrust enforcement in which FTC and the Division share enforcement jurisdiction, the agencies issued joint clearance procedures, most recently revised in December 1993, to determine which agency will pursue an investigation. According to Division officials, the agencies interact on specific matters mainly around these procedures. Appendix III provides, among other things, information on the clearance procedures. According to Division officials, the August 1999 MOU is the only formal procedure relating to the interaction between the Division and USDA. FTC has had a formal agreement with USDA since at least 1958 regarding investigations in the wholesale or retail food industry. According to Division and USDA officials, USDA provides the Division with useful information about agricultural markets that the Division uses in its economic analysis of a particular industry. Additionally, when USDA uncovers conduct that it believes may violate antitrust laws, it has the authority to refer the matter to the Division for possible investigation and enforcement action. Our review of 64 matters that the Division closed after conducting a preliminary investigation (PI) during fiscal years 1997 through 1999 revealed that 1 matter was referred from USDA. Additionally, the Division consulted with USDA on six of the matters. Our 1991 review of the Division’s criminal cases found that although the Division had a policy for documenting information on complaints and leads from the public, this policy was not always followed for complaints or leads the Division decided not to investigate. As a result, the Division did not know how many complaints it had received nor did it have complete data on their characteristics. Such problems continued during the period of our review, fiscal years 1997 through 1999. High-level Division officials were not aware of a policy requiring staff to document all complaints and leads from the public, and the Division did not have a uniform database to provide divisionwide information on complaints and leads. During the course of our review, the Division took steps to improve its documentation of complaints and leads. If fully implemented, these steps should move the Division in a positive direction toward addressing the deficiencies we identified.
However, the improvements do not allow the Division to track complaints and leads by industry classifications, and on the basis of past history, the information may still be collected inconsistently by the sections and field offices. Thus, better guidance and closer management attention to monitoring the documentation of complaints and leads will be necessary. We met with high-level Division officials, including the Chief of the Legal Policy Section and an attorney-advisor in that section; the Special Counsel for Agriculture; and the attorney who, according to Division officials, handles the largest number of the complaints and leads in the agriculture industry. We asked them whether the 1980 policy cited in our 1991 report was still in effect. Initially, the officials did not know whether such a policy was still in effect. According to the officials, the Division could provide only an estimate of the number of complaints and leads received in the agriculture industry. On February 29, 2000, the Division’s Chief of Staff issued a memorandum to all section and field office chiefs stating that the Attorney General wanted to ensure that the Department of Justice was answering all of its mail and telephone inquiries in a timely and effective way. The memorandum required all sections and field offices to use the Division’s CCTS to track inquiries from the public to help ensure that the Division was responsive to incoming inquiries. However, in June and July 2000, when we conducted our structured interviews with section and field office staff responsible for documenting and tracking complaints and leads, they were not all using CCTS, and two told us they had never heard of CCTS. When the Division began implementing CCTS in 1997, the Division encouraged, but did not require, the sections and field offices to use CCTS for general and telephone correspondence. CCTS was designed primarily to record information on controlled correspondence (e.g., correspondence received from the White House; Congress; and federal, state, and local agencies). CCTS includes such information as the complainant’s name, address, phone number, position, and organization; the date received, date assigned, section name, staff assigned, response due date, and date completed; and a description of the complaint and keywords. CCTS does not include specific fields to capture information on SIC codes or related industries or the final outcome of complaints and leads received from the public. Additionally, CCTS is not linked to the Division’s MTS. As a result, CCTS does not have the information needed to analyze patterns of potential anticompetitive behavior that might emerge from complaints and leads related to specific industries or to track the ultimate outcome of a complaint or lead. Subsequent to our interviews with the five legal sections and seven field offices that would potentially handle complaints and leads related to the agriculture industry, Division officials determined that the July 2, 1980, directive requiring staff to document all complaints and leads was still in effect. The 1980 directive outlined the Division’s policies and procedures for handling all unsolicited contacts of a routine and nonsensitive nature from the general public, whether by letter, telephone call, or personal visit.
The directive required staff to ensure that unsolicited public contacts are assigned promptly to the appropriate staff member(s); log basic information about each contact; mail written responses, if warranted, within 20 working days of the date the inquiry was received by the Division; maintain a central file in each section and office of all correspondence and notes on telephone calls and visits; and make all logs and files described in the directive available for review by Division officials. In the absence of a central source of uniform, complete, and reliable data on complaints and leads, the Division agreed to let us contact officials in five legal sections and seven field offices to obtain any available data on complaints and leads received in those sections and field offices. According to the data provided, the Division received 165 agriculture-related complaints and leads in fiscal years 1997 through 1999, 14 of which resulted in a PI being initiated (see table 1). Of these 14, 1 was referred to USDA. Officials in these sections and field offices provided information derived from sources that varied among the sections and field offices, such as automated and manual tracking systems and attorney notes. We did not verify the accuracy or completeness of the information provided. Officials in Transportation, Energy and Agriculture (TEA), which received the largest number of complaints and leads reported to us, indicated that the numbers for fiscal years 1997 and 1998 did not, to their knowledge, include complaints received by phone and that the 1999 data were based on information from the two section attorneys who received “most” of the complaints and leads for the section. According to Division officials, the Division receives many complaints reporting conduct that is unrelated to antitrust. In these instances, they said that a complaint might be referred to an appropriate federal, state, or local agency. For example, 4 of the 165 complaints and leads the Division received during fiscal years 1997 through 1999 were referred to FTC. On some occasions, they said the Division section may refer an antitrust-related complaint or lead to another Division section, to FTC, or to the appropriate state attorney general’s office. If a Division attorney determines that there are sufficient indications of an antitrust violation to open an investigation beyond discussions with the complainant, he or she is to draft a PI request memorandum. According to Division officials, staff can generally determine whether the information provided by a complainant merits further investigation. The Division may respond to a complainant by telephone or in writing. Division officials also said that many complaints in the agriculture industry are general in nature, with no information useful for an investigation, and that the Division’s response to such a general complaint typically attempts to educate the complainant on antitrust laws and the Division’s role in enforcing them, describes the type of evidence needed to open an investigation or bring a case, and invites the complainant to come back to the Division with more specific information that might indicate a possible antitrust violation. On October 24, 2000, the Division rescinded the 1980 policy and issued a new directive requiring all of its sections and offices to (1) use the Division’s CCTS to document and track all unsolicited public contacts (such as complaints and leads) and (2) maintain a central file of these contacts.
This directive made section and field office chiefs responsible for ensuring that the procedures are routinely followed. The October 2000 directive was replaced with a November 27, 2000, directive. The only difference between the two directives was that the November directive made reference to an optional WordPerfect form that staff could use—in addition to CCTS, not in lieu of it—to document information for the section’s or office’s own central file. Consistent with the 1980 directive, neither the October 24, 2000, directive nor the November 27, 2000, directive provided guidance on the information that staff are required to document on each contact. Consequently, it is not clear that the latest directive will resolve the data availability issues discussed above, including inconsistently recorded information on complaints and leads. To obtain information on the number and type of closed matters in the agriculture industry for fiscal years 1997 through 1999, we relied on data from the Division’s MTS, its primary management information system for tracking matters. We encountered several problems with the MTS data, including inconsistencies in matter status and type. For example, there were many matters for which the final disposition was unclear according to our review of the MTS data. For some of the matters we reviewed, the MTS data indicated that actions had been taken on matters after the matters had been recorded as closed. In addition, the dispositions for some of the matters were not appropriate for the particular phase of the matters. We worked with Division officials to resolve these issues wherever possible. Recognizing its limitations, we believe the adjusted MTS data we used can provide a general profile of agriculture-related matters closed by the Division during fiscal years 1997-1999. We used the MTS data because there is no other comprehensive listing of the Division’s matters and their status. According to Division officials, there were several causes for the MTS data problems we encountered. Some of the matters we reviewed were entered into the Division’s older system, the Antitrust Management Information System (AMIS), and were incorporated into MTS without change. A 1991 GAO report found that there were inaccuracies and inconsistencies in the AMIS data. Concerning the matters for which MTS incorrectly indicated that the matter was closed following the PI, Division officials said it is possible to have further action after a matter is closed. Occasionally, evidence associated with a particular investigation would initially prove inadequate for further prosecutorial action. However, several months or years later, evidence might become available, and instead of opening a new PI, the Division would reopen the original investigation. We did not test the MTS data to determine the full nature or extent of the data reliability problems. However, with the corrections made with the assistance of Division officials, the MTS data we used can provide a general profile of the status of agriculture-related matters closed by the Division in fiscal years 1997 through 1999. MTS data showed that the Division closed 1,050 matters involving mergers and potential antitrust violations in the agricultural industry during fiscal years 1997 through 1999. Of these 1,050 matters, 935 (89 percent) were Hart-Scott-Rodino merger filings. Of these 935 filings, 827 (88 percent) expired within the initial 30-day HSR waiting period.
The Division took no formal action, such as opening a PI, beyond reviewing the documents submitted by the merging parties and public sources of information for these 827 matters—a normal outcome for mergers that are permitted to proceed after the required premerger notification period. Table 2 shows the classification of the 1,050 matters closed by the Division during this time period. As discussed previously, the Division included several SIC codes in its definition of the agricultural industry. The largest number of the 1,050 matters closed by the Division during the 3 fiscal years was in SIC codes for food manufacturing—442, or 42 percent. Table 3 shows the number of matters that were closed by the Division during fiscal years 1997 through 1999 by the agriculture-related SIC categories that make up the Division’s definition for the agricultural industry. It should be noted that FTC also handles matters in the agriculture industry, so the figures above do not include the entire universe of agriculture antitrust matters and merger filings within these SIC codes. The Division could not provide reliable data on the direct labor costs for agriculture-related matters that were closed during fiscal years 1997 through 1999. According to Division officials, MTS does not contain reliable data on direct labor costs, although such data could be obtained with some effort from other sources. Not all of the matters handled by the Division culminated in a case being filed in civil or criminal court. We analyzed the Division’s MTS database to ascertain how many matters completed the various phases of an inquiry or, when no inquiry was conducted, what the last actions taken were. As previously noted, the majority of the 1,050 matters closed by the Division during this period were filings submitted under the premerger notification provisions of the HSR Act. In about 88 percent of these filings, the Division did not initiate any formal investigative inquiries because it concluded that the matters did not appear to raise significant competitive concerns warranting a more thorough review. These matters were closed without further action on or before the expiration of the initial 30-day HSR waiting period. Table 4 shows the number of matters closed by the Division by the last phase of inquiry or action taken in the matter. The categories are listed in the general order of resources expended on the matter. For example, HSR filings that expired within the initial 30-day waiting period or that were cleared to FTC would not generally result in a preliminary inquiry. Conversely, a civil investigative demand (CID) would generally be issued following a preliminary inquiry. To determine any differences among various agriculture-related industries, we arrayed the inquiry phases in table 4 by primary SIC category. Table 5 shows the last phase of each matter by the primary SIC category for all 3 fiscal years. In appendix IV, we also summarize information about the 64 matters shown in table 4 that were closed following a PI with no further action. Upon reviewing these tables, the Division questioned the category “Non-HSR matters cleared to Justice for which a PI was not initiated” in table 4. We provided the Division with a list of the matters that we placed in that category as a result of our analysis of the MTS database. After additional research of the files and discussions with the attorneys who handled some of these matters, the Division provided additional information on all 25 of the matters in that category.
According to Division officials, of these 25 matters, the last action or phase was a PI for 4, a CID or grand jury for 5, and a case filed in court for 3. For one matter, the officials indicated that the Division initiated a PI memorandum but did not conduct a PI; for another, the Division had considered opening a PI but decided not to after gathering additional information about the matter. They indicated that the remaining 11 matters involved Division activities that do not require that a PI be opened—5 involved the Division’s responsibilities for granting an export trade certificate, 2 involved judgment enforcement actions, 2 involved judgment modification actions, 1 was a business review letter, and 1 was an internal matter—one field office reviewing another office’s files for ideas on how to approach a possible investigation. According to Division officials, the 12 matters for which a PI had been opened, a CID issued, a grand jury convened, or a case filed in court should all have been linked in MTS to other matters in the database. They said that the links apparently had not been made in the database and believed that including these matters in our profile would, therefore, be double-counting. As previously noted, we based our profile on the information that was available in the MTS database. Although we understand the Division’s concern about double-counting, we have not dropped these matters from our analysis for two main reasons. First, we do not know whether there were additional cases for which the MTS database was missing appropriate links. It is possible that additional case file review and analysis would have resulted in alterations to the MTS data for other matters. Second, removing these 25 matters from our profile would afford them preferential treatment based on detailed Division research and analyses that were not conducted for all agriculture-related matters in the database. Moreover, the fact that the Division had to conduct additional review on these matters to clarify their status further illustrates that the information in the database is not wholly reliable. Given the problems we encountered with the database throughout this assignment, we believe that the data available from MTS cannot be used without the type of time-consuming checking and scrutiny that the Division and we performed. An accurate, detailed picture of the Division’s workload cannot readily be determined using MTS alone until the Division corrects the errors that have been identified and verifies that all of the data in MTS are accurate and reliable. Although the Division had a specific policy to document all complaints and leads received, the policy was not consistently followed in the past by all sections and field offices. Federal internal control standards, among other things, note that all transactions and significant events should be clearly documented and promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. The Division’s new policy requiring staff to document all complaints and leads in CCTS is a step in the right direction. However, given our 1991 finding that the 1980 policy was not being followed by all sections and field offices and given the findings of our current review, we believe that better guidance and closer management attention will be needed to ensure compliance with whatever policy is in place.
However, even if the new policy is implemented consistently, CCTS is inadequate for providing the information needed to assess patterns in complaints and leads by industry or to track and analyze the results of complaints and leads. CCTS does not include specific fields for SIC code or the final outcome of complaints and leads received from the public. Furthermore, the Division’s primary management information system for tracking matters, MTS, does not include information on the source of a matter, nor is it linked to CCTS. As a result, Division officials may not have readily available information on the source of matters and the specific industries about which the Division is receiving complaints. The data in the Division’s MTS, which are used for developing management and budget reports, are not totally accurate and reliable. Errors have been identified that have not been corrected. For example, links were not consistently established between related matters, and actions were recorded after matters had reportedly been closed. Given the problems we encountered with the MTS database throughout this assignment, we believe that MTS information should be used with caution and that an accurate, detailed picture of the Division’s workload cannot readily be determined until the Division corrects the errors that have been identified and verifies that all MTS data are accurate and reliable. To ensure that (1) all complaints and leads are documented as required by the Division policy; (2) data are readily available on the final outcome of complaints and leads, the source of matters, referrals of matters to other agencies, and the specific industries in which the Division receives complaints and leads; and (3) the Division has an accurate and reliable database that can be used to prepare meaningful management and budget reports, we recommend that the Attorney General direct the Assistant Attorney General for Antitrust to modify CCTS to capture the related SIC code(s); monitor compliance with the November 27, 2000, policy regarding documenting and tracking public inquiries; link CCTS to MTS to provide a mechanism to determine the source of the matter and the ultimate outcome of a complaint; and correct the errors that have been identified in MTS as a result of our work and verify that the MTS data are accurate and reliable. Justice’s Antitrust Division provided written comments on a draft of this report. In its comments, which are included as appendix V, the Division stated that our recommendation to incorporate a data field for SIC codes into CCTS was reasonable and that it would also consider our recommendation to link CCTS to MTS to enable the Division to track the results of citizens’ complaints. The Division was silent on our recommendation that it monitor compliance with the November 27, 2000, policy regarding documenting and tracking public inquiries. The Division took issue with our recommendation to correct errors and improve the accuracy and reliability of MTS data. The Division stated that although data linkages can be improved and data coding reviewed to ensure consistency, MTS is soundly structured, logically presented, and contains fully reliable data. The Division contended that we based our conclusion concerning the reliability of MTS data largely on the results of 25 of 1,050 agriculture-related matters. We continue to believe that the Division needs to take steps to improve the accuracy and reliability of MTS data.
Contrary to the Division’s assertions, the basis for our conclusion goes beyond the 25 matters highlighted in the report. On numerous occasions during our review, we had to ask officials for clarification about data issues because the information as presented was not clear or appeared to be in error. For example, there were many matters for which we could not determine the final disposition on the basis of the data recorded. Division officials needed to devote considerable time and effort to respond to questions and issues we raised. MTS data should not require repeated reviews and refinements to produce accurate data on simple frequency counts of the number of matters closed. The finding of missing database links for 12 of the 25 matters further illustrates the difficulty of creating accurate frequency counts using the MTS database without careful and time-consuming review of the data, sometimes requiring a review of the case files. The Division acknowledged in its letter that it is currently reviewing how best to ensure that any similar linkage issues are identified and corrected, and it is updating MTS as new information becomes available and inaccuracies or gaps are discovered. These statements show that the Division recognizes that information in the MTS database needs improvement. In connection with its discussion on data reliability problems, the Division mistakenly stated that we found that its reopening of previously closed matters was an indication of incorrect closures. We recognize that closed matters are sometimes reopened in light of new evidence and do not object to the Division doing so. Our point was that the database does not accurately portray this activity, and we noted this as another example of MTS failing to accurately represent Division activity. However, we modified the language in our report to clarify this issue. The Division also commented on four other items discussed in our report. First, the Division stated that contrary to our assertion, it could report direct labor costs for matters but not from one central data source. During our review a high-level Division official stated that although it is true that MTS does not contain accurate data on direct labor costs for specific matters, such data would be available through various other sources. However, at a meeting to discuss direct labor costs, among other things, this official told us that although the Division could provide us with these data with considerable effort, the data would be incomplete and lack credibility. Second, the Division indicated that its efforts to improve its documentation of complaints and leads began well before we initiated our review. However, the Division’s response ignores the fact that the problems we identified in tracking complaints and leads were also identified in our 1991 report. Thus, any corrective actions taken over the last 10 years were insufficient to address the deficiencies identified in that report. As noted in our report, at the beginning of our review Division officials were unsure whether the 1980 policy to document and track complaints and leads was still in effect. Further, at the beginning of our review the Division’s sections and offices were not using a uniform system to document and track complaints and leads. The Division issued a memorandum in February 2000 that instructed the legal sections and field offices to use CCTS to track all public inquiries and ensure that they were addressed in a timely manner. 
In June 2000, we found that 7 out of the 12 sections and field offices were not using CCTS to document and track complaints and leads. Of the 5 that were using CCTS, 3 had begun using it only after March 2000. In October 2000, the Division issued a directive requiring all sections and field offices to (1) use CCTS to document and track all unsolicited public contacts and (2) maintain a central file of these contacts. Third, the Division stated that the draft report gave the impression that if a merger filing under the HSR process did not result in a PI being opened, the Division had taken no action. It was not our intent to give this impression. Our report states that the Division took no formal action in these filings. Further, our report states that it is a normal outcome that PIs are not initiated for the majority of HSR mergers and that such mergers are permitted to proceed at the end of the required waiting period after the Division has concluded that they should be permitted to proceed. However, we modified the language in our report to clarify this issue. Fourth, the Division commented on the definition of “agriculture” used to determine which matters to include in our review, perhaps intimating that we considered this a shortcoming of the Division. We did not take issue with the Division’s lack of a working definition of agriculture. We merely stated that because there was no set definition, we needed to determine one to define the constraints of our review. Furthermore, we relied on the Division’s judgment in this matter and accepted its definition as given. In fact, the definition the Division provided was the same one that it used in its April 2000 response to Senator Lugar on another agriculture-related antitrust matter. The Division also offered technical comments on the draft, which we incorporated where appropriate. However, we disagree with the Division’s characterization of our description of the antitrust legal framework and antitrust enforcement, as the vast majority of its comments were technical suggestions or editorial preferences and not substantive. Moreover, in some cases, the Division’s suggested changes altered wording taken directly from the Division’s written policies and procedures. In such cases, we generally retained the wording from the Division’s official documentation. We also requested comments from FTC and USDA officials with whom we had met during this review. Both agencies provided technical comments, which we incorporated where appropriate. We will send copies of this report to Senator Orrin G. Hatch, Chairman, and Senator Patrick J. Leahy, Ranking Member, Senate Committee on the Judiciary; Representative Jim Sensenbrenner, Jr., Chairman, and Representative John Conyers, Jr., Ranking Minority Member, House Committee on the Judiciary; the Honorable John D. Ashcroft, Attorney General; the Honorable Mitchell E. Daniels, Jr., Director of Management and Budget; and other interested parties. We will also make copies available to others upon request. This report will also be available on GAO’s home page at http://www.gao.gov. Please contact William Jenkins or me at (202) 512-8777 if you or your staff have questions about this report. Major contributors to this report are acknowledged in appendix VI. Our objectives were to (1) describe the Department of Justice’s Antitrust Division’s (Division) interaction with the Federal Trade Commission (FTC) and the U.S.
Department of Agriculture (USDA) with regard to antitrust matters in the agriculture industry, (2) provide information on the number of complaints and leads in the agriculture industry received by the Division for fiscal years 1997 through 1999, and (3) provide information on the number and type of closed matters in the agriculture industry for fiscal years 1997 through 1999. Because the Antitrust Division does not have a working definition of the agriculture industry, we first had to determine what constituted the agriculture industry. To define what complaints, leads, matters, and cases were related to agriculture, we met with the Division and requestor staff to identify the Standard Industrial Classification (SIC) codes that would encompass agriculture. The Division assigns each matter or case one or more SIC codes. As agreed with Senator Grassley’s office, we used the Division’s selected SIC codes to define the agriculture-related activities. The Division’s selected SIC codes used to define the agriculture industry included those specifically related to plant and animal products that originate on land and are commercially cultivated or raised for human or animal consumption, meaning oral ingestion. If any identified complaint, lead, matter, or case included one of these SIC codes, we included it in our analysis. The SIC codes the Division included in its definition for the agricultural industry, as well as the way we grouped them for analysis purposes, are shown in table 6. To obtain information on the Division’s interaction with FTC and USDA, we relied primarily on interviews with key officials in each of the three agencies. We relied on these officials’ representation of the agencies’ interaction. We also reviewed the Division’s policies and procedures for interacting with other agencies and relevant interagency agreements, including the 1999 Memorandum of Understanding that addresses cooperation among the Division, FTC, and USDA for monitoring competitive conditions in the agricultural marketplace. In addition, we reviewed Division testimonies, speeches, and press statements that addressed the interaction among the agencies. With regard to merger investigations, we reviewed agreements between the Division and FTC for cases that fell within the joint antitrust jurisdiction of both agencies. These included a 1993 agreement that established clearance procedures for investigations and a 1995 agreement in which the agencies agreed to specific time frames for deciding which agency should investigate a matter. Because the Division had not collected consistent, divisionwide information on complaints and leads, we relied on data provided by Division officials to obtain information about the agriculture-related complaints and leads the Division received by mail, telephone, facsimile, e-mail, or personal visit during fiscal years 1997 through 1999. To obtain information on the policies and procedures for documenting and tracking complaints and leads, we interviewed the Director of Legal Policy; the Chief of the Legislative Unit; the Executive Officer; the Special Counsel for Agriculture; and the Directors of Criminal Enforcement, Civil Merger Enforcement, and Civil Nonmerger Enforcement. In addition, we reviewed the Division’s directives for tracking complaints and leads.
We also held structured interviews with Division officials representing the five headquarters sections or task forces—Civil Task Force; Litigation I section; Litigation II section; Transportation, Energy and Agriculture section; and Merger Task Force—and the seven field offices—Atlanta, Chicago, Cleveland, Dallas, New York, Philadelphia, and San Francisco—that Division officials said would most likely have reviewed anticompetitive matters in the agriculture industry for the period we reviewed. We asked them to describe their policies and procedures for documenting and tracking complaints and leads. We provided them with the agriculture-related SIC codes, and we asked them to provide us with data on, among other things, the total number of agriculture-related complaints and leads received by SIC code for each of fiscal years 1997 through 1999 and the outcome of the complaint or lead. Because of the variety of methods used to track complaints and leads within the Division, we could not verify the accuracy or completeness of the data provided to us. To summarize and describe the matters that were closed during fiscal years 1997 through 1999, we analyzed data from the Division’s Matter Tracking System (MTS). MTS contains information on all civil merger, civil nonmerger, and criminal matters handled by the Division. As each matter goes through various phases in the enforcement process, MTS is to capture each phase and its related disposition. MTS is designed and used primarily for Division management purposes. We obtained data on the characteristics of the matters for those agriculture-related SIC codes for fiscal years 1997 through 1999. To understand the data elements in MTS and how the Division systematically maintains information on matters during its review of potential anticompetitive conduct, we reviewed the Antitrust Division Manual and the Division’s MTS data dictionary, and we met with officials from the Division’s Information Systems Support Group. We defined a closed matter as any effort (e.g., a case, an investigation, or an inquiry) in the database that the Division closed during fiscal years 1997, 1998, or 1999. The database defines matters by use of a unique identification number, which tracks efforts that may span multiple years, contain multiple phases, or both. We used this number to identify each individual matter. To provide a profile, we categorized the closed matters in the agriculture industry into the following groups:
1. Hart-Scott-Rodino (HSR) matters that expired by the 30-day limit (under Division purview),
2. HSR matters that were cleared to FTC,
3. non-HSR matters on which a formal PI was not opened (under Division purview),
4. non-HSR matters that were cleared to FTC,
5. matters in which a PI was opened but no further action was taken by the Division,
6. matters on which a civil investigation was conducted or a grand jury convened,
7. matters for which a civil or criminal case was filed in court, or
8. business review matters.
The groups were defined on the basis of the last phase recorded in the database for the matter. A matter in group (7), for example, might have originated as an HSR filing, but it was assigned to group (7) because the last phase recorded either indicated that a case was filed, or it gave the outcome (e.g., “won”) of the case. To augment MTS information for our profile of closed matters, we reviewed Antitrust Division memorandums for matters for which a PI was conducted but no further actions were taken by the Division.
Our data collection instrument included the dates that the investigation was authorized to be opened and closed, the type of matter, the statute and potential antitrust conduct involved, the geographic market, the amount of commerce affected, SIC code(s), the section assigned to review the matter, the reason for closing the investigation, and whether or not there was economic analysis group concurrence. For those matters for which we were unable to understand the information provided in the memorandums, we spoke with a Division official to clarify the information. We verified summary results with Antitrust Division officials, and we discussed and came to agreement on matters of discrepancy. The Department of Justice’s Antitrust Division’s (Division) mission is to promote and maintain competition in the American economy. According to its manual, the Division’s primary functions and goals include the general criminal and civil enforcement of the federal antitrust laws and other laws relating to the protection of competition and the prohibition of restraints of trade and monopolization; the intervention or participation before administrative agencies functioning wholly, or partly, under the regulatory statutes in proceedings requiring consideration of the antitrust laws or competitive policies; and the advocacy of procompetitive policies before other branches of government. To accomplish its mission, the Division has 16 sections or task forces in headquarters and 7 field offices located throughout the United States. As shown in table 7, as of July 21, 2000, the Division had 561 full-time staff and 237 part-time staff, of which about 29 percent of the full-time staff and about 21 percent of the part-time staff were assigned to the field offices. The Assistant Attorney General (AAG) for the Antitrust Division is responsible for leadership and oversight of all Division programs and policies. The Division AAG is nominated by the President and confirmed by the Senate. The AAG’s Chief of Staff is responsible for managing the Office of the Assistant Attorney General, advising the AAG on the formulation and implementation of highly sensitive antitrust policy issues, and coordinating that policy with other federal and state government agencies. As of November 2000, the Division AAG had three special counsels—the Special Counsel for Civil Enforcement, the Special Counsel for Information Technology, and the Special Counsel for Agriculture (who was appointed in January 2000). The special counsels are generally responsible for maintaining expertise in assigned program areas and the competitive conditions in those areas; advising the AAG and other senior Division management about enforcement priorities in assigned program areas and assisting in the development and implementation of Division policy in these areas; assisting the Division in articulating its views on issues involving assigned program areas before other government entities, the press, the bar, and the public; and participating as appropriate in the identification of potential investigations in assigned program areas and any resulting investigation or enforcement action. The Division also has five Deputy Assistant Attorneys General (DAAG), at least one of whom has traditionally been a career employee; and three Directors of Enforcement, one of whom serves as the Director of Operations. Each DAAG has a number of components that report to him or her. (See fig. 1.)
The DAAG for Criminal Enforcement, who traditionally has been a career employee, has overall supervisory and management responsibility for the Litigation I Section and the Division’s seven field offices and is primarily responsible for the Division’s criminal enforcement program. The civil enforcement responsibilities are divided among the remaining three legal DAAGs. The DAAG for Economic Analysis has supervisory and management responsibility for the three economic sections (i.e., Economic Litigation, Economic Regulatory, and Competition Policy). The Economic Litigation Section focuses on general industries that historically have not been regulated; it also has a Corporate Finance Unit, which provides financial analyses of failing firm defenses, divestitures, and efficiencies defenses; makes recommendations as to fines; and reviews financial issues involved in damage analyses and other issues requiring financial, accounting, and corporate analysis. The Economic Regulatory Section focuses on industries that currently are regulated or historically have been regulated (e.g., telecommunications, airlines, and energy matters). The Competition Policy Section assists in matters that have a strong foreign outreach connection as well as matters involving certain specialized industries. In total, the economic sections are made up of 52 full-time and 3 part-time economists. According to a Division official, the economic sections have a common pool of staff members. Although economists are assigned to one of the three sections, in practice they work for all three sections. According to a Division official, this practice helps balance workload. The three Directors of Enforcement—the Director of Merger Enforcement, the Director of Civil Non-Merger Enforcement, and the Director of Criminal Enforcement—have direct supervisory authority over the activities of the various litigating sections, task forces, and field offices, each of which is headed by a Chief and Assistant Chief. These sections, task forces, and field offices carry out the bulk of the Division’s investigatory and litigation activities. According to the Division manual, the three Directors of Enforcement work closely with the DAAGs in overseeing Division activities. Four special assistants to the Directors of Enforcement are each assigned several sections and field offices and play a liaison role between those sections and the Directors, in addition to performing other duties as assigned by the Directors. The senior Special Assistant also serves as the Liaison Officer to FTC. The Director of the Office of Operations, who also serves as one of the Directors of Enforcement, reports to the AAG. The Office of Operations coordinates the policies and procedures governing the Division’s civil investigations and enforcement actions and includes four support units. The Premerger Notification Unit/FTC Liaison Office (commonly referred to as the Premerger Office) receives the Division’s copy of all Hart-Scott-Rodino filings and assigns them to the appropriate sections. The Premerger Office communicates to FTC the Division’s interest in conducting an investigation or its willingness to grant early termination of the filing period.
The Freedom of Information Act (FOIA) Unit is responsible for receiving, evaluating, and processing all FOIA requests made of the Division; assisting in the preparation of materials to be provided to state attorneys general; and maintaining and indexing pleadings, business review letters, and other frequently used files. The Paralegal Unit provides paralegal support on request to investigations and cases handled in Washington, D.C., and the field offices. The Training Unit coordinates training opportunities for the Division’s legal and support personnel. The Division has seven Washington, D.C., litigating sections and task forces, including the following: The Computers and Finance Section enforces antitrust laws and competition policy in the banking, finance, insurance, securities, and computer industries. The Civil Task Force handles civil nonmerger antitrust enforcement in some assigned industries, in intellectual property matters, and in all industries not specifically assigned elsewhere and handles merger matters in its assigned industries. The Health Care Task Force investigates and litigates civil merger and nonmerger antitrust law violations involving the health care industry and provides legal guidance to the American health care industry through an extensive business review program. The Litigation I Section conducts criminal investigations and litigation in conjunction with its field office counterparts. The Litigation II Section enforces antitrust laws with regard to mergers and acquisitions in unregulated industries; handles some civil nonmerger work in its assigned industries; and reviews, investigates, and litigates matters in a large variety of industries. This section is also responsible for the review of bank mergers. The Telecommunications Task Force enforces antitrust laws and promotes procompetitive regulatory policies in the communications industry, investigates and litigates violations of antitrust laws within that industry, and participates in proceedings before the Federal Communications Commission. The Transportation, Energy and Agriculture Section enforces antitrust laws and promotes procompetitive regulatory policies in transportation, energy, and agricultural commodities; investigates and litigates violations of antitrust laws within those industries; participates in proceedings before a number of federal regulatory agencies, including the Department of Agriculture; and prepares reports to Congress and the executive branch on policy issues related to various transportation, energy, and agriculture industries. The Division’s seven field offices conduct criminal investigations and litigation. Field offices also handle some civil merger and nonmerger matters, depending on resource availability and particular expertise. The offices act as the field liaison with U.S. Attorneys, state attorneys general, and other law enforcement agencies within their areas of jurisdiction (see table 8). The Division also has several specialized components, in addition to those included in the Office of Operations, that assist in carrying out the Division’s mission. The Executive Office formulates and administers the Division’s budget, manages its reporting and records, handles personnel matters, and provides information systems services for all Division activities. This Office includes the Information Systems Support Group, which provides automated services and resources to handle information in support of the Division’s attorneys, economists, and managers.
The Appellate Section represents the Division in appeals to the U.S. Courts of Appeals and appeals before the U.S. Supreme Court. The Legal Policy Section provides analyses of complex antitrust policy matters; coordinates the Division’s legislative program; and handles long-range planning, projects, and programs of special interest to the AAG. This Section includes the Legislative Unit, which coordinates the Division’s relations with Congress and responds to congressional requests and inquiries of the Division. The Foreign Commerce Section assists other sections in matters with international aspects and is primarily responsible for the development of Division policy on international antitrust enforcement and competition issues involving international trade and investment. The Antitrust Division is responsible for, among other things, promoting and maintaining competition in the United States by enforcing the federal antitrust laws. It is charged with investigating and prosecuting violations of these laws. The Antitrust Division Manual is intended to provide a comprehensive source of information about the Division’s mission and investigative and enforcement procedures and practices. The following is a general overview of the antitrust investigative and enforcement processes, up to the point at which an enforcement action is filed, for the three principal types of antitrust enforcement actions brought by the Division—Hart-Scott-Rodino merger enforcement actions, civil enforcement actions, and criminal prosecutions. The Division may become aware of a possible antitrust violation through a variety of sources, including a confidential informant; individuals or corporations applying for amnesty; complaints and referrals from other government departments or agencies; an anonymous tip; or reviews of newspapers, journals, and trade publications. New investigations may also begin with information the Division obtains in other grand jury proceedings, or in merger filings required by the premerger notification provisions of the Hart-Scott-Rodino Antitrust Improvements Act of 1976 (HSR Act, 15 U.S.C. 18a). Economists from one of the Economic Analysis Group sections may also discover a possible anticompetitive activity, which they would then discuss with an attorney in one of the Division’s legal sections. When an attorney determines that there is sufficient evidence to open an investigation beyond discussions with the complainant, he or she is to draft a preliminary inquiry (PI) request memorandum to the section, task force, or field office chief describing the conduct involved and the possible violation. For civil matters, the memorandum is also to state whether the economist concurs. The memorandum is to include, among other things, a factual summary of the information upon which the request is based; evidence supporting a potential antitrust violation, as well as any contrary evidence; an evaluation of the significance of the matter from an economic perspective; and a description of the proposed course of the investigation. Additionally, the memorandum is to include basic information on (1) the commodity or service to be investigated; (2) the alleged anticompetitive conduct or merger and, for civil matters, the theory of competitive harm; (3) the relevant statute; (4) the parties involved; (5) the amount of commerce affected on an annual basis; and (6) the geographic area involved. If the section chief does not agree with the staff’s recommendation to open a PI, no further action is taken.
If the section chief agrees with the recommendation, he or she is to submit a request to Operations or, in the case of a criminal PI, to the Office of Criminal Enforcement. The appropriate Director of Enforcement is to approve or disapprove the PI request on the basis of four standards: (1) the facts presented must provide sufficient indications of an antitrust violation; (2) the amount of commerce affected must be substantial, or the matter must have some broader significance or implicate an important legal principle; (3) the investigation should not needlessly duplicate or interfere with other efforts of the Division, FTC, a U.S. Attorney, or a state Attorney General; and (4) Division resources must be available for the investigation. Because both the Division and FTC have antitrust jurisdiction, they must agree on which agency is to conduct the investigation. To ensure that both agencies are not investigating the same conduct, they have established clearance procedures to determine which agency will investigate a potential violation. If either the Division or FTC objects to the other agency conducting the investigation, the staffs are to follow the clearance dispute procedures to determine which agency will proceed with the investigation. According to Division and FTC officials, clearance disputes are relatively rare. Clearance must be obtained for all PIs, business reviews, grand jury requests that have not resulted from an existing PI, and any expansion of a previously cleared matter. The agencies cannot begin a PI until clearance is granted. For a typical investigation, the clearance request is to specify, among other things, the parties to be investigated, the product line involved, the potential offenses, and the geographic area. The Division’s Premerger Notification Unit/FTC Liaison Office oversees the clearance process. The primary determinant of which agency will conduct an investigation is current agency expertise about the product or service market(s) at issue, so that a merger will usually be reviewed by whichever of the two agencies is most knowledgeable about the relevant market(s). According to Division and FTC officials, the Division has investigated the preponderance of mergers affecting agriculture, with a prominent exception being grocery store transactions, in which FTC has substantial experience and expertise. The Division has handled investigations in the cattle, hog, and lamb sectors, and FTC has traditionally handled investigations in the poultry sector. Once clearance is obtained and the PI is opened, staff are to investigate the merger or alleged conduct by interviewing complainants, customers, competitors, and other possible witnesses and victims and by reviewing other public sources of information. They may request information on a voluntary basis from any party involved. They also may use compulsory process to obtain further information and documents. According to Division officials, the staff determine whether to conduct a criminal or civil investigation early in their deliberations, usually when the PI request is submitted. Where it is unclear whether the conduct in question would be a civil or criminal violation, the Division’s policy is usually to open a civil investigation. This policy stems from two Supreme Court decisions that place restrictions on the government’s ability to use evidence gathered during the course of a grand jury investigation in a subsequent civil case.
After the staff evaluates the results of the PI, the attorneys and economist (in civil matters) are to recommend either closing the PI or proceeding with a civil or criminal investigation. For a civil matter, this means preparing a lawsuit for filing. For a criminal matter, this means convening a grand jury. In making this decision, the staff are to consult with their section or office chief and, in civil matters, the relevant economic analysis group chief to discuss the results of the PI. To close a PI, the attorney is to prepare a closing memorandum. In a civil matter, the legal staff’s memorandum is to state whether the economist concurs. The memorandum generally is to provide the factual and legal bases for the staff’s recommendation to close the PI. Operations or, in criminal matters, the Office of Criminal Enforcement is to review the memorandum and consult with the appropriate DAAG and AAG if the matter involves significant policy questions. The appropriate Director of Enforcement then notifies the cognizant section or office chief of the decision to close the PI. Staff are then to notify the subjects of the investigation that the matter is closed and close the file on the matter. Most mergers and acquisitions that have the potential to raise competitive concerns must be reported to the Division and FTC before they occur. The premerger notification provisions of the Hart-Scott-Rodino Act require companies exceeding certain thresholds of company size and value of the transaction to notify the Antitrust Division and FTC of the proposed merger transaction, submit documents and other information to the agencies concerning the transaction, and refrain from closing the transaction until a specified waiting period has expired. There are three tests, all of which must be met in order for the transaction to be reportable. The first test is the commerce test, in which either the acquiring party or the acquired party must be engaged in commerce or in any activity affecting interstate commerce, as defined in Section 1 of the Clayton Act. The second test is the size-of-person test. For the period we reviewed, one party to the transaction had to have annual sales or assets of at least $100 million and the other party of at least $10 million. The third test is the size-of-transaction test. Under this test, for the period we reviewed, as a result of such acquisition, the acquiring party had to hold (1) voting securities or assets worth in the aggregate more than $15 million, or (2) voting securities that confer control (50 percent) of an issuer with annual sales of $25 million or more. (A sketch of these three threshold tests follows this paragraph.) HSR merger reviews usually begin with the parties filing a proposed merger. However, the Division also may become aware of a merger prior to the required filing through other sources, such as its own research or notification by a concerned citizen. The Division has up to 30 days (15 days for cash tender offers and bankruptcy filings) from the time of the filing of the proposed merger to review the filing and make a determination as to whether the Division should seek additional information and documents from the merging parties and thereby extend the waiting period to enable further review. Generally, staff should decide within 5 business days of receipt of an HSR filing (3 days in the case of a cash tender offer or a bankruptcy filing) whether the filing raises competitive issues that need to be investigated.
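To make the three reportability tests concrete, the following minimal sketch applies the commerce, size-of-person, and size-of-transaction thresholds described above, using the dollar figures in effect for the period we reviewed. The function and parameter names are illustrative assumptions, not terms drawn from the HSR Act or Division practice.

```python
# A minimal sketch of the three HSR reportability tests (dollar amounts in millions).
# Names and inputs are hypothetical; the statute's actual rules contain many
# exemptions and details omitted here.

def hsr_reportable(in_commerce: bool,
                   larger_party_size: float,
                   smaller_party_size: float,
                   securities_or_assets_held: float,
                   confers_control: bool = False,
                   issuer_annual_sales: float = 0.0) -> bool:
    """Return True only if all three tests are met."""
    # Test 1: commerce test -- a party is engaged in or affects interstate commerce.
    commerce_test = in_commerce

    # Test 2: size-of-person test -- one party with annual sales or assets of at
    # least $100 million and the other party of at least $10 million.
    size_of_person = larger_party_size >= 100 and smaller_party_size >= 10

    # Test 3: size-of-transaction test -- the acquiring party would hold
    # (1) voting securities or assets worth more than $15 million, or
    # (2) voting securities conferring control (50 percent) of an issuer
    # with annual sales of $25 million or more.
    size_of_transaction = (securities_or_assets_held > 15 or
                           (confers_control and issuer_annual_sales >= 25))

    return commerce_test and size_of_person and size_of_transaction

# Example: a $20 million acquisition of a firm with $12 million in assets by a
# firm with $150 million in annual sales meets all three tests.
print(hsr_reportable(True, 150, 12, 20))  # True
```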
According to Division officials, in markets already characterized by high concentration levels, there is a substantially increased likelihood that a proposed merger will be investigated. The majority of mergers that raise antitrust concerns are horizontal mergers. According to Division officials, the Division’s and FTC’s joint Horizontal Merger Guidelines are a reasonably accurate portrayal of how the Division and FTC generally conduct their analyses of proposed mergers. The guidelines were originally developed in 1982 and were updated in 1992. In 1997, the efficiencies section of the guidelines was expanded. The unifying theme of the guidelines is that mergers should not be permitted to create or enhance market power or facilitate its exercise. The guidelines define a seller’s market power as the ability to profitably maintain selling prices above competitive levels for a significant period of time. Similarly, a buyer’s market power is defined as the ability to profitably maintain buying prices below competitive levels for a significant period of time. The guidelines outline the five-step analytical process the Division is to use to determine whether a merger is likely to substantially lessen competition and, ultimately, whether to challenge a merger. The Division is to (1) delineate the relevant market and assess whether the merger would significantly increase concentration and result in a concentrated market; (2) identify the market participants, assign shares, and assess whether increased market concentration from the proposed merger raises concern about potential adverse competitive effects; (3) assess whether entry into the market would be timely, likely, and sufficient either to deter or to counteract the competitive effects of concern; (4) assess any efficiency gains that cannot be reasonably achieved by the parties absent the proposed merger; and (5) determine whether, but for the merger, either party to the transaction would be likely to fail and exit the market. Under the guidelines, market concentration is measured by the Herfindahl-Hirschman Index (HHI), which is calculated by summing the squares of the individual market shares of all market participants; a postmerger HHI below 1,000 indicates an unconcentrated market, and mergers resulting in unconcentrated markets ordinarily require no further analysis. A postmerger HHI between 1,000 and 1,800 indicates moderate concentration. Mergers producing an increase in the HHI of less than 100 points in moderately concentrated markets postmerger are unlikely to have adverse competitive effects and generally require no further analysis. Mergers resulting in an increase in the HHI of more than 100 points in moderately concentrated markets postmerger potentially raise significant competitive concerns. A postmerger HHI above 1,800 indicates a highly concentrated market. Mergers producing an increase in the HHI of less than 50 points even in highly concentrated markets postmerger are unlikely to have adverse competitive consequences and ordinarily require no further analysis. Mergers resulting in an increase in the HHI of more than 50 points in highly concentrated markets postmerger potentially raise significant competitive concerns. When the postmerger HHI exceeds 1,800, the Division assumes that mergers producing an increase in the HHI of more than 100 points are likely to create or enhance market power or facilitate its exercise. However, this assumption may be refuted by showing that other factors in the guidelines make it unlikely that the merger will create or enhance market power or facilitate its exercise, in light of market concentration and market shares. In addition, the agencies may grant “early termination” of the waiting period when requested in writing by one of the merging parties.
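The HHI screening rules above reduce to simple arithmetic on market shares. The sketch below is a hypothetical illustration of that arithmetic, not the Division’s actual analytical tool: it computes pre- and postmerger HHIs and applies the guidelines’ 1,000/1,800 concentration bands and 50/100-point change thresholds.

```python
# A minimal sketch of the HHI concentration screen, using the thresholds from
# the 1992 Horizontal Merger Guidelines. Shares are percentages; the postmerger
# HHI is approximated by treating the merging firms as a single firm.

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares."""
    return sum(s ** 2 for s in shares)

def screen_merger(shares, firm_a, firm_b):
    pre = hhi(shares.values())
    merged = {f: s for f, s in shares.items() if f not in (firm_a, firm_b)}
    merged["combined"] = shares[firm_a] + shares[firm_b]
    post = hhi(merged.values())
    delta = post - pre  # equals 2 * share_a * share_b

    if post < 1000:
        return "unconcentrated market; ordinarily no further analysis"
    if post <= 1800:  # moderately concentrated
        return ("potentially significant competitive concerns" if delta > 100
                else "unlikely to have adverse effects; no further analysis")
    # highly concentrated (postmerger HHI above 1,800)
    if delta > 100:
        return "presumed likely to create or enhance market power (rebuttable)"
    return ("potentially significant competitive concerns" if delta > 50
            else "unlikely to have adverse consequences; no further analysis")

# Example: firms C (20 percent) and D (15 percent) merge in a five-firm market.
# Premerger HHI is 2,150; postmerger HHI is 2,750; the increase is 600 points.
market = {"A": 30, "B": 20, "C": 20, "D": 15, "E": 15}
print(screen_merger(market, "C", "D"))
```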
If the attorney, in consultation with the economist, concludes that the information reviewed raises significant competitive concerns warranting a more thorough review, the attorney is to submit a request to open a PI. The PI request is to be reviewed by the section chief and, if the section chief concurs, by Operations. While deciding whether to authorize the PI, Operations is to request clearance to proceed from FTC. If clearance is granted to the Division, the Director of Merger Enforcement in Operations decides whether to authorize the PI. According to Division officials, Operations generally approves merger PI requests. If the PI is authorized, the attorney and economist are to investigate the proposed merger during the initial waiting period, generally using voluntary procedures. They are to determine whether the proposed transaction raises issues substantial enough to warrant the issuance of a second request. At this point, the merging parties often begin to meet with Division staff to discuss any problems the Division has with the proposal and to provide their own analysis of the transaction. If the attorney concludes, in consultation with the economist, that there are no significant competitive concerns, then the attorney is to submit a memorandum to the section chief recommending that the investigation be closed. If the section chief concurs, the memorandum is to be sent to Operations, where the Director of Merger Enforcement makes the final decision on whether to close the investigation. If the attorney concludes, in consultation with the economist, that there are significant competitive concerns requiring additional information to be obtained from the parties to enable a more thorough review, the attorney is to submit a proposed “Second Request” for that information, to be issued to the merging parties before the waiting period expires. The proposed second request is to be reviewed by the section chief and, if the section chief concurs, by Operations, where the Director of Merger Enforcement makes the final decision whether to approve the second request. According to Division officials, if the second request is not approved, the investigation typically is closed. If the second request is approved, it is sent to the merging parties. For the period we reviewed, this extended the waiting period until 20 days (10 days for cash tender offers and bankruptcy sales) after the merging parties substantially comply with the second request. In order for the extended waiting period to end, the merging parties must substantially comply with the second request. The attorney also may apply for authorization to issue CIDs to any person. According to Division officials, the attorney and merging parties will likely meet to discuss the competitive concerns as well as any possible modifications to the second request. When the merging parties have supplied the information requested, they are to certify substantial compliance with the second request. If the attorney does not agree that the parties are in substantial compliance, the attorney is to notify the merging parties of the areas of noncompliance and submit a proposed deficiency letter to the section chief. If the section chief concurs, the letter is to be sent to the merging parties, and the waiting period is further extended.
If the attorney agrees that there has been substantial compliance with the second request, or if the section chief does not concur in sending the deficiency letter, the waiting period is not further extended and comes to an end within the prescribed number of days after certification of substantial compliance. If the attorney concludes in consultation with the economist that the merger is not likely to substantially lessen competition in violation of the antitrust laws, the attorney is to submit a memorandum to the section chief recommending that the investigation be closed. If the section chief concurs, the memorandum is to be sent to Operations, where the Director of Merger Enforcement decides whether to close the investigation. If the Director of Merger Enforcement approves closing the investigation, the Division typically requests that FTC grant early termination of the extended waiting period. If the attorney concludes in consultation with the economist that the merger is likely to substantially lessen competition in violation of the antitrust laws, the attorney is to inform the section chief. If the section chief concurs, Operations is to consider the matter in consultation with the attorney, the economist, the section chief, and the relevant DAAGs. If the section chief does not concur with the attorney’s conclusion, the investigation is typically closed or, occasionally, sent back to the attorney for further work. If the Director of Merger Enforcement concurs with the attorney’s conclusion, the attorney is to notify the merging parties of the specific competitive concerns and that staff intends to recommend challenging the merger in court if the parties proceed with the merger as proposed. If the Director does not concur, the investigation is closed or, occasionally, sent back to the section for further work. If the Division determines that the proposed merger is anticompetitive, the merging parties may offer to divest assets in a manner that resolves the competitive concerns. If they do so, the Division files a complaint and proposed consent decree with the court that binds the parties to the arrangement. Alternatively, the merging parties may elect to “fix it first” by divesting the assets of competitive concern before the merger takes place, or to “restructure” the merger by forgoing acquisition of those specific assets in the first place. If the merging parties do not offer to divest assets in a manner that resolves the competitive concerns, the attorney is to submit a case recommendation package to the section chief. The package is to include the recommendation memorandum, an order of proof with the key documents and other evidentiary support, a draft complaint, and draft motions for temporary restraining order and preliminary injunction.
The case recommendation memorandum is to include, among other things: the date by which the Division must file any temporary restraining order or preliminary injunction papers, and any other dates that bear on timing; a brief description of the transaction, including the identity of the merging parties, the form of the transaction, and the consideration; a brief description of the proposed suit, including proposed defendants, the statutes under which the merger is to be challenged, the proposed judicial district, and the relief sought; a general description of the impact of the transaction, including the relevant product and geographic markets, volume of commerce, market shares, and HHIs; a brief description of the basic theory of competitive harm; a short discussion of the weaknesses of the case; any settlement possibilities; and an explanation of why litigation is worth the expenditure of the necessary Division resources. The economist also provides a memorandum on the key economic issues. These memorandums and accompanying materials are to be forwarded to Operations and to the relevant DAAG. After conferring, Operations and the DAAG make a recommendation to the AAG. At this point, the merging parties may request meetings with the Director of Enforcement, the DAAG, and the AAG to discuss the merger. During this period, the merging parties may still offer to divest assets to resolve the competitive concerns. If the parties do not offer to divest assets in a manner that resolves the competitive concerns, and if the AAG approves taking enforcement action, the Division files a complaint and, usually, motions for a preliminary injunction and a temporary restraining order with the court. The preliminary injunction prevents consummation of the transaction before the court can determine its legality. If the AAG does not approve taking enforcement action, the investigation is closed. Figure 2 shows the general process the Division follows for HSR merger enforcement actions. When alleged conduct would not be appropriate for criminal prosecution but might be found to be anticompetitive, the Division may initiate a civil investigation. Civil investigations differ from criminal investigations in the involvement of the Division’s economists and in the manner of the Division’s interaction with the parties under investigation. The Division generally follows the same procedures for reviewing civil non-HSR mergers and HSR mergers, except that non-HSR merger investigations are not subject to statutorily prescribed waiting periods. The Division’s procedures for reviewing non-HSR merger matters and civil nonmerger matters are also very similar. Civil nonmerger matters involve the investigation and civil prosecution of a variety of conduct under Sections 1 and 2 of the Sherman Act. According to the Division, such conduct may constitute an illegal restraint of trade or unlawful monopolization or attempted monopolization. Examples of conduct that may raise competitive issues include strategic alliances between companies, joint ventures among suppliers, and misuse of intellectual property rights. According to the Division manual, considerations in civil matters include legal theory, relevant economic learning, the strength of the likely defense, any policy implications, and the potential doctrinal significance of the matter.
The investigative process begins when the Division becomes aware of potentially anticompetitive conduct, or a potentially anticompetitive merger not subject to HSR reporting, and it refers the matter to the appropriate section for handling. Attorney(s) and economist(s) are assigned to assess the conduct or merger using public sources of information. If the attorney, in consultation with the economist, concludes that the conduct or merger raises significant competitive concerns warranting a more thorough review, the attorney is to submit a request to open a PI. Otherwise, the attorney recommends that no action be taken; and, if the section chief concurs, no investigation is opened. The PI request is to be reviewed by the section chief and, if the section chief concurs, by Operations. Before deciding whether to authorize the PI, Operations is to request clearance to proceed from FTC. If clearance is granted to the Division, the Director of Enforcement decides whether to authorize the PI. According to Division officials, civil PI requests are generally authorized. If the PI is authorized, the attorney and the economist are to investigate the conduct or merger, generally using voluntary procedures, such as interviews and voluntary requests for documents. The attorney may also apply for authorization from Operations to issue CIDs to parties subject to the investigation and to third parties who may have relevant information. According to the Division manual, a decision to issue CIDs generally involves a significant expansion in resources committed by the Division and should be made only after serious consideration and a thoughtful reassessment of the matter’s potential significance. If the attorney and economist recommend closing the investigation and if the section chief and Director of Enforcement concur, the investigation is closed. The closing recommendation is to include a description of the conduct or market involved in a violation, an analysis of competitive issues, a development of the facts and law, and recommendations. The Director of Enforcement may send the matter back for further work. If the attorney, in consultation with the economist, concludes that the conduct or merger can be proven to violate the antitrust laws, the attorney is to submit a case recommendation to the Director of Enforcement. The case recommendation is to include the following information: a brief description of what the prospective case is fundamentally about; a conceptual discussion of the case and why it is an important one for the Division to bring, including the theory and statute(s) on which the case would be based; theories investigated but not recommended to be pursued; and the justifications or defenses likely to be raised by the prospective defendants; an assessment of whether the case is winnable at trial, including a short order of proof (which will typically be attached to the case recommendation as a separate document), a summary of the relative strengths and weaknesses of the evidence supporting the case, and a summary of likely defense evidence and arguments; and a discussion of potential settlement options. The case recommendation is to be reviewed by the section chief, and if the section chief concurs, the recommendation is reviewed by Operations, in consultation with the attorney, the economist, the section chief, the economic chief, and often the relevant DAAGs. The economist is also to submit a case recommendation memorandum on the key economic issues. 
If the Director of Enforcement does not concur, the investigation is closed or, occasionally, sent back to the section for further work. If the Director of Enforcement concurs in the attorney’s conclusion, the attorney is to notify the parties of the Division’s competitive concerns and that staff intends to recommend challenging the conduct or merger in court. Operations is to forward the case recommendation to the relevant DAAGs and to the AAG. The AAG is to review the case recommendation and consider whether to bring an enforcement action. At this point, the parties may request meetings with the DAAG and with the AAG to discuss the matter. The parties may offer to cease the conduct giving rise to the competitive concerns and take other action as necessary to resolve those concerns, or to restructure the merger to resolve the competitive concerns. If the offer satisfactorily resolves the competitive concerns, the attorney is to file a complaint and proposed consent decree in court. If the parties do not satisfactorily offer to resolve the competitive concerns, and if the AAG approves taking enforcement action, he or she is to sign the pleadings and other documents. The Division then files a case in court and begins litigation. If the AAG does not approve taking enforcement action, the investigation is closed. Figure 3 shows the general process the Division follows for civil nonmerger and non-HSR merger enforcement actions. When the Division becomes aware of a possible criminal antitrust violation, it assigns the matter to the appropriate section or office to handle the review. The attorney reviews the information to decide whether to seek authority to open a criminal investigation. If the attorney concludes that there is not sufficient evidence to warrant opening a criminal PI, and if the section or office chief concurs, the Division would not open a criminal investigation, but it might open a civil investigation. If the attorney concludes that there is not sufficient evidence to warrant opening a grand jury investigation, but there is sufficient evidence to warrant opening a preliminary inquiry, the attorney is to submit a PI request to the section or office chief. (If the attorney concludes that there is already sufficient evidence to warrant opening a grand jury investigation, the attorney may bypass the criminal PI and proceed directly to submit a request for grand jury authority to the section or office chief.) If the section or office chief concurs in the PI request, the request is to be sent to the Office of Criminal Enforcement. If the section or office chief does not concur in the PI request, a criminal investigation is not opened, but a civil investigation might be opened. If the Director of Criminal Enforcement approves the criminal PI request, the Office of Criminal Enforcement is to request clearance from FTC through the clearance process; and, once clearance is obtained, the attorney is to investigate the potential violation, using voluntary procedures. If the Director of Criminal Enforcement does not approve the criminal PI request, the Division would not open a criminal investigation, but it might open a civil investigation. If the attorney concludes that the PI does not reveal sufficient evidence to warrant opening a grand jury investigation, the attorney is to submit a memorandum to the section or office chief recommending that the investigation be closed.
If the section or office chief concurs with the closing memorandum, it is to be sent to the Director of Criminal Enforcement, who will close the criminal investigation or, occasionally, send it back to the section or office to develop more evidence. If the criminal investigation is closed, a civil investigation might be opened. If the attorney concludes that the PI reveals sufficient evidence to warrant opening a grand jury investigation (or if the PI phase is bypassed and a grand jury is requested at the outset), the attorney is to submit a memorandum to the section or office chief outlining the evidence and requesting grand jury authority. To the extent possible, the request for grand jury authority is to identify the companies, individuals, industry, or commodity or service involved; estimate the amount of commerce involved on an annual basis; identify the geographic area affected and the judicial district in which the investigation will be conducted; describe the suspected violations, including non-antitrust violations, and summarize the supporting evidence; evaluate the significance of the possible violation from an antitrust enforcement standpoint; explain any unusual issues or potential difficulties the staff has identified; identify the attorneys who will be assigned to the investigation; explain initial steps in the staff’s proposed investigative plan; and estimate the duration of the investigation. If the request is approved by the section or office chief, the Special Assistant is to prepare a memorandum for the Director of Criminal Enforcement, who makes a recommendation to the Division’s AAG, with a copy to the DAAG. If the AAG does not approve the request, the criminal investigation is closed or, sometimes, sent back to the section or office to develop more evidence. If the criminal investigation is closed, a civil investigation might be opened. If the grand jury request is approved by the AAG, the Office of Criminal Enforcement is to obtain FTC clearance if it has not already been obtained previously for a PI, or if the scope of the investigation is expanded. Then the attorney is to meet with the local U.S. Attorney’s office, and a grand jury is convened. The grand jury investigation phase involves (1) issuing subpoenas to companies for records, (2) calling witnesses, and (3) presenting evidence on the alleged violation to the grand jury. After completing the grand jury investigation, the attorneys are to recommend either closing the investigation; proceeding with a criminal case and prosecuting the defendants; or, occasionally, continuing the investigation as a civil matter. If the attorney concludes that the grand jury investigation does not reveal sufficient evidence to warrant filing criminal charges, the attorney is to submit a memorandum to the section or office chief recommending that the investigation be closed. If the section or office chief concurs with the closing memorandum, it is sent to the Director of Criminal Enforcement, who will either authorize closing the investigation or, occasionally, send it back to the section or office for further grand jury investigation. If the attorney concludes that the grand jury investigation reveals sufficient evidence to warrant filing criminal charges, and if the section or office chief concurs, the attorney and section or office chief are to submit a memorandum to the Office of Criminal Enforcement recommending criminal action and providing the factual and legal bases of their investigation.
The memorandum is to include the following information: a summary of the offense; a list and description of the proposed defendants; a summary of the evidence establishing the offense and a summary of the evidence against each proposed defendant; the names of the persons and companies that were potential targets of the investigations but are not being recommended for indictment; a detailed analysis of the weaknesses of the case, and any anticipated defenses, with appropriate staff responses; and a list of the defense counsel for the proposed defendants, a description of the arguments made to staff, and staff responses to the arguments. The Director of Criminal Enforcement is to analyze all the related documents, assess the merits of the case, and recommend what action, if any, to bring against the proposed defendant(s). The documents are then to be reviewed by the Division’s criminal DAAG. According to the Division manual, staff will ordinarily inform defense counsel that staff is seriously considering recommending indictment and give counsel an opportunity to present their views to the staff and section or office chief, or to the Director of Enforcement or the DAAG, before the indictment recommendation is forwarded to the AAG. At any time, the party may agree to plead guilty. Defense counsel do not have an absolute right to be heard by the Director of Criminal Enforcement or the criminal DAAG, although they routinely are heard. If the DAAG (or the Director of Criminal Enforcement) does not concur in the recommendation to proceed with criminal prosecution, the criminal investigation is closed or, occasionally, sent back to the section or office to develop more evidence. If the criminal investigation is closed, a civil investigation occasionally might be opened. If the DAAG (or the Director of Criminal Enforcement) concurs in the recommendation to proceed with criminal prosecution, the DAAG (or the Director of Criminal Enforcement) is to forward a recommendation to the Division’s AAG. The Division’s AAG is to decide whether to bring a legal action or decline prosecution. If the AAG does not approve the criminal prosecution, the criminal investigation is closed or, occasionally, sent back to the section or office to develop more evidence. Occasionally, if the criminal investigation is closed, a civil investigation might be opened. If the AAG approves criminal prosecution and the party has not agreed to plead guilty, the Division presents the recommended indictment to the grand jury. If the grand jury returns an indictment, the Division begins criminal proceedings in court. If the grand jury does not indict, the investigation is closed. If the party has agreed to plead guilty, staff are to prepare a memorandum recommending filing an information and entering a plea agreement with a sentence recommendation. The memorandum is to be forwarded to the criminal DAAG through the Director of Criminal Enforcement if it is the first case to arise from an investigation, or to the Director of Criminal Enforcement if it is not the first case.
The memorandum is to include the following information: a brief description of the proposed charges; a description of the illegal conduct and an analysis of the available evidence demonstrating the existence of that conduct; a brief description of the elements of the proposed plea agreement, with a more detailed explanation of any unusual provisions, and an analysis of the potential criminal penalty pursuant to the United States Sentencing Commission’s Federal Sentencing Guidelines; a description of the potential charges faced by the proposed defendant, had the case proceeded to indictment; an analysis of the benefits and disadvantages of the proposed plea agreement, including the impact of the proposed agreement on any continuing investigation or future trial; and a discussion of relevant victims’ rights issues. The DAAG is to review the memo and forward it to the AAG with a recommendation. If the AAG concurs with the recommendation, the Division files the information, plea agreement, and sentence recommendation with the court. Figure 4 shows the general process the Division follows for criminal enforcement actions. As shown in table 4 of our report, during fiscal years 1997 through 1999, the Division closed a total of 64 agriculture-related matters in the SIC codes we examined after conducting a PI. We reviewed opening and closing memorandums for all 64 matters to determine, among other things, (1) the source of these matters, (2) the geographic market and amount of commerce affected, (3) the SIC codes for these matters, (4) the number of days the matters were open, and (5) the reasons the matters were closed with no action beyond the PI phase. For the 64 matters that were closed following the PI, 47 (73 percent) of the PIs were initiated as a result of HSR filings. Of the remaining matters, seven resulted from complaints received from the public; four were referred from another federal agency, including two from the Federal Bureau of Investigation (FBI), one from USDA, and one from FTC; two resulted from an investigation of a related matter; and two were self-initiated. In addition, there were two matters for which there appeared to be more than one source for the investigation. Table 9 summarizes how the Division became involved in each of the 64 matters. Table 10 shows the distribution of these 64 matters by the geographic market and amount of commerce affected. Forty-three (67 percent) of the matters were regional in scope, and the amount of commerce affected was not shown in the closing memorandums in 16 (25 percent) of the 64 matters. Of the 47 matters in which the amount of commerce was known, 12 (25 percent) involved commerce estimated at between $150 million and $499.9 million. Table 11 shows the breakdown of the primary SIC categories for each of the 64 PIs. As can be seen, food manufacturing is the largest overall category and accounted for 41 of 64 PIs. No other primary industry code had more than 5 PIs. Table 12 shows the number of days the PI was open for each matter classification for matters closed after staff conducted a PI. According to the Division manual, the normal time period required to conduct a PI ranges from a few weeks to a few months. Table 12 shows that about 66 percent of the matters were closed within 3 months. All of the matters that closed within 3 months were HSR merger matters. The PI for each of the criminal matters lasted more than 6 months.
As can be seen in table 13, 58 (91 percent) of these matters were closed because the Division found insufficient evidence of potential antitrust violations. In addition to those named above, Chan My J. Battcher, Cathy Hurley, Jan Montgomery, Tim Outlaw, Anne Rhodes-Kline, Maria Strudwick, and Bonita Vines made key contributions to this report.
This report reviews the Department of Justice's Antitrust Division's overall policies and procedures for carrying out its statutory responsibilities, particularly as they apply to the agriculture industry. GAO describes (1) the Division's interaction with the Federal Trade Commission (FTC) and the Department of Agriculture (USDA) with regard to antitrust matters in the agriculture industry, (2) the number of complaints and leads in the agriculture industry received by the Division for fiscal years 1997 through 1999, and (3) the number and types of closed matters in the agriculture industry for fiscal years 1997 through 1999. GAO also describes the Division's policies and procedures for investigating potential antitrust violations. GAO found that the Division (1) maintains a cooperative working relationship with FTC and USDA with regard to anticompetitive matters in the agriculture industry, (2) received an estimated 165 complaints and leads related to the agriculture industry in fiscal years 1997 through 1999, and (3) closed 1,050 matters during that period.
Our March 2015 report found that FCC has made progress implementing reform efforts contained in the Order. In particular, FCC has implemented eight reforms, including the one-subscription-per-household rule, uniform eligibility criteria, non-usage requirements, payments based on actual claims, and the audit requirements. Furthermore, FCC eliminated Link-up on non-Tribal lands and support for toll limitation service, and the National Lifeline Accountability Database (NLAD) is operational in 46 states and the District of Columbia. In May 2015, FCC reported the results of the broadband pilot program. However, FCC has not fully implemented three reform efforts: Flat-rate reimbursement: To simplify administration of the Lifeline program, FCC established a uniform, interim flat rate of $9.25 per month for non-Tribal subscribers. FCC sought comment on the interim rate, but has not issued a final rule with a permanent reimbursement rate. Initial eligibility verification and annual recertification procedures: To reduce the number of ineligible consumers receiving program benefits, the Order required that Lifeline providers verify an applicant’s eligibility at enrollment and annually through recertification; these requirements have gone into effect. In addition, to reduce the burden on consumers and Lifeline providers, the Order called for an automated means for determining Lifeline eligibility by the end of 2013. FCC has not met this timeframe or revised any timeframes for how or when this automated means would be available. Performance goals and measures: FCC established three outcome-based goals: (1) to ensure the availability of voice service for low-income Americans, (2) to ensure the availability of broadband for low-income Americans, and (3) to minimize the Universal Service Fund contribution burden on consumers and businesses. FCC identified performance measures it will use to evaluate progress towards these goals, but it has not yet fully defined these measures. FCC officials noted they are working on defining them using the Census Bureau’s American Community Survey data, which were made available in late 2014. In our March 2015 report, we found that FCC has not evaluated the effectiveness of the Lifeline program, which could hinder its ability to efficiently achieve program goals. Once adopted, performance measures can help FCC track the Lifeline program’s progress toward its goals. However, performance measures alone will not fully explain the contribution of the Lifeline program toward reaching program goals, because performance measurement does not assess what would have occurred in the absence of the program. According to FCC, Lifeline has been instrumental in narrowing the penetration gap (the difference in the percentage of households with telephone service) between low-income and non-low-income households. In particular, FCC noted that since the inception of Lifeline, the gap between telephone penetration rates for low-income and non-low-income households has narrowed from about 12 percent in 1984 to 4 percent in 2011. Although FCC attributes the penetration rate improvement to Lifeline, several factors could play a role. For example, changes to income levels and prices have increased the affordability of telephone service, and technological improvements, such as mobility of service, have increased the value of telephone service to households.
FCC officials stated that the structure of the program has made it difficult for the commission to determine causal connections between the program and the penetration rate. However, FCC officials noted that two academic studies have assessed the program. These studies suggest that household demand for telephone service—even among low-income households—is relatively insensitive to changes in the price of the service and household income, meaning that many low-income households would choose to subscribe to telephone service even in the absence of the Lifeline subsidy. As a result, we concluded that the Lifeline program, as currently structured, may be a rather inefficient and costly mechanism to increase telephone subscribership among low-income households, because several households receive the subsidy for every additional household that subscribes to telephone service due to the subsidy. FCC officials said that this view does not take into account the Lifeline program’s purpose of making telephone service affordable for low-income households. However, in the Order, the commission did not adopt affordability as one of the program’s performance goals; rather, it adopted availability of voice service for low-income Americans, measured by the penetration rate. These research findings raise questions about the design of Lifeline and FCC’s actions to expand the pool of eligible households. We estimated that approximately 40 million households were eligible for Lifeline in 2012. The Order established minimum Lifeline eligibility, including households with incomes at or below 135 percent of the federal poverty guidelines, which expanded eligibility in some states that had more limited eligibility criteria. Further, FCC proposed adding qualifying programs, such as the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), and increasing income eligibility to 150 percent of the federal poverty guidelines. We estimated that over 2 million additional households would have been eligible for Lifeline in 2012 if WIC were included in the list of qualifying programs. These proposed changes would add households with higher income levels than current Lifeline-eligible households. Given that the telephone penetration rate increases with income, making additional households with higher incomes eligible for Lifeline may increase telephone penetration somewhat, but at a high cost, since a majority of these households likely already purchase telephone service. This raises questions about expanding eligibility and the balance between Lifeline’s goals of increasing penetration rates while minimizing the burden on consumers and businesses that fund the program. In our March 2015 report, we recommended that FCC conduct a program evaluation to determine the extent to which the Lifeline program is efficiently and effectively reaching its performance goals of ensuring the availability of voice service for low-income Americans while minimizing program costs. Our prior work on federal agencies that have used program evaluation for decision making has shown that it can allow agencies to understand whether a program is addressing the problem it is intended to and assess the value or effectiveness of the program.
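The inefficiency point above is essentially arithmetic: if only a fraction of subsidy recipients subscribe because of the subsidy, the program's cost per additional subscriber is a multiple of the subsidy itself. The sketch below illustrates this with entirely hypothetical take-up figures; it is not drawn from the studies discussed above.

```python
# A minimal sketch of the cost-per-additional-subscriber arithmetic.
# The recipient count and induced share are hypothetical assumptions.

monthly_subsidy = 9.25     # Lifeline's interim non-Tribal rate, dollars per month
recipients = 1_000_000     # hypothetical: recipient households in a given year
induced_share = 0.10       # hypothetical: share subscribing only because of the subsidy

annual_outlay = monthly_subsidy * 12 * recipients
additional_subscribers = recipients * induced_share
cost_per_additional = annual_outlay / additional_subscribers

print(f"Annual outlay: ${annual_outlay:,.0f}")                    # $111,000,000
print(f"Additional subscribers: {additional_subscribers:,.0f}")   # 100,000
print(f"Annual cost per additional subscriber: ${cost_per_additional:,.0f}")  # $1,110
```

Under these assumed figures, ten households receive the subsidy for every one that subscribes because of it, so each additional subscriber costs roughly $1,110 per year rather than the $111 face value of the annual subsidy.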
The results of an evaluation could be used to clarify FCC’s and others’ understanding of how the Lifeline program does or does not address the problem of interest—subscription to telephone service among low-income households—and to assist FCC in making changes to improve program design or management. We believe that without such an evaluation, it will be difficult for FCC to determine whether the Lifeline program is increasing the telephone penetration rate among low-income customers, while minimizing the burden on those that contribute to the Universal Service Fund. FCC agreed that it should evaluate the extent to which the Lifeline program is efficiently and effectively reaching its performance goals and said that it would address our recommendation. In our March 2015 report, we also found that FCC’s broadband pilot program includes 14 projects that test an array of options and will generate information that FCC intends to use to decide whether and how to incorporate broadband into Lifeline. According to FCC, the pilot projects are expected to provide high-quality data on how the Lifeline program could be structured to promote broadband adoption by low-income households. FCC noted the diversity of the 14 projects, which differed by geography (e.g., urban, rural, Tribal), types of technologies (e.g., fixed and mobile), and discount amounts. FCC selected projects that were designed as field experiments and offered randomized variation to consumers. For example, one project we reviewed offered customers three different discount levels and a choice of four different broadband speeds, thereby testing 12 different program options. FCC officials said they aimed to test and reveal “causal effects” of variables. FCC officials said this approach would, for example, test how effective a $20 monthly subsidy was relative to a $10 subsidy, which would help FCC evaluate the relative costs and benefits of different subsidy amounts. However, FCC officials noted that there was a lack of FCC or third party oversight of the program, meaning that pilot projects themselves were largely responsible for administration of the program. We found that FCC did not conduct a needs assessment or develop implementation and evaluation plans for the broadband pilot program, as we had previously recommended in October 2010. At that time, we recommended that if FCC conducted a broadband pilot program, it should conduct a needs assessment and develop implementation and evaluation plans, which we noted are critical elements for the proper development of pilot programs. We noted that a needs assessment could provide information on the telecommunications needs of low-income households and the most cost-effective means to meet those needs. Although FCC did not publish a needs assessment, FCC officials said they consulted with stakeholders and reviewed research on low-income broadband adoption when designing the program. Well-developed plans for implementing and evaluating pilot programs include key features such as clear and measurable objectives, clearly articulated methodology, benchmarks to assess success, and detailed evaluation time frames. FCC officials said they did not set out with an evaluation plan because they did not want to prejudge the results by setting benchmark targets ahead of time. FCC officials said they are optimistic that the information gathered from the pilot projects will enable FCC to make recommendations regarding how broadband could be incorporated into Lifeline.
FCC officials noted that the pilot program is one of many factors it will consider when deciding whether and how to incorporate broadband into Lifeline, and to the extent the pilot program had flaws, those flaws will be taken into consideration. Since our report was issued, FCC released a report on the broadband pilot program, which discusses data collected from the 14 projects. We also found that the broadband pilot projects experienced challenges, such as lower-than-anticipated enrollment. The pilot projects enrolled approximately 12 percent of the 74,000 low-income consumers that FCC indicated would receive broadband through the pilot projects. According to FCC’s May 2015 report, 8,634 consumers received service for any period of time during the pilot. FCC officials said that the 74,000-consumer figure was an estimate, not a reliable number, and should not be interpreted as a program goal. FCC officials said they calculated this figure by adding together the enrollment estimates provided by projects, which varied in their methodologies. For example, some projects estimated serving all eligible consumers, while others predicted that only a fraction of those eligible would enroll. FCC officials told us they do not view the pilot’s low enrollment as a problem, as the program sought variation. Due to the low enrollment in the pilot program, a small fraction of the total money FCC authorized for the program was spent. Specifically, FCC officials reported that about $1.7 million of the $13.8 million authorized was disbursed to projects. FCC officials and representatives from the four pilot projects we interviewed noted that a preliminary finding from the pilot was that broadband offered at no or the lowest monthly cost resulted in the highest participation. For example, one project that offered plans at no monthly cost to the consumer reported that, as of October 2013, 100 percent of its 709 enrollees were enrolled in those plans, with no customers enrolled in its plans carrying a $20 monthly fee. This information raises questions about the feasibility of including broadband service in the Lifeline program, since on a nationwide scale, offering broadband service at no monthly cost would require significant resources and may conflict with FCC’s goal to minimize the contribution burden. Chairman Wicker, Ranking Member Schatz, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Michael Clements, Acting Director, Physical Infrastructure Issues at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Antoine Clark and Emily Larson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Through FCC's Lifeline program, companies provide discounts to eligible low-income households for telephone service. Lifeline supports these companies through the Universal Service Fund (USF); in 2014, Lifeline’s disbursements totaled approximately $1.7 billion. Companies generally pass their USF contribution obligation on to their customers, typically in the form of a line item on their telephone bills. In 2012, FCC adopted reforms to improve the program’s internal controls and to explore adding broadband through a pilot program. This testimony summarizes the findings from GAO’s March 2015 report (GAO-15-335) and provides information on (1) the status of Lifeline reform efforts, (2) the extent to which FCC has evaluated the effectiveness of the program, and (3) how FCC plans to evaluate the broadband pilot program. GAO reviewed FCC orders and other relevant documentation; analyzed 2008-2012 Census Bureau data; and interviewed FCC officials, officials at four pilot projects selected based on features such as technology, and officials from 12 Lifeline providers and four states selected based on factors such as disbursements and participation. The Federal Communications Commission (FCC) has made progress implementing reforms to the Lifeline Program (Lifeline), which reduces the cost of telephone service for eligible low-income households. In 2012, FCC adopted a Reform Order with 11 key reforms that aimed to increase accountability and strengthen internal controls, among other things. FCC has made progress implementing eight of the reforms, including the National Lifeline Accountability Database, which provides a mechanism to verify an applicant’s identity and whether the applicant already receives Lifeline service. FCC has partially implemented three of the reforms. For example, FCC established performance goals for the program, but it has not fully defined performance measures. FCC has not evaluated the extent to which Lifeline is efficiently and effectively reaching its performance goals—to ensure the availability of voice service for low-income Americans and minimize the burden on consumers and businesses that fund the program. FCC attributes improvements over the past 30 years in the share of low-income households subscribing to telephone service to Lifeline, but other factors, such as lower prices, may play a role. FCC officials stated that Lifeline's structure makes evaluation difficult, but referred GAO to two academic studies that have evaluated the program. These studies suggest that household demand for telephone service—even among low-income households—is relatively insensitive to changes in the price of service and household income; therefore, several households may receive the Lifeline subsidy for every additional household that subscribes to telephone service due to the subsidy. GAO has found that program evaluation can help agencies understand whether a program is addressing an intended problem. Without a program evaluation, FCC does not know whether Lifeline is effectively ensuring the availability of telephone service for low-income households while minimizing program costs. The usefulness of the broadband pilot program may be limited by FCC’s lack of an evaluation plan and other challenges. The pilot program included 14 projects to test an array of options and provide data on how Lifeline could be structured to promote broadband. Although GAO recommended in 2010 that FCC develop a needs assessment and implementation and evaluation plans for the pilot, FCC did not do so.
A needs assessment, for example, could provide information on the telecommunications needs of low-income households and the most cost-effective means to meet those needs. In addition, the 14 projects enrolled about 12 percent of the 74,000 customers anticipated. FCC officials said they do not view the pilot’s low enrollment as a problem, as the program sought variation. FCC officials noted that the pilot program is one of many factors it will consider when deciding whether and how to incorporate broadband into Lifeline, and to the extent the pilot program had flaws, those flaws will be taken into consideration. In May 2015, FCC released a report which discusses data collected from the pilots. In its March 2015 report, GAO recommended that FCC conduct a program evaluation to determine the extent to which the Lifeline program is efficiently and effectively reaching its performance goals. FCC agreed that it should evaluate the extent to which the program is efficiently and effectively reaching its performance goals and said that it will address GAO's recommendation.
Medicare pays for most DMEPOS through fee schedules based on suppliers’ previous charges to Medicare. The fee schedule payment is generally equal to 80 percent of the lesser of the supplier’s actual charge or the Medicare fee schedule amount for a particular item or service. In general, Medicare beneficiaries are responsible for paying the supplier the remaining 20 percent—the coinsurance. To process all Medicare DMEPOS payment claims, including coverage and payment determinations, CMS contracts with four DME Medicare Administrative Contractors. CMS and its CBP implementation contractor—Palmetto GBA—administer and implement CBP and its bidding rounds. To be eligible to submit bids to furnish CBP-covered DME items in one or more product categories in one or more of the competitive bidding areas, suppliers must first meet several requirements. Specifically, suppliers must have an active National Supplier Clearinghouse (NSC) number that makes them eligible to bill Medicare for DME, have met Medicare enrollment and quality standards, have a surety bond, and be accredited. After the bid window closes, Palmetto GBA reviews bids to determine whether each supplier’s bid submission is complete and compliant with the bidding requirements, and whether the supplier’s financial score meets CMS’s minimum financial standard threshold to be eligible to compete on price. If the bid meets these requirements, it is considered a qualified bid and can then compete on price. Before comparing prices, Palmetto GBA reviews each qualified bid’s estimated capacity projections—the supplier’s anticipated ability to provide the volume of items claimed in the bid in light of the supplier’s historical capacity, expansion plans, and financial score. Palmetto GBA uses several steps to compare prices and identify the winning bids. First, Palmetto GBA reviews the DME bid item prices submitted by suppliers with qualified bids and uses a methodology to calculate what is known as a composite bid to allow for a comparison of prices submitted across bidding suppliers with qualified bids. A composite bid is determined by summing the weighted bid prices for all items in a product category—with each item’s weight calculated using national beneficiary utilization data for that item compared to the other items within that product category. Once the composite price has been calculated, the bids are ordered by the lowest to highest composite bid price in each product category in each competitive bidding area. When the bids have been ordered, Palmetto GBA calculates the cumulative projected capacity of the competing bids—which indicates the capacity that each supplier projects it could furnish throughout an entire competitive bidding area each year. Palmetto GBA begins with the lowest composite price and moves up the ordered list to identify the bid where the suppliers’ cumulative projected capacity meets or exceeds CMS’s estimated beneficiary demand, which is referred to as the pivotal bid. Although many bids can be qualified to compete on price, only those with composite prices that are equal to or less than the pivotal bid are determined to be winning suppliers, based on price, and are used to establish Medicare’s CBP single payment amounts for each item in a product category in a competitive bidding area. Specifically, for each item, the winning bids’ price offers are ordered from lowest to highest and the median bid price offered by these suppliers for that item becomes the single payment amount.
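The composite-bid, pivotal-bid, and single-payment-amount steps described above can be illustrated with a small worked sketch. The item weights, prices, capacities, demand estimate, and supplier names below are hypothetical, and the code is a simplified reading of the process, not CMS's or Palmetto GBA's actual system; it also omits CMS's 20 percent cap on any single supplier's projected capacity, described in the next paragraph.

```python
# A minimal sketch of CBP bid evaluation for one product category in one
# competitive bidding area. All data below are illustrative assumptions.
from statistics import median

# Utilization-based item weights within the product category (sum to 1).
weights = {"hospital_bed": 0.6, "trapeze_bar": 0.4}
demand = 700  # hypothetical estimated annual beneficiary demand for the area

# Each qualified bid: per-item prices and the supplier's projected annual capacity.
bids = [
    {"supplier": "S1", "prices": {"hospital_bed": 90, "trapeze_bar": 40}, "capacity": 300},
    {"supplier": "S2", "prices": {"hospital_bed": 100, "trapeze_bar": 35}, "capacity": 500},
    {"supplier": "S3", "prices": {"hospital_bed": 120, "trapeze_bar": 50}, "capacity": 400},
]

def composite(bid):
    """Weighted sum of the bid's item prices, used to compare bids on price."""
    return sum(weights[item] * price for item, price in bid["prices"].items())

# Order bids from lowest to highest composite price, then walk up the list until
# cumulative projected capacity meets or exceeds estimated demand; the bid at
# which that happens is the pivotal bid, and it and all lower-priced bids win.
ordered = sorted(bids, key=composite)
winners, cumulative = [], 0
for bid in ordered:
    winners.append(bid)
    cumulative += bid["capacity"]
    if cumulative >= demand:
        break  # this is the pivotal bid

# The single payment amount for each item is the median of the winning bids' prices.
for item in weights:
    spa = median(b["prices"][item] for b in winners)
    print(f"{item}: single payment amount = {spa}")
```

With these figures, S1 and S2 win (their combined capacity of 800 meets the demand of 700), and the single payment amounts are the medians of the two winners' prices: 95 for the hospital bed and 37.5 for the trapeze bar.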
To ensure there is a sufficient number of suppliers and to meet its target goal of awarding at least five contracts in each product category in each competitive bidding area, CMS caps the estimated projected capacity of any single supplier at 20 percent of the total projected beneficiary demand for each product category in each competitive bidding area, regardless of the capacity estimated by the supplier in its bid. The CBP single payment amounts are required to be less than or equal to the Medicare fee-for-service payments for the same items. The same DME item may have a different CBP single payment amount in each competitive bidding area. CMS offers the winning suppliers 3-year contracts to furnish items in the product categories and competitive bidding areas in which they won. All contract suppliers that accept the contract offers must maintain their Medicare billing privileges, state licensure, and accreditation throughout the contract period and accept assignment on all DME items under their contracts. CMS is required, under federal law, to conduct another bidding round to select contract suppliers no less often than once every three years. CBP round 1 was conducted in 2007 and 2008 for 10 competitive bidding areas. For the bidding, CMS chose certain DME items in 10 product categories—generally high-cost and high-volume items and services—that were most likely to result in Medicare savings if competitively acquired. The round 1 contract suppliers were announced in May 2008. However, round 1’s bid submission and contract award processes caused concerns about CMS’s CBP implementation. In our November 2009 report, we found problems with the bidding process, including poor timing and lack of clarity in bid submission information: CMS did not provide suppliers with timely and clear bid submission information, used an inadequate electronic bid submission system, and did not have a process to inform bidders of missing financial documentation—42 percent of all submitted bids were disqualified due to incomplete financial documentation. In our report, we recommended that if CMS reviews suppliers’ disqualified bids during the round 1 rebid and future rounds, it should notify all suppliers of any such process, give suppliers equal opportunity for such reviews, and clearly indicate how suppliers can request a review. The enactment of MIPPA stopped CBP round 1 two weeks after it began operating and required CMS to repeat the competition for CBP round 1 in 2009. MIPPA also imposed additional criteria for how CMS should conduct later CBP rounds and expand the CBP to additional areas. In addition, MIPPA required that CMS notify bidding suppliers about any missing financial documentation if the suppliers submitted their documentation within a time period known as the covered document review date. MIPPA also provided for a competitive acquisition ombudsman (CAO) to respond to inquiries and complaints made by DME suppliers and individuals concerning the CBP’s application. The CAO can work with Palmetto GBA and its local offices.
In the CBP round 1 rebid, the product categories were revised to delete the negative pressure wound therapy category—pumps that apply controlled negative or subatmospheric pressure used to treat ulcers or wounds that have not responded to traditional wound treatment methods—and to exclude group 3 complex rehabilitative power wheelchairs (which must meet the highest performance requirements, for example, be able to travel at least 12 miles on a single charge of batteries) from the entire CBP, and San Juan (San Juan-Caguas-Guaynabo, Puerto Rico) was deleted as a competitive bidding area. As a result, round 1 rebid contract suppliers provide DME items and services in nine DME product categories in nine competitive bidding areas. CMS has stated that the CBP round 1 rebid single payment amounts resulted in an average savings of 42 percent in 2011 compared to 2010 for the same items. In January 2012, CMS began CBP's bidding process for round 2. Round 2 will cover 91 metropolitan statistical areas (MSA), and CMS has determined the competitive bidding areas within those MSAs. The 60-day round 2 bid window was open from January 30, 2012, to March 30, 2012. CMS intends to announce the round 2 winning contract suppliers in spring 2013, and for the contracts and single payment amounts to become effective July 1, 2013. Round 2 will operate for 3 years and includes the same product categories as the round 1 rebid except for the addition of the negative pressure wound therapy category, the deletion of the complex power wheelchairs and mail-order diabetic supplies categories, and the expansion of the support surfaces category to all competitive bidding areas. A national mail-order diabetic supplies program competition will be conducted at the same time as round 2 and will require bidding suppliers to demonstrate that their bids cover at least 50 percent, by sales volume, of all types of diabetic testing strips on the market. (See fig. 1 for CBP's legislative history and program implementation time line.) To ensure that small suppliers are considered when selecting contract suppliers, CMS set a target that 30 percent of the qualified suppliers in each product category in each competitive bidding area be small suppliers. CMS defines small suppliers as those that generate gross revenue of $3.5 million or less in annual receipts, including both Medicare and non-Medicare revenue. In cases where the small supplier target is not met, CMS can award additional CBP contracts to small suppliers after it determines the number of suppliers needed to meet or exceed CMS's estimated beneficiary demand. Between 2 and 20 small suppliers may group together as a network to submit a bid as a single entity under CBP and to provide services as a contract network if awarded a CBP contract. The suppliers involved must certify that they cannot independently furnish all the competitively bid items in the product category to beneficiaries throughout the entire competitive bidding area for which the network is submitting a bid.
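The small-supplier provisions above amount to a revenue test and a target-share check. A minimal sketch, with invented revenue figures and with names that are ours rather than CMS's:

```python
SMALL_SUPPLIER_REVENUE_CAP = 3_500_000  # annual gross receipts, Medicare plus non-Medicare
SMALL_SUPPLIER_TARGET = 0.30            # target share per product category per bidding area
NETWORK_SIZE = range(2, 21)             # 2 to 20 small suppliers may bid as one network

def is_small_supplier(annual_gross_revenue: float) -> bool:
    return annual_gross_revenue <= SMALL_SUPPLIER_REVENUE_CAP

def meets_small_supplier_target(supplier_revenues) -> bool:
    # True when at least 30 percent of the suppliers selected for a product
    # category in a competitive bidding area are small; if the target is not
    # met, CMS can award additional contracts to small suppliers.
    small = sum(1 for revenue in supplier_revenues if is_small_supplier(revenue))
    return small / len(supplier_revenues) >= SMALL_SUPPLIER_TARGET

# Illustrative: 2 of 5 selected suppliers are small (40 percent), so the
# target is met and no additional small-supplier awards would be needed.
print(meets_small_supplier_target([1.0e6, 2.0e6, 9.0e6, 4.0e6, 8.0e6]))  # True
```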
Some suppliers not awarded contracts may choose to continue furnishing certain CBP-covered rental items to beneficiaries who were their customers when CBP began on January 1, 2011, and who reside in the competitive bidding areas. These suppliers are referred to as grandfathered suppliers. It is the beneficiaries' choice whether to remain with their grandfathered supplier or to select a CBP contract supplier. Many CBP-covered items that are rented can be grandfathered, including, for example, oxygen and oxygen equipment, capped rental DME—such as hospital beds—and inexpensive and routinely purchased DME for the remaining rental months. Once the relevant rental periods expire or a beneficiary decides to select a contract supplier, the grandfathered supplier can no longer provide the CBP-covered items and services to the beneficiary. Subcontracting allows contract suppliers to work with suppliers that are Medicare-accredited to provide limited services to CBP-covered beneficiaries. A supplier that subcontracts may perform only three services: (1) purchase inventory and fill orders, that is, fabricate or fit items from its own inventory or contract with other companies to purchase items necessary to fill an order; (2) deliver CBP-covered items to beneficiaries; and (3) repair rented equipment. For CBP, subcontracting suppliers may include suppliers that did not bid, that bid and lost, or that won contracts but subcontract with other contract suppliers for a product category not won. The contract suppliers are responsible for billing Medicare for any services that their subcontract suppliers perform, since subcontract suppliers are not eligible to bill Medicare themselves. Contract suppliers are to disclose to CMS each subcontracting agreement and are also responsible for ensuring that their subcontractors are Medicare-accredited for the product categories covered by the subcontracting agreement. Skilled nursing facilities (SNF) and nursing facilities (NF) are the only entities that can bid to win a CBP contract as a CBP specialty supplier. If such a facility wins a specialty supplier contract, the facility can only furnish the CBP enteral nutrients, equipment, and supplies product category to its own residents covered under Medicare Part B. The facilities may also choose to submit bids to win CBP contracts as a regular contract supplier. If they win a regular contract, they may then furnish the CBP-covered items in the product category they have won to beneficiaries throughout their competitive bidding area. A SNF provides residents with restorative services such as physical or speech therapy. A SNF provides a level of care distinguishable from the intensive care furnished by a general hospital and from the custodial or supportive care furnished by nursing homes; it is primarily designed to provide daily services above the level of room and board. To assist beneficiaries in locating a contract supplier in their competitive bidding area, CMS maintains a CBP supplier locator tool on the Medicare website. The supplier locator contains the names of the contract suppliers in each competitive bidding area and the product categories for which they furnish CBP-covered items. The contract suppliers submit information to CMS each quarter on a form that lists the specific items they furnish—including the brand names and equipment models—which CMS uses to update the supplier locator. Beneficiaries with CBP questions—referred to by CMS as inquiries—are directed to call 1-800-MEDICARE. Callers are assisted by CBP customer service representatives (CSR) trained to answer questions about CBP in general and to assist beneficiaries in finding CBP suppliers. Beneficiaries calling from area codes in competitive bidding areas hear a prompt at the beginning of their call, which takes them directly to a CBP CSR. Beneficiaries calling from an area code not in a competitive bidding area can also reach a CBP CSR through a series of prompts.
CSRs use CBP scripts—written responses to commonly asked questions—when initially responding to CBP-related calls. CSRs read a response to the beneficiary either from a script about CBP in general or from a script specific to one of the nine product categories. If the beneficiary's inquiry cannot be addressed by the scripts, the CSR will forward it to an advanced-level CSR trained to research a CBP-related question and respond after completing research on the caller's inquiry. For example, an advanced CSR might work with a beneficiary traveling outside a competitive bidding area to ensure the beneficiary continues to receive necessary DME. CMS defines a CBP complaint as a CBP inquiry that cannot be resolved by any CSR at 1-800-MEDICARE and is sent to another entity for resolution. The CBP-related entities include Palmetto GBA, the CMS regional offices, and the CAO. Palmetto GBA investigates all beneficiary or supplier complaints related to alleged CBP contract violations, supplier or quality standard violations, and CBP and Medicare program violations, including fraud and abuse. CMS's regional offices are the focal point for unresolved calls; for example, the offices may assist when a CSR is unable to help a beneficiary find a contract supplier. The CAO responds to other unresolved CBP questions from both suppliers and individuals. CMS conducts several monitoring activities to determine whether beneficiary access or satisfaction has been affected by the implementation of CBP. CMS monitors outcomes such as hospitalizations, physician visits, and deaths for beneficiaries in competitive bidding areas, because these outcomes may reflect issues with beneficiary access to necessary DME. CMS posts to its website monthly reports on these outcomes in competitive bidding areas and in comparison areas to demonstrate the effects of CBP on health outcomes. CMS also conducted pre- and post-implementation surveys to measure beneficiary satisfaction with CBP. The pre-implementation survey was conducted from June 24 to August 3, 2010, and the post-implementation survey was conducted from August 29 to October 20, 2011. CMS surveyed beneficiaries in the nine CBP competitive bidding areas as well as nine comparison markets, chosen to allow a comparison with competitive bidding areas. CMS may also conduct secret shopping in response to complaints, such as those concerning diabetic testing suppliers. The number of bidding suppliers and the number of contracts awarded in the CBP round 1 rebid were very similar to those in CBP round 1. Improvements were made to the bidding process for the CBP round 1 rebid, and significantly fewer bids were disqualified; nevertheless, many suppliers still had difficulty meeting bid requirements. As in round 1, some suppliers that requested that CMS review their disqualified bids were found to have been incorrectly disqualified and were offered a contract. Nearly the same number of suppliers bid in both CBP round 1 (1,010 suppliers) and the CBP round 1 rebid (1,011 suppliers). About a third of all the suppliers that bid were awarded at least one CBP contract, and CMS generally met its target—that 30 percent of the suppliers awarded a contract for each product category in each competitive bidding area be small—by awarding contracts to 219 small suppliers of the 356 winning suppliers. (See table 1.) The number of bids that were disqualified in the initial bid review and, therefore, not eligible to compete on price was significantly smaller in the CBP round 1 rebid than in CBP round 1.
In the CBP round 1 rebid, about 30 percent of bids submitted were disqualified for one or more reasons (1,854 of 6,215 submitted). Therefore, about 70 percent of all bids submitted were qualified and used to determine the pivotal bid, which was then used to establish single payment amounts for each item that was included in the CBP round 1 rebid. In contrast, in CBP round 1, almost 50 percent of bids submitted were disqualified during the initial bid review (3,143 of 6,374 submitted), and only about half of all bids submitted were qualified to compete on price. About 20 percent of bids submitted in the CBP round 1 rebid resulted in contracts between CMS and suppliers (1,217 out of 6,215)—which is comparable to the 22 percent of bids that resulted in contracts between CMS and suppliers in CBP round 1 (1,372 out of 6,374). (See table 2 for round 1 rebid results.) CMS made initial contract offers for the CBP round 1 rebid within the 3-month period between July 1, 2010, and September 24, 2010, and announced the winning contract suppliers on November 3, 2010. Although CBP round 1 rebid contracts began on January 1, 2011, CMS made additional contract offers between December 17, 2010, and January 24, 2011. Fewer bids were disqualified in the CBP round 1 rebid, and CMS provided additional feedback to suppliers that had bids disqualified, indicating to suppliers all the reasons for disqualification. While CMS improved the CBP bidding process, many suppliers still had difficulty complying with bid submission requirements, and had particular difficulty with financial documentation requirements. Although the majority of suppliers with disqualified bids that contacted CMS with questions were found to have been correctly disqualified, some suppliers were later found to have incorrectly disqualified bids and were offered contracts. The share of bids disqualified during the initial bid review was about 20 percentage points lower in the CBP round 1 rebid than in round 1. The number of bids disqualified in the CBP round 1 rebid would have been higher if many suppliers had not benefited from a new process giving suppliers the opportunity to be notified of and submit missing required financial documentation—a process that was not available during CBP round 1. (The term distinct is used below to indicate a supplier that is not being double-counted. For example, if a supplier had multiple bids disqualified because the bids did not meet licensure requirements, the supplier is only counted one time for having bids disqualified for that reason. If a supplier had a bid disqualified for two or more reasons, the supplier would be counted as one distinct supplier for each reason that its bids were disqualified.) Although fewer bids were disqualified in the CBP round 1 rebid, many suppliers had difficulty meeting the bidding requirements. As occurred in CBP round 1—in which 88 percent of disqualified bids were disqualified because they failed to provide the required financial documentation or did not meet CMS's minimum financial standard threshold for suppliers, either for those reasons alone or in addition to another bid submission deficiency—the majority of CBP round 1 rebid bids (73 percent) that were disqualified on initial bid review were disqualified for the same reasons. Specifically, 44 distinct suppliers (about 4 percent of all bidding suppliers) had 293 bids disqualified because the bidding suppliers did not meet CMS's minimum supplier financial standards.
In CMS's judgment, bidding suppliers that did not meet minimum financial standards would be unlikely, for financial reasons, to be able to fulfill their contract obligations. In addition, 162 distinct suppliers (about 16 percent of all bidding suppliers) submitted 834 bids that were disqualified because of unacceptable or inaccurate financial documentation, while 51 distinct suppliers (about 5 percent of all bidding suppliers) submitted 216 bids with missing financial documentation. The number of CBP round 1 rebid bids disqualified for missing financial documentation would have been higher without CMS's implementation of the MIPPA provision for financial document review. Under this provision, CMS is required to determine whether any financial documents submitted by a certain time in a CBP bid window—known as the covered document review date—are missing, and to notify suppliers and provide them the opportunity to submit the missing documents. In the CBP round 1 rebid, 791 suppliers—or 78 percent of all bidding suppliers—submitted their financial documentation by the covered document review date. Of those eligible to have their financial documentation reviewed, 321 suppliers (about 41 percent) were notified that they had missing documentation—including 184 small suppliers. Of the 321 suppliers that were notified, 232 submitted the correct missing documentation, 14 did not provide missing documentation, and 75 resubmitted their documentation but were ultimately disqualified for unacceptable (such as incomplete or inaccurate) documents. Ninety-three of the 321 suppliers that were notified by CMS that they had missing financial documentation and subsequently provided correct documentation—about 29 percent—were ultimately awarded one or more CBP contracts. In both CBP round 1 and the CBP round 1 rebid, the statement of cash flow was the most common reason that suppliers were disqualified for missing or unacceptable financial documentation. Although CMS provided an example of a statement of cash flow in the CBP round 1 rebid bidding instructions and suggested that financial statements be compiled by an independent accounting firm or prepared by the supplier, Palmetto GBA reported that many bidding suppliers clearly still did not understand what constituted an acceptable statement of cash flow during the CBP round 1 rebid bid submission process. According to CMS, one reason that the statement of cash flow was the most difficult financial document to prepare in both CBP rounds is that it is prepared much less often than other types of financial documents—particularly by small suppliers. CMS reported that another obstacle in preparing acceptable statements of cash flow was that suppliers with very limited understanding of accounting practices and how to prepare financial statements compiled the statements of cash flow themselves and relied on results generated by inexpensive accounting software, which CMS told us was not sufficient. As a result, CMS provided additional information in its CBP round 2 bidding instructions and strongly recommended that suppliers' financial statements be compiled by an independent accounting firm, to discourage suppliers from preparing their own financial documents. In the CBP round 1 rebid, as in CBP round 1, CMS determined that some suppliers' bids had been incorrectly disqualified. During CBP round 1, we reported that CMS did not effectively communicate to suppliers that they had an opportunity to have their round 1 bids reviewed.
CMS officials told us that they conducted a postbidding review process for suppliers that contacted the agency with questions or requested a review, and that 10 of the 357 round 1 suppliers that had bids reviewed were found to have been incorrectly disqualified. After reviewing the language that CMS provided to suppliers during CBP round 1, we determined that it did not effectively convey that disqualified round 1 bids could be reviewed. As a result, in 2009, we recommended, and CMS agreed, that if CMS chose to conduct a review of disqualification decisions during the CBP round 1 rebid and future bids, CMS should notify all suppliers and give them an equal opportunity for review, and clearly indicate how suppliers can request a review. Although the notification that CMS provided to suppliers during the CBP round 1 rebid provided more information than was provided during CBP round 1, CMS did not inform suppliers that a review of a disqualified bid could result in reversal of the disqualification and extension of a contract offer. (See fig. 2.) In both CBP rounds, CMS determined that some suppliers that contacted Palmetto GBA and requested a review of their bids had been incorrectly disqualified. (See table 4.) CMS told us it received bid inquiries from 99 suppliers that had bids disqualified in the CBP round 1 rebid and subsequently extended contracts to 7 of those suppliers—about 7 percent. In CBP round 1, 10 suppliers—or 3 percent of the 357 suppliers that contacted Palmetto GBA—were found to have bids that were incorrectly disqualified. Suppliers' bids could have been incorrectly disqualified for various reasons: for example, because of issues regarding financial documentation, because the suppliers were thought not to have the required license in the state or product category in which bids were submitted, or because the bids were deemed not bona fide. Both contract and non-contract suppliers have been affected by the first year of the CBP round 1 rebid. In the first months of 2011, few CBP contract suppliers had their contracts terminated by CMS, voluntarily canceled their contracts, or were involved in ownership changes. Since the CBP round 1 rebid began, many non-contract suppliers have chosen to be grandfathered suppliers for certain CBP rental DME. Some contract and non-contract suppliers have entered into subcontracting agreements to provide certain services to beneficiaries in CBP competitive bidding areas. Some suppliers with no previous experience with a DME product category, or with no location in a competitive bidding area, were awarded contracts, as CBP allows. During the first 10 months of 2011, 16 of the original 356 contract suppliers—4 percent—left CBP, either because CMS terminated their contracts (8) or because they voluntarily canceled their CBP contracts (8). (See table 5.) The 16 contract suppliers had 28 affected CBP contracts—about 2 percent of the 1,217 original CBP contracts. Thirteen of the 16 contract suppliers were small suppliers. CBP contracts can be terminated by CMS when a contract supplier fails to meet CBP requirements, for example, when Medicare accreditation is not maintained. A contract supplier can end its CBP contracts by voluntarily withdrawing from Medicare.
For example, one contract supplier testified at a CMS CBP Program Advisory and Oversight Committee (PAOC) meeting that it had bid on numerous product categories but won only one contract and would be closing its business due to lost revenue; the supplier withdrew from Medicare in May 2011. Twenty-one of the 28 contracts involved two competitive bidding areas—Miami (15) and Riverside (6). Eighteen contracts involved two product categories—oxygen (10) and standard power wheelchairs (8). The Riverside competitive bidding area had six of the eight standard power wheelchair contracts that were ended. Eight DME supplier ownership changes—about 2 percent of the original 356 contract suppliers—occurred from November 3, 2010, when the winning contract suppliers were first announced, through November 30, 2011. While contract suppliers can be sold, their CBP contracts cannot. If a contract supplier's ownership changes, CMS decides whether the CBP contract can be assumed by the new purchasing supplier—which can be another contract supplier or a non-contract supplier—by determining if the purchasing supplier meets the CBP contract supplier standards. In all of these changes, CMS determined that the new owners would assume the CBP contracts involved. (See table 6.) The CBP provides a grandfathering option that temporarily benefits some non-contract suppliers while also temporarily disadvantaging some contract suppliers. For non-contract suppliers, grandfathering allows them to retain Medicare revenues for some CBP-covered capped rental DME items for the length of the items' rental periods, if the beneficiary involved chooses to remain with the grandfathered supplier until the rental period expires. For contract suppliers that won CBP contracts for the same capped rental DME items, grandfathering may be a temporary disadvantage, limiting both the number of Medicare beneficiaries they can serve and the amount of Medicare revenue they can gain immediately. Unless a beneficiary served by a grandfathered supplier decides to choose a contract supplier, contract suppliers cannot furnish items to that beneficiary and thus cannot increase their CBP Medicare revenue as quickly as they may have anticipated. The degree of grandfathering varies among the allowed product categories and competitive bidding areas. In the first 11 months of 2011, the top three grandfathered product categories—both by the number of beneficiaries renting items and by the allowed Medicare payments to grandfathered suppliers—were CPAP/RAD (continuous positive airway pressure devices and respiratory assist devices), hospital beds, and oxygen. (See table 7.) The number of grandfathered suppliers declined steadily during 2011 as rental periods expired or beneficiaries chose contract suppliers. In January 2011, when the CBP round 1 rebid began, there were 1,364 grandfathered suppliers, or 58 percent of the 2,363 suppliers that billed for beneficiaries they had been serving as of December 31, 2010. In comparison, in December 2011, there were 575 grandfathered suppliers, or 22 percent of the 2,594 suppliers that billed for beneficiaries they had been serving as of December 31, 2010. (See fig. 3.) At the end of July 2011, about 31 percent of contract suppliers had subcontracting agreements. There were 112 distinct contract suppliers that had at least one subcontracting agreement with one of 211 distinct subcontractor suppliers. Four contract suppliers had terminated some of their subcontracts, and three contract suppliers had subcontracting agreements pending CMS approval.
Some contract suppliers that were new to the competitive bidding area where they won, or were new to a product category they won, have subcontracting agreements with non-contract suppliers. Among the 44 distinct contract suppliers that did not have a previous business location in the competitive bidding area where they won at least one contract, 30 percent (13 suppliers) had at least one subcontracting agreement with a non-contract supplier. Of the 43 distinct contract suppliers that were new to a product category, 37 percent (16 suppliers) had at least one subcontracting agreement. Although CMS requires contract suppliers to notify it of their subcontracting agreements, contract suppliers do not have to provide CMS with copies of their subcontracting agreements or report what they pay their subcontract suppliers. Contract suppliers are free to negotiate their own subcontracting agreements, as CMS does not have subcontracting guidelines or an agreement template. For example, one subcontract supplier told us it negotiated with a contract supplier a flat rate of $60 for hospital bed deliveries, and another subcontract supplier had a $75 rate; one also negotiated a $20 delivery fee for walkers. Two subcontract suppliers told us that they have a 30-day termination notice provision in their agreements with contract suppliers. As allowed under CBP, CMS awarded round 1 rebid contracts to some suppliers that, at the time they bid, had no previous experience in at least one product category, were new to at least one competitive bidding area—that is, did not have a prior business location in the area—or both. There were 43 distinct contract suppliers new to a product category and 44 new to a competitive bidding area—each about 12 percent of the 356 original contract suppliers awarded contracts. Nine distinct contract suppliers were new to both a product category and a competitive bidding area; four of these were small suppliers. Of the 43 distinct contract suppliers with no previous experience in a product category they won, 23 are small suppliers. The enteral nutrition product category had the most contract suppliers new to a product category—19; the complex power wheelchairs product category had none. (See table 8.) Additionally, 44 distinct contract suppliers were new to a competitive bidding area where they won at least one contract; 18 were small suppliers. While all of the competitive bidding areas had suppliers new to the area, the Cleveland competitive bidding area had the most (21), and the Miami area had the fewest (3). (See table 9.) CMS's monitoring efforts showed declining inquiries and complaints over the first year of CBP implementation, high levels of beneficiary satisfaction, and no changes in health outcomes. Although some of these efforts have limitations, in the aggregate they provide useful information to CMS regarding beneficiary access and satisfaction. Information collected from CMS's monitoring of inquiries to 1-800-MEDICARE suggests that CBP has not adversely affected beneficiary access to or satisfaction with DME. Calls to 1-800-MEDICARE regarding CBP declined during the first year of CBP implementation, and 2 percent of calls were from beneficiaries with an urgent need for CBP-covered DME. CBP-related calls comprised a small fraction of all calls to 1-800-MEDICARE. In 2011, CMS classified 127,466 CBP-related calls to 1-800-MEDICARE as inquiries. (See fig. 4.)
The total number of CBP-related inquiries to 1-800-MEDICARE declined from 19,887 in January 2011 to 4,501 in December 2011. In the first 3 months of CBP implementation, most inquiries were about CBP in general. In subsequent months, there were more inquiries about specific CBP-covered products than about CBP generally. Over 2 million beneficiaries were involved in the CBP round 1 rebid; the ratio of inquiries to CBP beneficiaries is approximately 1 inquiry for every 16 beneficiaries. Inquiries regarding CBP comprise less than one-half of 1 percent of all inquiries to 1-800-MEDICARE. On average, CBP-related calls to 1-800-MEDICARE comprise nearly 13 percent of all DMEPOS-related calls. The proportion of DME-related 1-800-MEDICARE inquiries pertaining to CBP fell in 2011 from 19 percent in the first quarter to less than 7 percent in the fourth quarter. (See fig. 5.) Inquiries and complaints to 1-800-MEDICARE regarding DMEPOS in general, including CBP-related calls, remained fairly steady from 2010 to 2011. The majority of product-specific inquiries to 1-800-MEDICARE—over 40,000—were about mail-order diabetic supplies. There were approximately 5,000 inquiries regarding standard power wheelchairs, 4,000 regarding CPAP/RAD, and 3,000 regarding walkers. (See fig. 6.) CSRs at 1-800-MEDICARE may respond to beneficiaries with time-sensitive inquiries. In 2011, there were no life-threatening inquiries related to CBP and 2,539 immediate-needs inquiries—about 2 percent of all inquiries. Immediate-needs inquiries are defined as situations in which beneficiaries have less than 2 days of life-sustaining DME, or in which beneficiaries' medical condition will worsen if they are unable to access DME. In the first year of CBP, CMS classified 151 calls as complaints. (See fig. 7.) Seventy-seven percent of these complaints—or 116 complaints—occurred in the first half of 2011. CMS's definitions of inquiry and complaint may not precisely characterize beneficiary calls. According to CMS, all calls are first classified as inquiries and are only classified as complaints when they remain unresolved by CSRs. However, CSRs are able to address most beneficiary inquiries, so the definition of inquiry encompasses the majority of calls to 1-800-MEDICARE. Conversely, inquiries may be recorded as complaints because of their complexity rather than as a reflection of beneficiary dissatisfaction: CMS officials told us CSRs may forward complex inquiries to another entity for response, and these inquiries would be classified as complaints regardless of whether the beneficiary intended to lodge a complaint. CMS has multiple ongoing monitoring efforts to ensure that CBP beneficiaries can access DME and are satisfied with the program. While these tools have limitations, CMS's monitoring of the first year of CBP implementation does not show evidence that beneficiaries have been affected negatively by CBP. Some of these tools—such as the beneficiary satisfaction survey—finished collecting data at the end of 2011. CMS's claims and health outcomes monitoring tool found no changes in health outcomes in competitive bidding areas in 2011, but this method may not fully capture the relationship between access to DME and health outcomes. Other tools—such as secret shopping—are limited in scope, so their data will not provide beneficiary access information on the program as a whole.
The results of CMS’s beneficiary satisfaction survey were generally positive, although the survey had limitations. CMS obtained responses from at least 400 beneficiaries in each of the nine competitive bidding areas, and in each of nine non-CBP comparison markets—areas chosen to closely match the makeup of each of the competitive bidding areas. Responses were collected by telephone in these 18 locations both pre- CBP and post-CBP. The survey collected beneficiary satisfaction ratings on a five-point scale for six topic questions about the beneficiary’s initial interaction with DME suppliers, the training received regarding the DME item, the delivery of the DME item, the quality of service provided by the supplier, the customer service provided by the supplier, and the supplier’s overall complaint handling. Respondents answered these questions with Follow-up questions one of five options from “very poor” to “very good.”were not used to obtain more detailed information. The survey design did not capture responses from beneficiaries living in those locations who may have needed, but did not obtain, DME during the period; that is, if a beneficiary’s access problems resulted in his not receiving DME, that beneficiary would not be included in the survey. The survey’s sampling methodology also did not ensure that all socio- economic groups were represented, so it does not confirm that all beneficiaries within an area had equal access. CMS’s beneficiary satisfaction survey did not reveal systemic beneficiary access or satisfaction problems with CBP. For all six questions in the competitive bidding areas, approximately 67 percent of beneficiaries reported their services as being “very good”. Beneficiaries in competitive bidding areas rated as “good” or “very good” their initial interaction with the DME supplier (89 percent), the training received (86 percent), delivery (91 percent), quality (90 percent), customer service (88 percent), and complaint handling (84 percent). Results within competitive bidding areas show a drop of one to three percentage points on each of the six questions from pre-implementation to post-implementation. Beneficiaries in the comparison markets rated their experiences similarly to those in competitive bidding markets: these beneficiaries rated as “good” or “very good” their initial interaction with the DME supplier (93 percent), the training received (89 percent), delivery (93 percent), quality (93 percent), customer service (91 percent), and complaint handling (88 percent). CMS’s daily monitoring of national Medicare claims data in real time found no changes in health outcomes in competitive bidding areas in 2011, but this method may not fully capture the relationship between access to DME and health outcomes. CMS tracks health outcomes— such as hospitalizations, emergency room visits, physician visits, admissions to skilled nursing facilities, and deaths—for beneficiaries likely to use a CBP-covered product and who have used a CBP-covered product, in both competitive bidding areas and similar comparison areas. CMS reports that, in 2011, the rate of use of hospital services, emergency room visits, physician visits, and skilled nursing facility care for beneficiaries in competitive bidding areas remained consistent with national trends. While these results are reassuring, these measures do not show directly whether beneficiaries received the DME they needed on time, or whether health outcomes were caused by problems accessing CBP-covered DME. 
In the first 6 months of 2011, CMS's online supplier locator tool may not have provided beneficiaries with up-to-date item availability for two reasons. First, when CMS updated its requirements after the second quarter, suppliers were no longer required to list the brands and models they had made available to beneficiaries in the previous quarter; as a result, CMS may not have records of supplies actually furnished, only of the types of supplies that contract suppliers intended to furnish. Second, suppliers we spoke with reported problems submitting the required forms in the first quarter of 2011, which may have caused a delay in updating information on the online supplier locator tool. These suppliers reported that the online submission form was unavailable during the period they were required to submit their first quarter data. They told us they had to submit hard copies of the forms, a time-consuming process that may have caused delays in reporting. CMS data show that fewer distinct beneficiaries in competitive bidding areas received CBP-covered DME items in 2011 than in 2010 for the six product categories that we analyzed. However, we do not assume that the utilization in 2010 was the appropriate level of Medicare utilization, and the decline in the number of beneficiaries served between 2010 and 2011 does not necessarily indicate that beneficiaries did not have access to needed DME. For example, the number of beneficiaries served in 2010 may have been inflated by suppliers billing for unnecessary items, and 2011 claims data may not yet be complete. Data on the utilization of mail-order diabetic testing supplies are limited because some beneficiaries used non-CBP retail suppliers. For the six CBP product categories we analyzed for CBP's first 6 months of 2011, initial Medicare claims data trends generally indicate a decrease in the number of CBP-covered Medicare beneficiaries who were furnished certain CBP-covered items. The decrease is evident when comparing the number of distinct CBP-covered beneficiaries served in 2011 with the number served in 2010, in both the nine competitive bidding areas and non-competitive bidding areas. However, such a decline in the number of beneficiaries served does not necessarily indicate that beneficiaries do not have access to needed DME, as CMS told us the decline in utilization may be the result of several factors: CBP's round 1 rebid competitive bidding areas were selected by CMS in part because they had high utilization, implying that some utilization may have been unnecessary; CBP bidding requirements may have eliminated some suppliers that previously may have been involved in potentially fraudulent Medicare claims billing, which could have inflated pre-CBP utilization; CBP Medicare claims can be more closely monitored for possible fraud because there are fewer suppliers furnishing items; and some suppliers may have increased their Medicare claims submissions prior to the CBP round 1 rebid's start date, which could have inflated 2010 utilization. In addition, because suppliers have up to 1 year from the date of service to submit claims, the 2011 claims data may not yet be complete. For the CPAP/RAD product category, the number of distinct CBP-covered beneficiaries who were furnished these items in the nine CBP competitive bidding areas was smaller in each of the first 6 months of 2011 than in the same months of 2010.
For example, in May 2010, 21,382 beneficiaries residing in the competitive bidding areas were furnished one or more CPAP/RAD product category items, while in May 2011, the number of beneficiaries furnished these items had declined by about 8 percent, to 19,572. In contrast, in non-competitive bidding areas, more beneficiaries were served in each of the first 6 months of 2011 than in the same months of 2010. For example, in May 2010, 308,728 beneficiaries not residing in competitive bidding areas were furnished one or more CPAP/RAD product category items, while in May 2011, the number of beneficiaries furnished these items had risen to 333,746—an increase of about 8 percent. (See fig. 8.) For the enteral product category, fewer beneficiaries were served in both the nine CBP competitive bidding areas and the non-competitive bidding areas in the first 6 months of 2011 than in the same months of 2010. However, for every month between January and June, the number of beneficiaries served in competitive bidding areas showed a larger decrease from 2010 to 2011 than occurred in the same month for non-competitive bidding areas. For example, in May 2010, 5,378 beneficiaries residing in the competitive bidding areas were furnished one or more enteral product category items, while in May 2011, the number of beneficiaries furnished these items had decreased by almost 15 percent, to 4,576. Similarly, in May 2010, 62,298 beneficiaries not residing in competitive bidding areas were furnished one or more enteral product category items, while in May 2011, the number of beneficiaries furnished these items had decreased by about 9 percent, to 56,680. Although both the CBP competitive bidding areas and the non-competitive bidding areas showed a decrease in the number of beneficiaries served in May 2011 compared to May 2010, the decrease in the competitive bidding areas was about 6 percentage points larger than in the non-competitive bidding areas. (See fig. 9.) For the hospital beds product category, the number of distinct CBP-covered beneficiaries who were furnished these items was smaller in each of the first 6 months of 2011 than in the same months of 2010. (See fig. 10.) In non-competitive bidding areas, more beneficiaries were served in the first 3 months of 2011 than in the first 3 months of 2010, but progressively fewer beneficiaries were served in April, May, and June of 2011 than in the same months of 2010. For the oxygen product category, the number of distinct CBP-covered beneficiaries who were furnished these items in the nine CBP competitive bidding areas was smaller in each of the first 6 months of 2011 than in the same months of 2010. (See fig. 11.) Similar to what occurred for the hospital beds category, in non-competitive bidding areas, more beneficiaries were served in the first 3 months of 2011 than in the first 3 months of 2010, but progressively fewer beneficiaries were served in April, May, and June of 2011 than in the same months of 2010.
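The year-over-year comparisons above reduce to simple percent-change arithmetic; this snippet reproduces the CPAP/RAD and enteral nutrition figures cited earlier:

```python
def pct_change(before: int, after: int) -> float:
    return (after - before) / before * 100.0

# CPAP/RAD, May 2010 vs. May 2011 (beneficiary counts from the text).
print(round(pct_change(21_382, 19_572), 1))    # competitive bidding areas: -8.5
print(round(pct_change(308_728, 333_746), 1))  # other areas: 8.1

# Enteral nutrition, May 2010 vs. May 2011.
print(round(pct_change(5_378, 4_576), 1))      # competitive bidding areas: -14.9
print(round(pct_change(62_298, 56_680), 1))    # other areas: -9.0
```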
For the standard power wheelchair product category, the number of distinct CBP-covered beneficiaries who were furnished these items in the nine CBP competitive bidding areas was also smaller in the first 6 months of 2011 than in the same months of 2010. While we included information about changes in utilization of the standard power wheelchair product category in competitive bidding areas, we did not include similar information for non-competitive bidding areas because CMS changed the payment policy for standard power wheelchairs in non-competitive bidding areas only, making comparisons difficult. The payment policy change, effective January 1, 2011, eliminated the option of a lump sum purchase payment for standard power wheelchairs in all non-competitive bidding areas. (See fig. 12.) We also did not include utilization data for the complex wheelchair product category because those data are unreliable due to suppliers' inconsistent use of Medicare claims payment modifiers. For the walkers product category, the number of distinct CBP-covered beneficiaries who were furnished these items in the nine CBP competitive bidding areas was smaller in each of the first 6 months of 2011 than in the same months of 2010. While more beneficiaries were served in non-competitive bidding areas in January 2011 than in January 2010, fewer beneficiaries were served in February through June of 2011 than in the same months of 2010. (See fig. 13.) Although CBP-covered beneficiaries pay less for their diabetic testing supplies if they choose a CBP mail-order contract supplier, CMS has determined that some CBP-covered beneficiaries who had been receiving their supplies by mail order in 2010 switched to non-mail-order sources in 2011. This switching would decrease both CBP's mail-order utilization and its anticipated Medicare savings. CBP's diabetic testing supplies product category is the only category that allows CBP-covered beneficiaries to choose how to receive their supplies—delivered by mail order from a CBP contract supplier or furnished by a non-mail-order retail or storefront supplier. The beneficiary's choice determines whether the CBP-covered supplies are paid at the CBP single payment amounts or at the Medicare fee schedule payments, and whether the beneficiary's coinsurance is based on the lower CBP payment or the higher fee schedule payment. The HHS Office of Inspector General (OIG) is studying the extent to which, and why, beneficiaries switched from mail-order to non-mail-order suppliers between 2010 (the year prior to CBP) and 2011 (the first year of CBP). There are concerns that suppliers may be providing testing supplies by mail order but billing at the non-mail-order fee schedule payments, or may be giving beneficiaries incentives to choose non-mail-order instead of mail-order suppliers. The HHS OIG has stated that either of these activities could affect CBP mail-order utilization and projected CBP Medicare savings. Both CMS and suppliers incurred costs related to CBP. However, CMS estimates that CBP savings to Medicare and to beneficiaries are greater than its costs. CMS told us that it spent nearly $20 million on pre-implementation costs for the CBP round 1 rebid from May 2009 through December 2010. In August 2009, CMS began the suppliers' bidding education campaign, and the round 1 rebid bid window opened on October 21, 2009. The CBP costs incurred during this time included outreach materials for beneficiaries, referral agents, and others; an IT contract; and other implementation costs. (See table 10.) In its 2007 CBP Final Rule, CMS estimated that a bidding supplier spends an average of $2,303.16 to prepare bids for CBP, and in May 2011, CMS officials told us that this estimate had not changed.
Suppliers and supplier organizations told us they incurred varying costs when preparing a bid, including fees for legal and financial services. For example, one supplier hired a new staff member to oversee the bidding process, and some suppliers reported paying for assistance in compiling the required financial documentation. Some suppliers reported paying for additional legal services to prepare their bids. Contract suppliers also incurred expenses for participating in the program. Winning suppliers may incur additional expenses to fulfill their contractual obligations—for example, one supplier told us that it paid up to $1,500 for updates to a software program in order to provide CMS the data required under CBP. Suppliers that subcontract with contract suppliers stated that they also incur expenses, such as the costs involved in negotiating an agreement with the contract supplier. CMS's estimated savings to both the Medicare program and beneficiaries are significantly higher than its costs. In a 2012 report, CMS estimated that the CBP saved Medicare approximately $202.1 million in its first year of implementation, a decrease in expenditures of over 42 percent in the nine competitive bidding areas. This estimate is larger than CMS's 2011 estimate, which did not include possible reductions in claims due to a decline in utilization. According to CMS, most savings come from the oxygen, mail-order diabetic supplies, and standard power wheelchair product categories. CMS also reported that CBP resulted in savings for beneficiaries. HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix II. HHS also provided technical comments, which we incorporated as appropriate. HHS made several general comments. First, HHS noted that the CBP round 1 rebid resulted in savings of more than $200 million in its first year, and that the Department anticipates additional savings of more than $25 billion to the Medicare program between 2013 and 2022 as CBP expands in round 2. Second, HHS commented that we had not fully accounted for the robust nature of CMS's real-time claims monitoring system, which measures the health status of Medicare beneficiaries using DME in both CBP and comparison areas and which, HHS said, indicates that CBP-covered beneficiaries have not been adversely affected by CBP. We revised the report to incorporate more details about the monitoring program, but we believe that our original description of the program was accurate. We concluded that, in the aggregate, CMS's monitoring efforts provide useful information about beneficiary access and satisfaction. Third, HHS stated that we agreed with its view that the CBP round 1 rebid had reduced unnecessary utilization of DME. We noted that the CBP may have successfully reduced unnecessary DME utilization, because utilization has been reduced and CMS has not detected adverse health consequences, but our analysis does not allow us to conclude definitively that unnecessary utilization has been reduced. Moreover, we concluded that more experience with DME competitive bidding is needed to assess the program's full effects. Fourth, HHS suggested that January 2011 was not an appropriate month to use in our examples of utilization changes associated with the round 1 rebid because it was the first month of the rebid. We agree and have changed our examples to May 2011.
Finally, HHS discussed differences between CMS's methods for measuring DME utilization changes associated with the CBP and our methodology, and noted disparities between CMS's results and our findings. HHS noted that CMS monitors all DME claims in real time in both CBP and matched comparison areas, and that its analyses are comparisons between the two types of areas. As HHS noted, we analyzed claims data for DME items accounting for 80 percent of DME costs and utilization, not all items. We compared DME utilization in CBP round 1 rebid areas to the rest of the country, not to specific comparator areas. In addition, we analyzed claims for services provided in the months January through June in 2011 and compared them to the same months in 2010. Because the process of filing and processing Medicare claims can be lengthy, we used data for claims that had been processed by CMS's payment contractors as of February 2012. We believe that our methods are valid. Further, our results are similar to results that CMS reported to us in its technical comments. For example, CMS found that 14 percent fewer beneficiaries had claims for hospital bed product category items in CBP areas in 2011 than in 2010. We found that about 13 percent fewer beneficiaries had claims for these items in May 2011 than in May 2010 in the CBP areas. Although the first year of the CBP round 1 rebid's contracts has been completed, it is important to continue to closely monitor the CBP as the program expands into 91 additional areas in round 2. Our findings are based on the limited evidence available at the time we did our work. It is too soon to determine the full effects the CBP may have on Medicare beneficiaries and DME suppliers. We found that, in general, the round 1 rebid was successfully implemented. Nearly the same number of suppliers participated in the round 1 rebid as in CBP round 1. Few contract suppliers left Medicare during CBP's first year. CMS's beneficiary satisfaction survey and other monitoring activities, although limited, do not show evidence that beneficiaries have been affected negatively by CBP. Utilization of selected DME items declined in the round 1 rebid competitive bidding areas; however, we do not assume that all pre-CBP utilization was appropriate, and CBP may have reduced unnecessary utilization of DME, particularly because CMS chose to implement the CBP round 1 rebid in areas with what it suspected were relatively high levels of unnecessary utilization. More experience with DME competitive bidding is needed, particularly to see whether evidence of beneficiary access problems emerges. In the program's first year, the prevalence of grandfathered suppliers for rental items may have ameliorated beneficiary access concerns. The number of grandfathered suppliers will continue to decrease as rental periods expire. Further, it is not known whether the number of subcontracting suppliers will remain consistent or whether any change in subcontracting may affect beneficiary access to DME. While few contract suppliers voluntarily withdrew from CBP or were terminated by CMS in the first contract year, an increase in either outcome throughout the remaining contract period could have implications for beneficiary access and the CBP itself. Additionally, it will be important to determine whether DME utilization trends similar to those in the round 1 rebid occur as the program expands into round 2's competitive bidding areas. We are sending copies of this report to the Secretary of Health and Human Services.
The report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Full face mask used with positive airway pressure device (each)
Nasal interface (mask or cannula type) used with positive airway pressure device, with or without head strap
Tubing used with positive airway pressure device
Respiratory assist device, bi-level pressure capability, without backup rate feature, used with noninvasive interface, e.g., nasal or facial mask (intermittent assist device with continuous positive airway pressure device)
Stationary compressed gaseous oxygen system, rental; includes container, contents, regulator, flowmeter, humidifier, nebulizer, cannula or mask, and tubing
Stationary liquid oxygen system, rental; includes container, contents, regulator, flowmeter, humidifier, nebulizer, cannula or mask, and tubing
Oxygen concentrator, single delivery port, capable of delivering 85 percent or greater oxygen concentration at the prescribed flow rate
Oxygen concentrator, dual delivery port, capable of delivering 85 percent or greater oxygen concentration at the prescribed flow rate (each)

In addition to the contact named above, key contributors to this report were Martin T. Gahart, Assistant Director; Krister Friday, Dan Lee, Lisa Motley, Michelle Paluga, Katherine Perry, Hemi Tewarson, and Opal Winebrenner.

Medicare: Issues for Manufacturer-level Competitive Bidding for Durable Medical Equipment. GAO-11-337R. Washington, D.C.: May 31, 2011.
Medicare: CMS Has Addressed Some Implementation Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program for the Round 1 Rebid. GAO-10-1057T. Washington, D.C.: September 15, 2010.
Medicare: CMS Working to Address Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program. GAO-10-27. Washington, D.C.: November 6, 2009.
Medicare: Covert Testing Exposes Weaknesses in the Durable Medical Equipment Supplier Screening Process. GAO-08-955. Washington, D.C.: July 3, 2008.
Medicare: Competitive Bidding for Medical Equipment and Supplies Could Reduce Program Payments, but Adequate Oversight Is Critical. GAO-08-767T. Washington, D.C.: May 6, 2008.
Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007.
Medicare Payment: CMS Methodology Adequate to Estimate National Error Rate. GAO-06-300. Washington, D.C.: March 24, 2006.
Medicare Durable Medical Equipment: Class III Devices Do Not Warrant a Distinct Annual Payment Update. GAO-06-62. Washington, D.C.: March 1, 2006.
Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005.
Medicare: CMS's Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004.
Medicare: Past Experience Can Guide Future Competitive Bidding for Medical Equipment and Supplies. GAO-04-765. Washington, D.C.: September 7, 2004.
Medicare: CMS Did Not Control Rising Power Wheelchair Spending. GAO-04-716T. Washington, D.C.: April 28, 2004.
To achieve Medicare savings for DME, the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) required that CMS implement the CBP for certain DME. In 2008, the Medicare Improvements for Patients and Providers Act (MIPPA) terminated the first round of supplier contracts and required CMS to repeat CBP round 1—referred to as the round 1 rebid—resulting in the award of contracts to suppliers with payments that began January 1, 2011. CMS has estimated that the rebid will lead to significant savings for Medicare. MIPPA requires GAO to examine certain aspects of the CBP. In this report, GAO reviews (1) the outcomes of the CBP round 1 rebid process; (2) the effect of the CBP round 1 rebid on DME suppliers; (3) how the CBP round 1 rebid has affected Medicare beneficiary access to and satisfaction with selected DME; and (4) the extent to which the CBP round 1 rebid has affected the utilization of selected DME items. To examine CBP outcomes and effects, GAO analyzed data from CMS and the feedback it provided to bidding suppliers, analyzed 2011 CBP data about different types of suppliers, and interviewed CMS and CBP contractor officials, DME industry groups, and suppliers. To examine CBP's effects on beneficiary access, GAO analyzed Medicare claims data for the first six months of 2011, because the data for those months were the most complete, and compared them to the same months in 2010. The Centers for Medicare and Medicaid Services (CMS), within the Department of Health and Human Services (HHS), implemented the durable medical equipment (DME) competitive bidding program's (CBP) bidding process for the round 1 rebid. Nearly the same number of suppliers submitted a similar number of bids for both the CBP round 1 rebid and round 1. Many suppliers continued to have difficulty complying with financial documentation requirements; however, the number of bids disqualified in the round 1 rebid was significantly smaller than in round 1. After being notified of their bid results, some suppliers were found to have bids that were disqualified incorrectly and were subsequently offered round 1 rebid contracts. About one-third of the bidding suppliers were awarded CBP contracts. Relatively few CBP contract suppliers (those awarded CBP contracts) had their contracts terminated by CMS, voluntarily canceled their contracts, or were involved in ownership changes. Under the CBP, non-contract suppliers (those not awarded CBP contracts) can grandfather certain rental DME for beneficiaries they were serving prior to the implementation of CBP, until the CBP-covered beneficiaries' rental periods expire. Also, some CBP contract suppliers entered into subcontracting agreements with non-contract suppliers to furnish certain services to CBP-covered beneficiaries in the round 1 rebid. CMS's multiple ongoing monitoring activities generally indicate that beneficiary DME access and satisfaction have not been affected by CBP. Although some of these efforts have limitations, in the aggregate they provide useful information to CMS regarding beneficiary access and satisfaction. Early data indicate that utilization has decreased in some CBP-covered DME categories. GAO's review of Medicare claims data found that fewer beneficiaries in competitive bidding areas received some CBP-covered items in each of the first six months of 2011 than in the same months of 2010. Although the first year of the CBP round 1 rebid has been completed, it is too soon to determine its full effects on Medicare beneficiaries and DME suppliers.
GAO found that, in general, the round 1 rebid was successfully implemented. GAO also found that utilization of selected DME declined in the CBP areas; while there are many possible reasons for this decline, it does not necessarily indicate that beneficiaries lacked access to needed DME. GAO does not assume that all pre-CBP utilization was appropriate, and the CBP may have reduced unnecessary utilization of DME. More experience with DME competitive bidding is needed, particularly to see whether evidence of beneficiary access problems emerges. For that reason, it is important to continue monitoring changes in the number of suppliers serving CBP-covered beneficiaries. In commenting on a draft of this report, HHS noted that the CBP round 1 rebid resulted in savings of more than $200 million in its first year. HHS also cited the results of CMS's monitoring of beneficiaries' access to DME in CBP areas as evidence that the CBP did not adversely affect beneficiaries.
While the law shifted responsibility to the states for designing and implementing TANF programs, HHS remains the primary federal agency responsible for assisting states with the development of these programs. ACF administers and oversees TANF and other programs related to the economic and social well-being of families, children, and individuals. Oversight of TANF is carried out through ACF's Office of Family Assistance (OFA), which previously had administration and oversight responsibility for AFDC and JOBS, the predecessors to TANF. HHS provides help and oversight to the states through staff located at its headquarters office in Washington, D.C., and at HHS' 10 regional offices.

Federal departments and agencies develop regulations and guidance to provide states and the general public with the agency's interpretation of a statute's provisions and to assist them in complying with the law. Regulations are first issued in draft form to allow interested groups to provide comments, which the agency must consider before publishing the final rule. Final rules carry the force of law; for example, states could be penalized for not complying with them. The time it takes an agency to develop regulations depends on a number of factors, such as the complexity of the statute, the number of comments an agency receives, and the length of time it takes for the Office of Management and Budget (OMB) to review and approve them. Because states must comply with a statute as they develop or change their programs, they and the general public need basic information about the agency's interpretation of the statute's provisions before the regulations are published. Agencies may provide such information through guidance to states and others, which can be both written and oral. For example, guidance can be provided through written memorandums distributed to all states; agency-sponsored conferences; or responses to individual inquiries by phone, letter, or the Internet. However, guidance does not have the force of law as regulations do; essentially, it is the agency's opinion or answer at that time, which could change during the regulatory process as the agency gathers and assesses the comments of knowledgeable parties.

While the 1996 welfare reform law requires HHS to reduce its FTE levels by specific amounts, it does not direct HHS on how to implement this requirement. To address this provision, HHS reduced its authorized FTE level within OFA by 245. These reductions were achieved primarily through reassigning staff to other programs and eliminating vacant positions. HHS also reduced FTE levels by more than 60 in the Office of the Secretary to satisfy the mandated reduction in managerial positions. These reductions were achieved primarily through relocating organizational units within the Office, and their staff, to other places in HHS.

HHS reduced authorized FTE levels in OFA by 245 between August 1995 and July 1997. While the law did not specify start and end dates for these reductions, the Department used August 1995 as the baseline for calculating its reductions. HHS used July 1, 1997, as the target date for reducing FTE levels because that date is the final submission date for all TANF state plans and, thus, the effective starting date for TANF. As shown in table 1, by July 1997 ACF had reduced the number of authorized FTEs in OFA—headquarters and regional offices combined—by 245 but had reduced actual FTE levels by only 199.5.
Approximately two-thirds of the authorized and actual FTE reductions came from the regional offices, commensurate with the distribution of FTEs between headquarters and the regional offices. HHS accomplished the reduction of 245 authorized FTEs primarily by reassigning 176 FTEs to other program offices and eliminating 45.5 vacant FTE positions; an additional 21 FTEs were reduced through staff resignations and retirements. Figure 1 provides further details of these numbers for headquarters and regional offices. Overall, the Child Support Enforcement program acquired 37 FTEs, the largest proportion—about 21 percent—of reassigned FTEs. The distribution among programs of reassigned FTEs differed, however, between headquarters and the regional offices. At headquarters, 22 FTEs, or 38 percent, went to the Child Support Enforcement program; in the regional offices, approximately half of the FTEs were reassigned to the Child Care and Head Start programs. (See table 2.)

While the law does not define "management," HHS considered all staff in the Office of the Secretary to fall within that term because of the Office's general management responsibility for the entire Department, including TANF and its predecessor programs, AFDC and JOBS. Using this definition, HHS considered that it had met the requirement to reduce FTE levels for managerial positions by 60 through staff reductions in the Office of the Secretary. Between fiscal years 1995 and 1997, the Office of the Secretary reduced its authorized FTE level by 613; its reduction in the number of actual FTEs was approximately 354. This reduction in FTE levels resulted from a number of changes initiated before and after the passage of the welfare reform law. These changes included governmentwide workforce reductions required by the Federal Workforce Restructuring Act of 1994 and major restructuring efforts that occurred in HHS' Office of the Secretary during this period, such as separating the Social Security Administration from HHS; consolidating the Office of the Assistant Secretary for Management and Budget and the Office of the Assistant Secretary for Personnel Administration; abolishing the Office of the Assistant Secretary for Health and creating a new Office of Public Health and Science; and transferring responsibility for personnel, finance, and contract operations from the Office of the Secretary to HHS' new Program Support Center.

By March 1997, the numerous downsizing and reorganization efforts within HHS had affected the number of staff in the Office of the Secretary holding senior and mid-level management responsibility. Between September 1995 and March 1997, there was a net increase in the number of staff with senior management responsibility but a net reduction in the number of staff with mid-level management responsibility, as shown in table 3. By March 1997, there were 15 more of the most senior management staff than in September 1995 and 94 fewer staff with mid-level management responsibility, including 57 fewer staff in grades 14 and 15 and 27 fewer in grades 12 and 13. (See tables 3 and 4.)

Unlike in OFA, HHS reduced staff in the Office of the Secretary primarily through discharging staff who did not have permanent appointments and realigning entire units of staff by moving them to other locations in HHS. All discharges, however, occurred among staff considered nonmanagerial by civil service definitions.
Among staff with senior management responsibility, 80 percent left by retiring or resigning. Of the mid-level management staff, 54 percent left as part of the realignments and 21 percent retired. (See table 5.) Overall, 1,640 staff left the Office of the Secretary between September 1995 and March 1997—174 of whom held positions defined as senior and mid-level management. Of these 174 staff, about half were relocated elsewhere in the Department, while almost one-third retired or resigned.

Although states are concerned that TANF regulations are not yet published, they are generally satisfied with HHS' TANF guidance, both written and oral. Because of the amount of time the regulatory process takes, HHS does not expect to issue final TANF regulations until spring 1998—more than 18 months after some states began implementing their new welfare programs. States reported difficulties in designing their programs without the final regulations; they also are concerned about the possibility of being penalized for actions taken in the absence of regulations or of incurring additional costs to restructure their data systems to meet the requirements of the published regulations.

HHS plans to issue regulations related to data collection, reporting, penalties, and bonuses. While the law does not specify a date by which the regulations must be promulgated, HHS expects that final regulations will be published in spring 1998. Department officials expect to receive thousands of comments on the proposed rules from states, local governments, advocacy groups, and other interested parties, which they must read and consider in drafting the final regulations. HHS officials noted that they are using the standard rulemaking process in order to consider comments from the many interested parties and that this process takes time to carry out.

Twenty-nine of the 49 states responding to our survey reported that the lack of regulations was causing them moderate to very great difficulty in designing or implementing their programs. For example, one state reported difficulty determining which clients to select for placement in a state-funded TANF program because HHS guidance has been unclear as to whether clients in such a program would still have to meet certain TANF requirements, such as time limits or work hours. HHS' final regulations on such requirements could change the type of client the state would select for such a program. Twenty-nine states also reported design and implementation problems for data collection and reporting. For example, one state official listed a number of unanswered questions that remained because of the absence of regulations regarding data collection and reporting, sampling, the consequences of leaving reporting fields blank, and how to provide, in the interim, required data elements that the state's current system does not capture.

States were also concerned about the potential cost of having to redesign the information systems that collect and report data for managing their programs if such action is needed to come into compliance with HHS regulations when they are published. Twelve states commented specifically that they wanted to avoid the expense of designing their systems twice, yet they assumed they would need to make modifications once the regulations are published.
One state official explained to us that his state was converting to an automated data system and that modifications to the system would be expensive, but he assumed the state would have to make some once the final regulations were issued. According to a state welfare director, lead time is needed to develop the automated systems, given the type and amount of data to be reported under TANF. He said that "historically, state data systems were developed to generate checks to clients and to perform quality control functions. There are enormous data and reporting requirements in the [new law]. Most states don't have the information systems available to collect data such as whether a client has been on welfare before. This requires data systems to communicate across counties and across states."

In addition to program design and implementation problems, states are concerned about being penalized for noncompliance with the regulations for program decisions they made before the final issuance of the regulations. In our survey, 14 states mentioned this specific concern. They told us that the potential difficulty with the lack of regulations is that HHS will provide its own interpretation of the law through the regulations, which may be inconsistent with the approaches states took. According to HHS officials, interim guidance distributed in a January 1997 policy announcement signaled to states that penalties would not be imposed for early program decisions if those decisions were based on a reasonable interpretation of the law and that, until a final rule is available, penalties would be imposed only for violations of the statute. Further, the guidance specifies that statutory interpretations in the final rules will apply prospectively only.

HHS' guidance to the states since August 1996, the date the new welfare reform law was enacted, has been provided through a variety of means, including its January 1997 policy announcement, letters to state directors providing HHS' answers to frequently asked questions, conference calls to groups of states, conferences, and one-on-one calls between states and their respective HHS regional representatives. Considering HHS guidance both written and oral, 33 of the 49 states responding to our survey reported that the guidance, for the most part, met their needs for information. For issues covered by HHS' written guidance, the states were particularly satisfied with guidance on process-related issues. For example, 37 of 44 states indicated that HHS' guidance mostly or completely met their needs for information about what must be described in a state TANF plan for HHS to consider the plan complete, as required by TANF. Similarly, HHS guidance on how it calculated the amount of each state's block grant met the information needs of 37 of 41 states, and guidance on prorating the amount of states' TANF block grant based on when their plan was submitted and deemed complete met the information needs of 32 of 39 states. States were least satisfied with HHS' guidance on financial management controls; only 13 of 30 states indicated that HHS' guidance on this subject mostly or completely met their information needs.

State welfare directors and national organizations we contacted mentioned other issues of significance to the states that were not covered by HHS' written guidance. Among these issues were the application of minimum wage laws to TANF participants, the exemption of domestic violence victims from time limits, and TANF requirements for a cap on administrative costs.
From our survey, we determined that at least 25 states received no guidance from HHS on these issues. Of those states that reported receiving oral guidance, most said the guidance met their needs for information. For example, 9 of the 13 states that reported receiving oral guidance regarding the application of minimum wage laws to TANF participants stated that the guidance met their state's information needs. Similarly, of the 16 states that received oral guidance about the time-limit exemption for domestic violence victims, 10 reported that it met their needs. For the remaining issue—TANF requirements for a cap on administrative costs—18 of 24 states said their information needs were met.

Although states generally indicated in our survey that they were satisfied with the clarity, timeliness, and usefulness of the HHS guidance they received, national associations indicated that the states are struggling with certain positions taken by HHS in its guidance. The National Governors' Association (NGA) and the American Public Welfare Association (APWA) stated that positions that may adversely affect states included the application of minimum wage laws to welfare clients who obtain work; requirements for receiving, or "drawing down," TANF funds; and the methods for allocating administrative costs for the TANF program. In general, NGA and APWA believed that certain of these positions limit the flexibility that the law intended to provide states in developing their new welfare programs and, in some cases, may significantly increase the costs to states of implementing these programs.

Under the new welfare law, $1 billion is to be awarded over 5 years to high-performing states, beginning in fiscal year 1999. The bonuses awarded will be based on a set of measures to be developed by HHS. Although the law requires HHS to develop these measures no later than August 1997—1 year after enactment of the law—HHS has not yet specified how states' performance will be assessed or how bonus funds will be distributed. Given that the first bonus funds are to be awarded in fiscal year 1999, states are concerned that they will not have enough time either to design their program activities or to collect the data necessary to compete for the bonuses. HHS expects to have a final rule on bonus regulations by the end of fiscal year 1998.

Having elected to develop the performance measures through regulations, HHS is still in the initial stage of writing the regulations. HHS asserts that the delay in issuing regulations is due to the complexities of developing performance measures, the need to consult a number of groups in the process, and HHS' limited staff resources for working on both TANF and bonus program regulations. HHS, APWA, and NGA have developed concept papers that generally agree on the key measures to be used, but they differ about the source of the data to assess states' achievement of the measures.

While HHS developed a preliminary proposal for performance measures in July 1997, it does not expect to have its notice of proposed rulemaking ready for comment until March 1998 or a final rule published until the end of fiscal year 1998—over a year after the statutory deadline for implementing the high-performance bonus program. These time frames are of significant concern to APWA officials and its member state officials. They stated that because the regulations will be final so late in fiscal year 1998, states will have little time to develop their TANF programs or data collection systems to compete for the bonus money.
According to APWA officials, states suggested to HHS that it develop early interim measures to assess states for the first year’s bonus money so that states would have time to collect data and then modify them, as needed, once final measures were implemented. HHS officials acknowledged missing the deadline stipulated in the law. However, they contend that the development of the performance measures is very complex with difficult measurement and data problems and limitations to address. For example, HHS and those with whom they are consulting are having difficulty determining how to measure increases in “child well-being”—one goal of the new welfare reform law—and whether national data sets exist that would enable states to make such measurements. Officials also stated that the process is taking time because HHS’ consultations with APWA and NGA, a requirement of the law, have been thorough. HHS also consulted with representatives of the states and other groups to ensure that any technical problems with the proposed measures were solved and that agreement was reached with TANF stakeholders. Officials also noted that HHS management had to decide which set of regulations—TANF or the bonus formula—would receive priority, given the agency’s staffing. Since TANF became operational before the bonus formula, HHS focused first on TANF guidance and regulations. APWA and NGA have drafted a joint proposal for the high-performance bonus program, which generally agrees with HHS’ concept paper about what the measures should emphasize—work and self-sufficiency—and what the key measures should be. However, HHS’ paper and the joint proposal by APWA and NGA differ about which sources of data should be used for measuring performance. HHS believes that the Bureau of Labor Statistics’ (BLS) unemployment insurance (UI) database should be used for the work measures, while APWA and NGA think that state administrative data should be used. According to HHS’ paper, the UI database provides an “objective data source that would be less subject to reporting bias . . . and is uniformly collected across states.” It further states that UI data would allow states to track people who have left the welfare rolls and thus provide states with data about the continued self-sufficiency of their clients. States would be able to track clients by matching the social security numbers of clients who have left with those in the database. Finally, HHS argues that using the UI database would avoid creating an additional administrative burden on states for data collection. APWA and NGA cited states’ concerns that UI data would not be an accurate measure. One problem with the UI database is that it does not capture information for seasonal or state government employees or for clients in subsidized jobs or community service. Another problem is that some states’ laws prevent the use of UI data for privacy reasons; this is the case in New York and Minnesota. States that would need to change their laws to gain access to these data are also concerned about the time available to collect the data. Because many state legislatures are out of session, states would need to wait for a new legislative session to address these issues. Moreover, there is no guarantee that enabling legislation would be enacted. HHS officials acknowledge the UI database’s limitations and are contracting for a study of its limitations and gaps. 
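A minimal sketch of the kind of match HHS describes appears below: former welfare clients are matched by social security number against UI wage records to see whether they show earnings after leaving the rolls. The file and column names are hypothetical, not actual TANF or UI record layouts, and the sketch assumes quarters are encoded as comparable integers (for example, 19972 for the second quarter of 1997).

```python
# Illustrative SSN match of welfare leavers against UI wage records.
# All names and layouts here are assumptions for the sketch.
import pandas as pd

leavers = pd.read_csv("tanf_leavers.csv", dtype={"ssn": str})       # ssn, exit_quarter
ui_wages = pd.read_csv("ui_wage_records.csv", dtype={"ssn": str})   # ssn, wage_quarter, wages

# Join leavers to wage records and keep only post-exit quarters; leavers
# with no UI match drop out here because their wage_quarter is missing.
matched = leavers.merge(ui_wages, on="ssn", how="left")
post_exit = matched[matched["wage_quarter"] > matched["exit_quarter"]]

# Share of leavers with any UI-covered earnings after leaving the rolls.
earners = post_exit.groupby("ssn")["wages"].sum()
share = leavers["ssn"].isin(earners[earners > 0].index).mean()
print(f"Leavers with post-exit UI-covered wages: {share:.1%}")
```

As the associations' concerns suggest, a match of this kind would miss clients whose work is not captured in UI data, so a low match rate would not by itself demonstrate a lack of employment.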
APWA and NGA have suggested that state TANF administrative data or a combination of UI and administrative data be used instead of UI data. These groups believe that states’ administrative data are a viable alternative and are now available to all states. APWA officials contend that state administrative data will be used by HHS to sanction states for noncompliance; hence, they could also be used for awarding bonus money. However, HHS is concerned that the uniformity and regularity of state administrative data across all 50 states have not yet been documented. HHS’ funding of its welfare research generally follows the research and evaluation requirements described in the new welfare law. The Congress appropriated a total of $44 million to HHS, in part, to conduct research on the benefits, effects, and costs of the state programs funded under the new law and to evaluate innovative programs designed to decrease welfare dependency and increase child well-being. HHS’ key effort in pursuing this mandate is its continued funding of evaluations of waiver programs. These evaluations were approved under the previous welfare law to assess state performance of innovations to their welfare programs, such as time-limited benefits and work requirements. In addition to the waiver studies, HHS is funding research efforts on employment-related issues of welfare clients, various technical assistance grants to help states obtain needed expertise or technical assistance to develop their welfare assistance programs, and child well-being studies. Table 6 shows the general areas of research that HHS funded with the $44 million for fiscal year 1997. HHS is pursuing its research and evaluation mandates, in part, by providing states approximately $9 million in fiscal year 1997 to continue their evaluations of their waiver programs, which is specified as an allowable area of research under the new law. Under section 1115 of the Social Security Act, HHS was authorized to grant states waivers of certain statutory requirements that governed AFDC programs. While this authority gave states flexibility to test innovations, it also required them to have an independent organization rigorously evaluate the outcomes of these innovations. According to HHS officials, the waiver programs were a key area of research because they implemented some of the ideas that were subsequently embodied in the new law, such as time limits and work participation requirements. Moreover, because many states have chosen to structure their TANF programs to fully or mostly continue their waiver program policies, HHS officials assert that the information collected from the waiver evaluations will provide early information about welfare programs being implemented under TANF. A number of states had not completed their evaluations before the enactment of the new welfare reform law but were interested in doing so. Hence, HHS organized its continued funding of these evaluations in two tracks. Under track one, selected states could receive an initial award for a 12-month period; under track two, states could receive a two-phased award, with an initial award for a planning period of up to 6 months followed by a second award to fund the first 12 months of the actual evaluation. Track one proposals are a continuation of a state’s original waiver evaluation with minor research modifications. 
Track two funding is used when a state proposes to make substantial modifications to the waiver evaluation, significantly modifying either the evaluation scope or methodology—or, in some cases, both—originally prescribed in the waiver terms and conditions. Nine states have been approved to fully continue their current evaluations as part of ACF's track one research program, and 10 states have been approved for track two funding. The amount of funding for fiscal year 1997 to each track one awardee ranged from approximately $300,000 to $900,000; track two amounts ranged from $30,000 to over $500,000. The research questions vary, but several states planned to focus their evaluations on the effects of time limits and mandatory work requirements. Other research topics include program effects of family caps, child care services, financial incentives, and limiting benefits to unwed teens.

HHS' research efforts related to TANF cover a wide array of topics, including employment, technical assistance, and child well-being. HHS spent approximately $12 million in fiscal year 1997 on four research projects examining employment issues among welfare recipients. These evaluations are (1) the Goodwill Industries demonstration project that places the chronically unemployed into unsubsidized, private sector employment; (2) a 1-year analysis of employment and wage patterns of welfare recipients; (3) a study of four comprehensive, community-based employment programs for public housing tenants, funded by HHS, the Department of Housing and Urban Development, and the Rockefeller Foundation; and (4) the JOBS evaluation, a study examining alternative approaches for moving welfare recipients into work.

HHS also funded a number of technical assistance projects, for approximately $2 million in fiscal year 1997, to distribute research and data results as well as to support other areas. The technical assistance projects include activities such as local welfare staff training, conferences of federal and state practitioners and researchers that focus on their research efforts, and community-college-based workshops to design short-term employment training programs for welfare recipients. Some of this money also funds contracts to develop technical assistance networks and advisory group projects, which primarily focus on disseminating research and evaluation findings and transferring successful practices.

Finally, HHS provided almost $1.5 million to sustain an existing research effort examining child well-being. The project provides money to selected states with welfare reform evaluations to augment the outcome measures for children and assess the effects of different welfare reform approaches on child well-being. HHS funds other studies that include some research on child well-being, but the dollar amount for the child well-being component could not be determined. These studies are the National Evaluation of Welfare-to-Work Strategies, which examines employment strategies in seven sites; several field-initiated studies; and some of the track one and track two welfare evaluations.

In response to the new law, HHS has reduced its FTE levels and is pursuing its research and evaluation mandates. However, the Department is having difficulty meeting its responsibility for developing and issuing the TANF and high-performance bonus regulations.
While a statutory deadline for the TANF regulations was not provided, the need for HHS to issue the regulations quickly became apparent given that states could, and did, begin implementing their TANF programs shortly after the law was enacted. Yet HHS did not issue proposed TANF regulations until November 1997, and final regulations are not expected until sometime in spring 1998. This same need for early direction arose with the bonus regulations because states wanted to be sure that the data collection systems they were putting in place would collect the data needed to compete for a bonus. Given the status of states' implementation of welfare reform, the prompt issuance of the TANF and high-performance bonus regulations is of utmost importance so that states can invest wisely in their systems and programs.

HHS commented on a draft of this report and generally concurred with our findings. However, in summarizing our findings regarding the Department's efforts to reduce its FTEs, HHS' letter construed the report's findings too broadly. In this letter, HHS states that "GAO was supportive of the Department's assumptions about the number of FTEs required to be reduced and the time frames." This is not the case. We did not endorse the manner in which HHS accomplished its FTE reductions but simply determined HHS' interpretation of the FTE provision, described that interpretation, and analyzed both FTE and staff data in the context of that interpretation and other alternative criteria. Our report also points out that in cases where a statute is unclear, principles of administrative law allow the agency charged with carrying out the law to make such interpretations. HHS also provided technical comments, which we addressed in the report, as appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of HHS; the Chairmen and Ranking Minority Members of the House Committees on Ways and Means and Education and the Workforce; the Ranking Minority Member of the Subcommittee on Human Resources, House Committee on Ways and Means; the Chairman and Ranking Minority Members of the Senate Committee on Labor and Human Resources; and the Ranking Minority Member of the Senate Committee on Finance. We also will make copies available to others on request. If you or your staff have any questions about this report, please contact me on (202) 512-7215. Other staff who contributed to this report are listed in appendix II.

This appendix discusses in more detail our approach and methodology for answering our objectives about HHS' FTE reductions and its guidance and technical assistance to the states. The 1996 welfare law directs HHS to reduce the number of FTEs by (1) 245 in the welfare block grant programs and (2) 60 managerial positions in the Department. To determine the extent to which HHS had accomplished these reductions, we analyzed FTE reductions in the Office of Family Assistance (OFA)—which is responsible for the AFDC, JOBS, and Emergency Assistance programs—and both FTE and staff reductions in the Office of the Secretary, using the number of staff on board as a surrogate measure for FTEs. We first specified measurement criteria, then collected and analyzed the FTE and staffing data.
While the law stipulates a precise number of FTEs to be reduced, it does not provide criteria by which to measure whether HHS has achieved the reductions. To define the law's FTE reduction requirement in a way that could be measured, we considered such additional criteria as (1) the programs subject to the reductions; (2) the type of FTE, either authorized or actual; (3) the start and end dates for measuring the reductions; and (4) the definition of "managerial position." In addition, when a statute does not include detailed criteria for implementation, under principles of administrative law, the executive branch department is responsible for interpreting the law's provisions. Therefore, in developing a framework for measuring whether the FTE reductions had taken place, we asked HHS for the Department's interpretation of the required FTE reduction and used additional criteria that we considered reasonable, as discussed below.

The law did not name the programs in which 245 FTEs should be reduced but referred to them as programs "converted into a block grant." Thus, from examining the legislative history and correspondence between the Administration for Children and Families (ACF) and Senator Daniel Patrick Moynihan, we limited consideration of programs subject to FTE reductions to AFDC, JOBS, and Emergency Assistance, which ACF considers part of AFDC when allocating FTEs. Further, because the law allowed HHS to designate the organizational unit or program where the reductions in managerial positions would be made, we measured the reductions in managerial positions only for the Office of the Secretary, the office that HHS had selected for these reductions.

Neither the law nor the legislative history indicated whether the reduction of 245 FTEs in ACF or of 60 FTEs in managerial positions was to be in authorized or actual FTEs. Because the authorized FTE level establishes an upper boundary or ceiling on FTE usage and the actual FTE level indicates how many FTEs are being used, we measured the reduction in both types of FTEs to determine whether ACF and the Office of the Secretary had reduced the upper boundary as well as the actual number of FTEs.

We considered three potential dates as the starting point for the reductions: August 1995, the date of the FTE level for ACF that was reported to Senator Moynihan and the start date that ACF used; August 1996, the date the new welfare law was enacted; and January 1995, the date HHS identified as the initiation of FTE reductions in the Office of the Secretary, which included major downsizing activity under the Federal Workforce Restructuring Act of 1994. HHS counted this reduction as addressing the provision in the federal welfare reform law. We decided to measure the FTE changes in OFA and the Office of the Secretary annually, beginning in fiscal year 1995, in order to develop a complete picture of the changes that occurred. We used the month of September as the starting point in each year to establish the end of the fiscal year as a baseline and to make the reduction time periods in these two offices comparable.

With respect to an end date for achieving the FTE reductions, ACF recommended a target date of July 1, 1997. This is the final submission date for all TANF state plans and thus the effective starting date for TANF. The Office of the Secretary did not object to this date, pointing out that most of its reductions had been made during fiscal year 1995.
Because the effective starting date for TANF was the effective ending date for AFDC and JOBS, we considered this a reasonable date for completing the reduction of FTEs that the federal welfare reform law deemed were no longer needed to administer the new welfare program. Thus, we measured changes in FTE levels in OFA up to July 1, 1997. For the Office of the Secretary, we used different end dates because the full year total was not yet available at the time of our review. We measured the Office’s FTE level through September 1997, based on an estimate for the actual FTE total for the last quarter of fiscal year 1997. In addition to FTE data, we examined staffing data as a surrogate measure for the FTEs. The end date used for changes in staff was March 31, 1997—the most current date for which staffing data were available. The 1996 welfare law does not precisely define the term “managerial position,” which can have more than one meaning in federal civil service. The civil service classification system considers civil service staff in grades 12 and above as eligible for management positions, although not all staff in these grades hold management responsibility. Further, staff in lower civil service grades may serve as supervisors—a position with responsibilities similar to those of managers but which the civil service qualification standard does not classify as “managerial.” However, both managerial and supervisory positions are defined in terms of responsibilities rather than in terms of grade levels. HHS considered all staff positions—regardless of grade level and responsibilities—in the Office of the Secretary to be managerial because of the Office’s general management responsibility for the Department. The data source we employed to measure changes in the number of staff—the Office of Personnel Management’s Central Personnel Data File (CPDF)—contains separate variables for a staff member’s civil service grade, their level of management responsibility, and the organizational location in which they worked. Use of all three variables allowed us to look at staff changes from three perspectives: changes in the number of all staff in the Office of the Secretary, changes in the number of staff with management responsibility, and the civil service grades of staff that held management responsibility. However, the CPDF variable for management responsibility defined this term very broadly because it included staff in both managerial and supervisory positions. To develop our information, we interviewed representatives from ACF and the Office of the Secretary about HHS’ interpretation of the FTE reduction provision, other downsizing and reorganization activities, planned FTE reductions, and data sources. We gathered FTE data from both offices to address our measurement criteria; we also gathered billing data and staff rosters to verify the FTE data. To learn more about federal downsizing, we consulted with federal workforce analysts in GAO’s General Government Division. These analysts also assisted us in using CPDF staff data as a surrogate measure for the FTE data, since the Office of the Secretary did not maintain staff FTE data by civil service grade or level of management responsibility. We also discussed the FTE requirements with officials at OMB and OPM. 
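The sketch below, assuming hypothetical CPDF snapshot extracts with grade, management-level, and organizational-location fields, illustrates two computations used in this appendix: the cross-tabulation of staff counts across two snapshots just described, and the hours-to-FTE check described below, which divides total billed hours by 2,080. The file and column names are placeholders, not the actual CPDF variable names.

```python
# A minimal sketch under assumed extract layouts; not the actual analysis.
import pandas as pd

sep_1995 = pd.read_csv("cpdf_sep1995.csv")   # hypothetical snapshot extracts
mar_1997 = pd.read_csv("cpdf_mar1997.csv")

def mgmt_crosstab(snapshot: pd.DataFrame) -> pd.DataFrame:
    # Staff on board in the Office of the Secretary, counted by level of
    # management responsibility and civil service grade.
    office = snapshot[snapshot["org_location"] == "Office of the Secretary"]
    return pd.crosstab(office["mgmt_level"], office["grade"])

# Net change in staff between the two snapshots, by level and grade.
net_change = mgmt_crosstab(mar_1997).sub(mgmt_crosstab(sep_1995), fill_value=0)
print(net_change)

# Hours-to-FTE verification described below: total billed hours / 2,080,
# the number of hours in one work year.
def implied_ftes(total_hours_billed: float) -> float:
    return total_hours_billed / 2_080

print(f"520,000 billed hours imply {implied_ftes(520_000):.1f} FTEs")
```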
To determine the reduction HHS made in the non-managerial FTE level in AFDC and JOBS, we compared the authorized and actual FTE levels for the start date with the authorized and actual FTE levels for the end date, for both headquarters and regional offices and in total. We then tabulated the data ACF provided on the disposition of headquarters and regional office FTEs. To determine the reduction of the managerial FTE level in the Office of the Secretary, we first compared, for all positions, the authorized and actual FTE levels for the start date with those for the end date to measure the change in the FTE level in terms of HHS' definition of managerial positions.

Using CPDF staff data as a surrogate measure for FTEs to identify the number of staff in the Office of the Secretary who were managers in terms of civil service definitions, we first calculated the net change in the number of staff on board, broken out by level of management responsibility. Using cross-tabulations, we then examined the civil service grade distribution for the net change in staff who held mid-level management responsibility—the group of management staff whose numbers decreased. Finally, using cross-tabulations again, we calculated the disposition of staff who had left the Office of the Secretary between the end of September 1995 and March 1997, broken out by level of management responsibility.

To verify the FTE information provided to us, we looked at the hours that staff in OFA and the Office of the Secretary charged to the programs affected by the reductions, divided the total hours billed by 2,080—the number of hours in a year for one FTE—and compared the result with the values for actual FTEs that ACF and the Office of the Secretary submitted. The hours that HHS staff charged to programs are captured through time and attendance data submitted for payroll purposes and maintained by HHS' Program Support Center.

Our primary method for obtaining state opinions about the clarity, usefulness, and timeliness of HHS guidance to the states was a mail survey of TANF directors. To develop a list of the critical TANF implementation issues facing states for the survey instrument, we interviewed staff at several associations in Washington, D.C., including the American Public Welfare Association, the National Governors' Association, the National Association of Counties, the National Conference of State Legislatures, the Council of State Governments, and the Center for Law and Social Policy. We also interviewed agency officials at ACF headquarters, staff working on TANF in two HHS regions, and state TANF officials in two states—Pennsylvania and Illinois.

In addition to our interviews, we reviewed all policy guidance that HHS distributed to the states, including its January 1997 policy memorandum; its April 1997 Compilation of Implementation Materials, which included summaries of the various sections of the law, HHS and other federal agency contacts, and letters from the Acting Assistant Secretary for Children and Families answering frequently asked implementation questions; and other miscellaneous program instructions and memorandums to states. The survey questionnaire asked about the timeliness, clarity, and usefulness of HHS' January 1997 policy memorandum and other HHS guidance covering issues identified in our review as critical to the states. For some of these critical issues, HHS had not provided any formal guidance to the states.
Regarding these issues, the survey asked states if they had received any oral guidance from HHS and whether it was useful. The survey was faxed to the TANF director in each of the 50 states and the District of Columbia; we received 49 responses. In addition to data provided through the survey responses, we also called 11 states to gather more detailed information about some of their answers.

In addition to those named above, the following individuals made important contributions to this report: Sara Edmondson, Ellen Soltow, and John Vocino took the lead in designing the job, collecting and analyzing data, and writing the report; James Wright and John Smale, Jr., provided survey and design support; Gregory Wilmeth gave analytical and technical assistance in working with OPM's personnel database; Robert Goldenkoff provided technical advice on FTE data and federal workforce issues; and Robert DeRoy provided the computer programming for analyzing OPM's database.

Related GAO Products

Welfare Reform: Three States' Approaches Show Promise of Increasing Work Participation Rates (GAO/HEHS-97-80, May 30, 1997).
Welfare Reform: Implications of Increased Work Participation for Child Care (GAO/HEHS-97-75, May 29, 1997).
Welfare Reform: States' Early Experiences With Benefit Termination (GAO/HEHS-97-74, May 15, 1997).
Welfare Waivers Implementation: States Work to Change Welfare Culture, Community Involvement, and Service Delivery (GAO/HEHS-96-105, July 2, 1996).
Employment Training: Successful Projects Share Common Strategy (GAO/HEHS-96-108, May 7, 1996).
Welfare to Work: Approaches That Help Teenage Mothers Complete High School (GAO/HEHS/PEMD-95-202, Sept. 29, 1995).
Welfare to Work: Child Care Assistance Limited; Welfare Reform May Expand Needs (GAO/HEHS-95-220, Sept. 21, 1995).
Welfare to Work: State Programs Have Tested Some of the Proposed Reforms (GAO/PEMD-95-26, July 14, 1995).
Welfare to Work: Most AFDC Training Programs Not Emphasizing Job Placement (GAO/HEHS-95-113, May 19, 1995).
Welfare to Work: Participants' Characteristics and Services Provided in JOBS (GAO/HEHS-95-93, May 2, 1995).
Welfare to Work: Measuring Outcomes for JOBS Participants (GAO/HEHS-95-86, Apr. 17, 1995).
Welfare to Work: Current AFDC Program Not Sufficiently Focused on Employment (GAO/HEHS-95-28, Dec. 19, 1994).
Child Care: Current System Could Undermine Goals of Welfare Reform (GAO/HEHS-94-238, Sept. 20, 1994).
Pursuant to a congressional request, GAO reviewed the Department of Health and Human Services' (HHS) implementation of the mandates resulting from the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, focusing on the: (1) extent to which HHS reduced its full-time equivalent (FTE) staff levels to the levels prescribed by the law; (2) clarity, timeliness, and usefulness of HHS' guidance and technical assistance to the states in implementing the Temporary Assistance for Needy Families (TANF) program; (3) status of HHS' work in establishing performance measures to use in implementing the high-performance bonus program; and (4) status of HHS' welfare research and evaluation efforts. GAO noted that: (1) between August 1995 and July 1997, HHS reduced by 245 its authorized FTE level for programs that were connected to block grants, and it reduced its authorized FTE level for managerial positions by more than 60 within the Department; (2) HHS achieved the 245 FTE reductions by reassigning almost three-quarters of them to other programs; (3) through GAO's survey, it found that states are generally satisfied with HHS' guidance but are concerned about the delay in TANF regulations, which HHS plans to issue in spring 1998; (4) HHS concedes that its rulemaking process to issue the regulations is lengthy because it requires the Department to obtain comments from many interested groups; (5) in the absence of regulations, states reported difficulties in designing and implementing their programs; (6) HHS missed the statutory deadline for implementing the high-performance bonus program; (7) while the law requires HHS to have implemented this program by August 1997, HHS is still writing regulations that will define the specific measures against which states are to be assessed; (8) HHS does not expect to issue final rules for the high-performance bonus program until the end of fiscal year (FY) 1998; (9) HHS attributes the delay to the inherent difficulties in developing performance measures; the large number of groups with whom HHS consulted, including advocacy and local government groups; and its limited number of staff with which to develop both TANF and bonus program regulations; (10) however, to be eligible for FY 1999 bonus money--the first year bonuses will be distributed--states are required to submit FY 1998 data; (11) HHS' funding for its welfare research generally follows the mandates outlined in the law; (12) a key effort for HHS in meeting these mandates is continuing the evaluations of state programs that were granted waivers from requirements that applied under the Aid to Families with Dependent Children program; (13) of the $44 million appropriated to Administration for Children and Families in FY 1997 for research, approximately $9 million has been awarded to 17 states for waiver evaluations; (14) several of these states will be evaluating the effect of time limits and mandatory work requirements on their programs, as well as other topics; (15) in addition to the waiver evaluations, HHS has awarded approximately $12 million for studies of employment issues focused on welfare and former welfare clients; and (16) technical assistance to states and child impact studies are other areas of research that were funded.
FFMIA and other financial management reform legislation have emphasized the importance of improving financial management across the federal government. The primary purpose of FFMIA is to ensure that agency financial management systems routinely provide reliable, useful, and timely financial information. With such information, government leaders will be better positioned to invest resources, reduce costs, oversee programs, and hold agency managers accountable for the way they run government programs. Financial management systems' compliance with federal financial management systems requirements, applicable accounting standards, and the SGL provides building blocks for achieving these goals.

FFMIA is part of a series of management reform legislation passed by the Congress over the past two decades. This series of legislation started with the Federal Managers' Financial Integrity Act of 1982 (FIA), which the Congress passed to strengthen internal control and accounting systems throughout the federal government, among other purposes. Issued pursuant to FIA, the Comptroller General's Standards for Internal Control in the Federal Government provide standards directed at helping agency managers implement effective internal control, an integral part of improving financial management systems. Effective internal control also helps in managing change to cope with shifting environments and evolving demands and priorities. As programs change and as agencies strive to improve operational processes and implement new technological developments, management must continually assess and evaluate its internal control to ensure that the control activities being used are effective and updated when necessary.

While agencies had achieved some success in identifying and correcting material internal control and accounting system weaknesses, their efforts to implement the FIA had not produced the results intended by the Congress. Therefore, in the 1990s, the Congress passed additional management reform legislation to improve the general and financial management of the federal government. As shown in figure 1, the combination of reforms ushered in by (1) the CFO Act of 1990, (2) the Government Management Reform Act (GMRA) of 1994, (3) FFMIA, (4) GPRA, and (5) the Clinger-Cohen Act of 1996, if successfully implemented, provides a basis for improving accountability of government programs and operations as well as routinely producing valuable cost and operating performance information, thereby making it possible to better assess and improve the government's effectiveness, financial condition, and operating performance.

The financial management systems policies and standards prescribed for executive agencies to follow in developing, operating, evaluating, and reporting on financial management systems are defined in OMB Circular A-127, Financial Management Systems. Circular A-127 references the series of publications entitled Federal Financial Management System Requirements (FFMSR), issued by JFMIP, as the primary source of governmentwide requirements for financial management systems. JFMIP systems requirements, among other things, provide a framework for establishing integrated financial management systems to support program and financial managers. JFMIP also issues financial system requirements for both administrative and programmatic financial management systems. Administrative systems include those generally common to all federal agency operations, such as budget, acquisition, travel, property, and payroll.
Agencies implement programmatic systems as needed to fulfill the agency's mission, such as inventory, grants, insurance and benefit payments, and loans. For example, SSA would need a benefit payment system to fulfill its mission of providing social security and disability payments to the elderly and disabled; however, SSA would not need to implement a loan system since it does not process loans. Figure 2 is the JFMIP model that illustrates how these systems interrelate in an agency's overall systems architecture.

The first of JFMIP's system requirements documents, covering the core financial systems requirements, was issued in 1988. Since then, JFMIP has been issuing system requirements documents covering specific functional areas, such as inventory systems. Most recently, JFMIP issued an exposure draft on Benefit System Requirements in May 2001 and has a project underway to develop system requirements for acquisition systems. Appendix I lists the current publications in the FFMSR series and their issue dates. JFMIP recently updated and revised the Core Financial System Requirements and issued an exposure draft in June 2001. JFMIP tests vendor COTS packages and certifies that they meet current financial management system requirements for core financial management systems. To maintain a certificate of compliance, vendors with qualified software packages must successfully complete any incremental tests required by JFMIP. These tests are conducted to ensure that vendor software offerings are aligned with current federal financial management requirements.

Federal accounting standards, which agency CFOs use in preparing financial statements and in developing financial management systems, are promulgated by the Federal Accounting Standards Advisory Board (FASAB). FASAB develops accounting standards after considering the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information, as well as comments from the public. FASAB forwards the standards to the three principals—the Comptroller General, the Secretary of the Treasury, and the Director of OMB—for a 90-day review. If there are no objections during the review period, the standards are considered final, and FASAB publishes them on its Web site and in print. The American Institute of Certified Public Accountants now recognizes the federal accounting standards promulgated by FASAB as generally accepted accounting principles for the federal government. This recognition enhances the acceptability of the standards, which form the foundation for preparing consistent and meaningful financial statements both for individual agencies and for the government as a whole.

Currently, there are 19 statements of federal financial accounting standards (SFFAS) and 3 statements of federal financial accounting concepts (SFFAC). The concepts and standards are the basis for OMB's guidance to agencies on the form and content of their financial statements and the government's consolidated financial statements. Appendix II lists the concepts, standards, and interpretations along with their respective effective dates. FASAB's Accounting and Auditing Policy Committee (AAPC) assists in resolving issues related to the implementation of accounting standards.
AAPC's efforts result in guidance for preparers and auditors of federal financial statements in connection with implementation of accounting standards and the reporting and auditing requirements contained in OMB's Form and Content Bulletin and Audit Bulletin. To date, AAPC has released five technical releases, which are listed in appendix III along with their release dates.

The SGL was established by an interagency task force through the direction of OMB and mandated for use by agencies in OMB and Treasury regulations in 1986. The SGL promotes consistency in financial transaction processing and reporting by providing a uniform chart of accounts and pro forma transactions used to standardize federal agencies' financial information accumulation and processing, enhance financial control, and support budget and external reporting, including financial statement preparation. The SGL is intended to improve data stewardship throughout the government, enabling consistent reporting at all levels within the agencies and providing comparable data and financial analysis at the governmentwide level.

FFMIA requires an agency head to determine, based on a review of the auditor's report on the agency's financial statements and any other relevant information, whether the agency's financial management systems substantially comply with the act. The agency head is required to make this determination no later than 120 days after (1) the receipt of the auditor's report or (2) the last day of the fiscal year following the year covered by the audit, whichever comes first. If the agency head disagrees with the auditor's determination that the systems do not substantially comply, the Director of OMB is to review the agency head's determination and report to the Congress. If the agency head agrees that the systems do not substantially comply, FFMIA requires that the agency head, in consultation with the Director of OMB, establish a remediation plan to bring the systems into substantial compliance with FFMIA's requirements. According to FFMIA, remediation plans are to include corrective actions, intermediate target dates, and resources necessary to bring the financial management systems into substantial compliance with FFMIA's requirements within 3 years of the date the agency head's noncompliance determination is made. If, with the concurrence of the Director of OMB, the agency head determines that substantial compliance cannot be reached within 3 years, the remediation plan must specify the most feasible date by which the agency's systems will achieve compliance and designate an official responsible for effecting the necessary corrective actions.

In accordance with the revisions to OMB guidance contained in Circular A-11, Preparing and Submitting Budget Estimates, effective July 19, 2000, agencies are to include their remediation plans in their annual budget submissions due to OMB by December 15, 2000. The guidance requires that the plans include corrective actions, resources needed, and interim target dates to bring the financial management systems into substantial compliance within 3 years of the date of the agencies' determination that their systems are not in substantial compliance. The plan must also list the officials responsible for bringing the systems into substantial compliance with FFMIA.
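The timing rule just described reduces to a small date computation. The sketch below encodes it under the assumption that federal fiscal year N ends September 30 of calendar year N; it is an illustration of the statutory rule, not an official calculator.

```python
# Sketch of the FFMIA timing rule: the agency head's compliance
# determination is due no later than 120 days after the earlier of
# (1) receipt of the auditor's report and (2) the last day of the
# fiscal year following the year covered by the audit.
from datetime import date, timedelta

def determination_deadline(report_received: date, audited_fiscal_year: int) -> date:
    # The fiscal year following the audited year ends September 30
    # of the next calendar year.
    following_fy_end = date(audited_fiscal_year + 1, 9, 30)
    trigger = min(report_received, following_fy_end)
    return trigger + timedelta(days=120)

# Example: fiscal year 2000 audit report received March 1, 2001.
print(determination_deadline(date(2001, 3, 1), 2000))  # 2001-06-29
```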
01-02, Audit Requirements for Federal Financial Statements, prescribes specific language auditors should use when reporting on compliance with FFMIA. Second, in a January 4, 2001, memorandum, OMB revised its implementation guidance for agencies and auditors to use in assessing compliance with FFMIA. The revised implementation guidance is to be used for financial reports and audits for fiscal year 2000 and thereafter. This guidance (1) describes the factors that should be considered in determining an agency's systems compliance with FFMIA and (2) provides guidance to agency heads to assist in developing corrective action plans for bringing their systems into compliance with FFMIA. Examples are also provided of the types of indicators that should be used as a basis for assessing whether an agency is in substantial compliance with FFMIA. We reviewed fiscal year 2000 financial statement audit reports for the 24 CFO Act agencies to determine (1) which agencies had systems that their auditors found to be noncompliant with FFMIA requirements, (2) the reasons why the systems were found to be noncompliant, and (3) evidence of agencies' progress in becoming compliant. Using structured interviews, we interviewed agency management and auditors for each of the 24 CFO Act agencies to obtain their perspectives on FFMIA implementation, including the factors that contributed to agencies' systems compliance with FFMIA and the obstacles faced by management in becoming compliant. We also reviewed the auditors' workpapers for the 24 CFO Act agencies to assess the nature and extent of FFMIA testing. We reviewed OMB's FFMIA guidance. To obtain an understanding of the fiscal year 2000 audit requirements, we analyzed OMB's January 4, 2001, memorandum that revised the FFMIA implementation guidance and reviewed OMB Bulletin No. 01-02, Audit Requirements for Federal Financial Statements, and predecessor guidance. Further, we reviewed the guidance for preparing remediation plans for fiscal year 1999 contained in revisions to OMB Circular A-11, Preparing and Submitting Budget Estimates. We reviewed agencies' fiscal year 1999 remediation plans to determine if they contained the required elements and if the corrective actions, if implemented successfully, had a reasonable likelihood of resolving agencies' systems problems. We did not review the agencies' fiscal year 2000 remediation plans because these plans were not due to OMB until September 10, 2001. We compared the fiscal year 1999 remediation plans to those for fiscal year 1998 to determine if they had improved. We held discussions with OMB officials to apprise them of the scope and nature of our work and reviewed applicable federal accounting standards and systems requirements documents. We made inquiries of JFMIP staff to determine recent developments in its efforts to issue new system requirements documents. We conducted our work from January through August 2001 at the 24 CFO Act agencies, OMB, and JFMIP in the Washington, D.C., area in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of OMB or his designee. The Deputy Controller of OMB provided us with written comments. These comments are discussed in the "Agency Comments and Our Evaluation" section and reprinted in appendix V. We also requested oral comments from selected agency officials whose financial management systems or audit procedures are specifically discussed in the report.
These comments have been incorporated as appropriate. Overall, the government continues to face serious financial management systems weaknesses. Agencies have recognized the seriousness of their problems, and OMB has made financial systems reform a priority. Today, there are many ongoing initiatives to address the overarching financial management systems problems that are at the heart of the serious financial management weaknesses prevalent across government. Importantly, financial management systems reform is part of one of the five governmentwide initiatives in the President's Management Agenda. Agencies continue to make progress in addressing their financial management system weaknesses. At the same time, they have a long way to go. The vast majority of these agencies are still not substantially complying with FFMIA's requirements. Auditors for 19 of the 24 CFO Act agencies reported that for fiscal year 2000, the agencies' systems did not comply substantially with one or more FFMIA requirements—federal systems requirements, federal accounting standards, or the SGL. For fiscal year 2000, 7 agencies were reported not in substantial compliance with all 3 FFMIA requirements; 18 were reported not in substantial compliance with systems requirements; 12 were reported not in substantial compliance with accounting standards; and 8 were reported not in substantial compliance with the SGL. Auditors for five agencies—the Department of Energy, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), the Small Business Administration (SBA), and the General Services Administration (GSA)—reported that the results of their tests disclosed no instances in which the agencies' systems did not substantially comply with the three requirements of FFMIA. By reporting negative assurance, which is the form of reporting called for in OMB Bulletin No. 01-02, these auditors are not saying that the systems are in substantial compliance, but that the results of their tests disclosed no instances in which the agencies' systems did not substantially comply with FFMIA. We discuss this issue in more detail later in this report. Figure 3 summarizes the auditors' FFMIA determinations for fiscal year 2000. Eighteen of the 24 CFO Act agencies received unqualified audit opinions on their financial statements for fiscal year 2000, up from 15 in fiscal year 1999, 12 in fiscal year 1998, and 6 in fiscal year 1997. This represents steady progress and a lot of hard work by the agency CFOs and their staffs, OMB, Treasury, and the audit community. At the same time, auditors for 13 of the 18 agencies that received unqualified opinions reported that the agencies' financial systems did not comply substantially with FFMIA's requirements in fiscal year 2000. In many instances, agencies have been able to obtain unqualified audit opinions only through extensive labor-intensive efforts, which include expending significant resources on extensive ad hoc procedures and making billions of dollars in adjustments to derive financial statements. This is usually the case when agencies have inadequate systems that are not integrated and routinely reconciled. The President's Management Agenda calls these efforts "extraordinary, labor-intensive assaults on financial records." These time-consuming procedures must be combined with sustained efforts to improve agencies' underlying financial management systems and controls.
If agencies continue year after year to rely on significant, costly, and time-intensive manual efforts to achieve or maintain unqualified opinions, those opinions can mislead the public as to the true status of agencies' financial management capabilities. In such a case, an unqualified opinion would become an accomplishment without much substance. Although the CFO Act agencies face challenges, some formidable, in preparing financial statements, all issued their financial statements on time. In a prior report, we recommended that OMB work with the agencies to ensure that the agencies' financial statements are audited and issued by the March 1 statutory deadline. For fiscal year 2000, all 24 of the CFO Act agencies met the March 1 statutory due date, 5 months after the end of the fiscal year. In comparison, in fiscal year 1999, 5 of the 24 CFO Act agencies issued their audited statements after the statutory due date. Going forward, we anticipate that reporting timeframes will tighten and due dates will move earlier, which will intensify the need to overhaul financial management systems. The Director of OMB and the Secretary of the Treasury have indicated that more timely and more frequent financial statement reporting will be an objective to help improve financial management. As a first step, OMB recently proposed that agencies prepare and issue unaudited interim financial statements starting with the 6-month period ending March 31, 2002, and submit these statements to OMB by May 31, 2002. Beginning with fiscal year 2003, OMB will require agencies to prepare and submit unaudited interim financial statements on a quarterly basis. Also, for fiscal year 2002, OMB has moved the reporting date from March 1 to February 1. We support these actions by OMB. Financial statement audit results are key indicators of the quality of agency financial data at year-end and provide an annual public scorecard on accountability. While the increase in the number of agencies receiving unqualified audit opinions is noteworthy, it is only one of three indicators of the quality of financial management information. Having effective internal controls that help managers achieve agencies' missions and program results and minimize operational problems, along with financial management systems that routinely generate reliable, useful, and timely information, are also key indicators of the quality of an agency's financial management information. As shown in figure 4, fully integrated financial systems, reliable and timely financial statements, and effective internal control serve as indicators of an entity's financial management health. A clean audit opinion, by itself, provides credibility to an agency's financial statements only as of the date of those statements—the last day of the fiscal year. It provides no assurance about the effectiveness or efficiency of the financial systems used to prepare the statements, the quality of internal control, or whether the systems can produce reliable data for decision-making purposes on demand throughout the year. For example, the Department of the Treasury received its first unqualified opinion on its fiscal year 2000 departmentwide financial statements. However, like several other agencies, despite the unqualified opinion, Treasury's IG reported that Treasury's systems did not comply substantially with FFMIA.
For example, the Internal Revenue Service (IRS) continues to face most of the pervasive systems and internal control weaknesses that we have reported each year since we began auditing IRS' financial statements in fiscal year 1992. As discussed in our report on the IRS fiscal year 2000 financial statements, IRS' unqualified opinion was the culmination of 2 years of extraordinary efforts on the part of IRS senior management and staff to develop compensating processes to work around its serious systems and control weaknesses to derive year-end balances for its financial statements. Top management at IRS and its staff are to be applauded for their dedication that resulted in an unqualified opinion. While IRS' efforts did address several management issues we raised in previous audits, its approach to obtaining an unqualified opinion on its fiscal year 2000 financial statements relied heavily on costly, time-consuming, and labor-intensive efforts, including the need for statistical projections, external contractor support, substantial adjustments, and monumental human efforts that extended well after the fiscal year-end. This was particularly the case for reporting amounts for both tax receivables and property and equipment. Because IRS' systems cannot accurately track amounts representing taxes receivable, IRS has for the past 4 years employed a complex statistical sampling process to derive the balance reported on its financial statements; this process takes months to complete, requires extensive human and financial resources, and results in tens of billions of dollars in adjustments annually to present a balance that is good for one day only. Additionally, because IRS does not have an adequate property management system, it had to use contractors to (1) perform statistical sampling procedures to derive a reliable balance for property and equipment in fiscal year 1999; and (2) analyze fiscal year 2000 transactions to derive the September 30, 2000, balance for property and equipment, a process that extended into February 2001. Situations such as those at IRS demonstrate the tremendous efforts many agencies make to produce auditable annual financial statements. These agencies undertake far more work to prepare financial statements than would be necessary if they had basic financial systems in place to routinely provide both the data for financial statements and management information. The financial statement preparation and audit process puts a tremendous strain on the staff of the CFO and the auditors and diverts resources from correcting the underlying problems. To quote the Secretary of the Treasury, "it takes the federal government 5 months to close our books…This is not the stuff of excellence." As we discuss below, one of the main problems agencies face is the lack of an integrated financial management system. Having an effective, integrated financial management system that can produce financial information in a timely manner minimizes the need for time-consuming and costly procedures to prepare financial statements and, most importantly, provides the information needed to manage on an ongoing basis. To remedy their problems, agencies are in the process of either implementing new core financial systems or upgrading their current systems to lay the foundation for compliance with FFMIA. In this regard, by far DOD faces the most complex and difficult challenges of any agency. Today, DOD relies on an overly complex and error-prone network of systems that are not integrated.
Millions of transactions must be manually keyed and rekeyed into the vast number of systems involved in any given DOD business process. Weak systems and controls leave DOD vulnerable to fraud and improper payments. In addition, as we recently testified, the lack of an effective network of systems and the related inability to obtain reliable cost and budget information severely constrain DOD's ability to maintain adequate funds control, measure performance, reduce costs, and maintain effective accountability over its estimated $1 trillion investment in weapon systems and inventories. Because of the unparalleled size and complexity of DOD's operations, along with the serious, deeply entrenched nature of its financial management system deficiencies, it will not be possible to fully implement an integrated system structure overnight. Such a dramatic transformation will require a sustained effort over a number of years. As discussed later in this report, the Secretary of Defense and the DOD Comptroller have stated that priority will be given to financial management reform. Other agencies, such as the Departments of Commerce, Agriculture, and Education, SBA, GSA, and AID, are planning or are in the process of implementing new systems. And yet others, such as NASA, have failed in recent attempts to successfully implement new systems and are starting over. One thing that stands out, though, is that across the board, agencies have recognized their shortcomings and, assisted by OMB and with input from the audit community, are working to modernize their financial management systems and processes. In addition, several agencies have made progress in other areas aimed at improving their financial management systems. For example, in fiscal year 2000, the Department of Agriculture IG reported the establishment of a Senior Executives group to develop a corporate strategy, including a budget and timeframes, for administrative and financial system changes to the agency's various systems. According to Agriculture officials, this was a clear sign that senior-level executives for the department were acknowledging the need to develop overall agency financial systems rather than continue to rely on multiple stand-alone subsystems. In another instance, auditors for the Department of Education reported that the agency made progress in strengthening controls over IT processes. The auditors also reported that the implementation of new controls and the reinforcement of existing controls increased the effectiveness of internal controls in areas such as IT planning and security management. Based on our review of fiscal year 2000 audit reports for the 19 agencies whose systems were reported to be substantially noncompliant with FFMIA, we identified 6 primary reasons, either cited by the auditors or identified in our structured interviews with agency officials, why their systems were noncompliant: lack of integrated financial management systems, inadequate reconciliation procedures, lack of accurate and timely recording of financial information, noncompliance with the SGL, lack of adherence to federal accounting standards and/or OMB requirements, and weak security controls over information systems. Figure 5 shows the relative frequency of these problems at the 19 agencies with noncompliant systems and the problems relevant to FFMIA that were reported by their auditors or obtained through interviews with agency officials.
Auditors reported these problems among the weaknesses identified during the audits; however, the auditors may not have reported the problems as specific reasons why they concluded that the agencies' systems did not substantially comply with FFMIA. We included all weaknesses relevant to FFMIA identified by the auditors because such problems must be resolved in order for the agencies' systems to generate the reliable, useful, and timely information needed for decision-making. Also, the reported problems may not be all-inclusive. For some agencies, the problems are so serious and well known that the auditor can readily determine the systems to be noncompliant without examining every facet of FFMIA compliance. One of the federal financial management systems requirements is that agencies' financial management systems be integrated. The CFO Act calls for agencies to develop and maintain an integrated accounting and financial management system that complies with federal systems requirements and provides for (1) complete, reliable, consistent, and timely information that responds to the financial information needs of the agency and facilitates the systematic measurement of performance; (2) the development and reporting of cost information; and (3) the integration of accounting, budgeting, and program information. In this regard, OMB Circular A-127, Financial Management Systems, requires agencies to establish and maintain an integrated financial management system that conforms with JFMIP's functional requirements. An integrated financial system coordinates a number of functions to improve overall efficiency and control. When agencies do not have an integrated financial management system—which includes administrative and program systems that maintain financial information, such as budgeting, logistics, personnel, acquisition, and property systems—they are often forced to rely on ad hoc programming, analysis, or actions such as duplicative transaction entries. In these situations, agencies must expend major efforts and resources to generate financial information that their systems should be able to provide on a daily or recurring basis. The lack of integrated financial systems is a continuing serious problem for most agencies. Based on discussions with agency officials at the 19 agencies that were reported to be noncompliant with FFMIA, the lack of an integrated system and of adequate funding to replace old systems were key obstacles to achieving compliance with FFMIA. Some agencies rely heavily on external consultants to develop financial information. The results of our work showed that 13 of the 24 CFO Act agencies used the assistance of contractors in preparing their financial statements because their systems were not able to produce this information. Many of these officials agreed that a key to improving financial management and complying with FFMIA is to have an integrated financial system that provides reliable, useful, and timely information that managers can use for day-to-day operations. However, according to these officials, upgrading or replacing existing systems requires funding and a strong commitment from management, which many of them said they did not have in the past. In this regard, the President's Management Agenda makes clear the commitment of the President to ensure that federal financial systems produce accurate and timely information. As shown in figure 5, auditors for 13 of the 19 agencies with noncompliant systems reported lack of integrated systems as a problem.
To illustrate, VA achieved an unqualified opinion on its fiscal year 2000 consolidated financial statements, but doing so required a significant amount of resources and manual processes. Auditors for VA noted continued difficulties related to the preparation, processing, and analysis of financial information to support the preparation of VA's consolidated financial statements. Considerable manual work-arounds and "cuff," or out-of-date, feeder systems are still in place because VA has not yet completed its transition to a new, fully integrated financial management system, the Core Financial and Logistics System. As a result, significant efforts were made at the component and consolidated levels to assemble, compile, and review the necessary financial information for annual financial reporting requirements. Specifically, auditors noted that a significant number of adjustments were recorded as part of the year-end closing process, many to record additional activities—both budgetary and proprietary—not reflected in the general ledger prior to the year-end close. The general ledgers for some smaller funds are maintained outside the existing core financial management system. Thus, until the new system is successfully implemented and functional, a significant amount of resources will be devoted to preparing the financial statements. We recently reported that NASA could not provide detailed support required in time for our audit of its space station or shuttle obligations because it does not have an integrated financial management system. According to NASA officials, transaction-level obligation data are available at NASA's 10 space centers on separate and different financial systems. NASA officials also told us that NASA has long-term plans for implementing an integrated financial management system that will make access to detailed obligation data more readily available. Further, as we discussed in our performance and accountability series report, according to NASA, the agency's financial management environment is composed of decentralized, nonintegrated systems with policies, procedures, and practices that are unique to its field centers. For the most part, data formats are not standardized, automated systems are not interfaced, and on-line financial information is not readily available to program managers. Thus, it is difficult to ensure that contracts are being efficiently and effectively implemented and that budgets are executed as planned. In addition, NASA has pointed out that the cost to maintain these systems has been high, since both data and software are replicated at each field center. Deficiencies in agencies' automated systems, including the lack of integrated systems, can also contribute to improper payments. The reported estimates of improper payments across the government totaled approximately $20 billion for both fiscal years 2000 and 1999. These improper payments frequently occur because agency personnel lack needed information, rely on inaccurate data, and/or do not have timely information. For example, we identified issues related to the National Institutes of Health's (NIH) oversight and monitoring of grant recipients—an area with over $17 billion appropriated in fiscal year 2000 to conduct and sponsor biomedical research. Among other things, there were discrepancies between the data in NIH's management, payment, and accounting systems. These discrepancies affected the accuracy of grant award amounts.
These system deficiencies could result in NIH's erroneously awarding grants to ineligible grant recipients and in funds being used for improper purposes. If these systems were integrated, NIH would have fewer discrepancies in its data and would need to devote substantially less effort to ensuring that the data across those three functions were consistent. According to HHS officials, NIH has implemented compensating controls to address these systems deficiencies. Moreover, system deficiencies are also a factor for DOD. DOD's payment process suffers from nonintegrated computer systems that require data to be entered multiple times, sometimes manually, which substantially increases the risk of incorrect payments and overpayments. A reconciliation process, even if performed manually, is a valuable part of a sound financial management system. The general maxim is that the less integrated the financial management system, the greater the need for adequate reconciliations, because data for the same transaction may be separately entered in multiple systems. Reconciliation of records from the multiple systems would ensure that transaction data were entered correctly in each one. Reconciliation procedures are a necessary control for maintaining and substantiating the accuracy of the data reported in an agency's financial statements and reports. The Comptroller General's Standards for Internal Control in the Federal Government highlight reconciliation as a key control activity. As shown in figure 5, auditors for 16 of the 19 agencies with reported noncompliant systems reported that the agencies had reconciliation problems, including difficulty reconciling their Fund Balance with Treasury accounts with the Department of the Treasury's records. Treasury policy requires agencies to reconcile their accounting records with Treasury records monthly, which is comparable to individuals reconciling their checkbooks to their monthly bank statements. However, such reconciliations are not being routinely performed. For example, Agriculture's Office of the Chief Financial Officer/National Finance Center's (OCFO/NFC) Fund Balance with Treasury account had not been properly reconciled with Treasury records since 1992. In its audit report on Agriculture's fiscal year 1998 financial statements, the IG reported that the absolute value of the differences between OCFO/NFC's and Treasury's records was $4.4 billion for disbursements and $383 million for deposits as of September 30, 1998. In fiscal year 2000, Agriculture contracted with a public accounting firm to assess OCFO/NFC's reconciliation efforts, provide recommendations for resolving the reconciliation problem, assist in leading the actual reconciliations, and recommend ways to improve the overall reconciliation process. The IG recently reported that the absolute value of the out-of-balance amount for Agriculture's Central Accounting System had been reduced to $226 million as of September 30, 2000. The OCFO proposed a one-time adjustment to write off $160 million of the total $226 million. In another instance, the Department of Health and Human Services (HHS) made numerous adjustments at year-end to correct errors and to develop accurate financial statements. Many of these adjustments would not have been necessary had management routinely reconciled and analyzed accounts throughout the year.
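The mechanics of such a reconciliation are simple enough that routine failure to perform it is largely a matter of management priority rather than technical difficulty. The following minimal sketch, written in Python, illustrates the basic matching logic: agency-recorded documents are compared with Treasury's records, and any missing or mismatched amounts are flagged for research. The record layout, field names, and amounts are illustrative assumptions, not any agency's actual data or system.

    # Minimal sketch of a Fund Balance with Treasury reconciliation.
    # All records and field names below are illustrative assumptions.
    agency_records = [
        {"doc_id": "V1001", "type": "disbursement", "amount": 1250.00},
        {"doc_id": "V1002", "type": "disbursement", "amount": 980.50},
        {"doc_id": "D2001", "type": "deposit", "amount": 4400.00},
    ]
    treasury_records = [
        {"doc_id": "V1001", "type": "disbursement", "amount": 1250.00},
        {"doc_id": "V1002", "type": "disbursement", "amount": 890.50},  # keying error
        {"doc_id": "D2001", "type": "deposit", "amount": 4400.00},
    ]

    def reconcile(agency, treasury):
        """Return documents whose amounts differ or are missing from Treasury's records."""
        treasury_by_id = {rec["doc_id"]: rec for rec in treasury}
        differences = []
        for rec in agency:
            match = treasury_by_id.get(rec["doc_id"])
            if match is None:
                differences.append((rec["doc_id"], rec["amount"], None))
            elif match["amount"] != rec["amount"]:
                differences.append((rec["doc_id"], rec["amount"], match["amount"]))
        return differences

    for doc_id, agency_amt, treasury_amt in reconcile(agency_records, treasury_records):
        print(f"{doc_id}: agency recorded {agency_amt}, Treasury recorded {treasury_amt}")

Performed monthly, each flagged item can be researched and corrected while the supporting documentation is still fresh; skipped, the differences accumulate into the large out-of-balance amounts described below.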
For example, the HHS IG reported that differences between the Administration for Children and Families' (ACF) Fund Balance with Treasury account and Treasury's records ranged from $200 million to $6.3 billion at various times during fiscal year 2000. Accurate and timely recording of financial information is key to successful financial management. Recording transactions in the general ledger in a timely manner facilitates accurate reporting in agencies' financial reports and other management reports that are used to guide managerial decision-making. The Comptroller General's Standards for Internal Control in the Federal Government state that transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. As shown in figure 5, auditors for 14 of the 19 agencies with reported noncompliant systems found that the agencies did not record transactions accurately and in a timely manner in the general ledger. For example, the Department of Commerce IG reported that $270 million in appropriations for two of the agency's programs was not recorded in the general ledger until 6 months after the apportionment for these appropriations was issued. According to SFFAS No. 7, Accounting for Revenue and Other Financing Sources, appropriations should be recognized when available to the agency to be apportioned. The IG reported that the apportionment for these two programs was issued on September 30, 1999, and should have been recorded in the general ledger at that time. According to the IG, the failure to record these appropriations was due to confusion among agency officials as to the fiscal year in which the program was established and where the program should be recorded. Because the agency did not include these appropriations in its fiscal year 1999 financial statements, during fiscal year 2000 a prior period adjustment of $270 million was made to properly recognize the budget authority for the two programs. In other instances, auditors for five agencies reported that unliquidated obligations were not deobligated on a timely basis due to the lack of procedures for reviewing unliquidated obligations or the failure to follow these procedures. For example, auditors for EPA reported that although EPA was aggressive during fiscal year 2000 in identifying and deobligating invalid obligations, EPA's annual process for reviewing inactive unliquidated obligations for validity still needed improvement. The annual review by EPA management revealed that, due to significant backlogs, EPA did not process and deobligate inactive unliquidated obligations in a timely manner. As a result of the weaknesses identified in its annual review, EPA performed a special review to obtain a more accurate accounting of its unliquidated obligations. In fiscal year 2000, the special review identified $26.5 million in open unliquidated obligations that should have been deobligated by September 30, 2000. EPA had to make a $26.5 million adjustment to more accurately present its Statements of Financing and Budgetary Resources. Implementing the SGL at the transaction level is one of the specific requirements of FFMIA. Applying the SGL at the transaction level means that a financial management system will process transactions following the SGL definitions of the general ledger accounts.
Specifically, compliance with the SGL at the transaction level requires that (1) data used in financial reports be consistent with the SGL, (2) transactions be recorded consistently with SGL accounting transaction definitions and processing rules, and (3) transaction detail supporting SGL accounts be directly traceable to specific SGL account codes. Agencies that have not implemented the SGL are challenged to provide consistent financial information across their component entities and functions. Such differences have contributed to our disclaimer of opinion on the U.S. government's consolidated financial statements for the last 4 fiscal years because the government could not ensure that the information in its financial statements was properly and consistently compiled. As shown in figure 5, auditors for 8 of the 19 agencies with noncompliant systems reported that the agencies' systems did not comply with the SGL requirement for fiscal year 2000. This compares with the 14 agencies that were reported in noncompliance with SGL requirements in fiscal year 1999. An example of improvement is the Department of Labor, where the IG reported that for fiscal year 2000, management took steps to improve the financial accounting for back wages with the design and implementation of Labor's new Back Wage Collection and Disbursement System. With the improvements made, the IG concluded that the new system was substantially in compliance with the SGL. Other agencies are working to become SGL compliant. For example, the HUD IG reported that HUD was not compliant with the SGL at the transaction level. The Federal Housing Administration (FHA), a major component of HUD, provides consolidated summary-level data to HUD's Central Accounting and Program System (HUDCAPS). FHA has 19 subsidiary systems that feed transactions to its own commercial general ledger system. To provide consolidated summary-level data from FHA to HUDCAPS, FHA used numerous manual procedures, including the use of personal computer-based software to convert its commercial accounts-based general ledger to the government SGL and then transfer the account balances to HUDCAPS. During fiscal year 2000, FHA purchased a COTS financial system to replace its current system. FHA management anticipates that the implementation of the new accounting system will result in FHA's compliance with the requirement for automated posting of transactions to SGL accounts. One of FFMIA's requirements is that agencies' financial management systems comply with federal accounting standards. Agencies face significant challenges implementing these standards. As shown in figure 5, auditors for 12 of the 19 agencies with reported noncompliant systems reported that the agencies had problems complying with one or more of these standards. Some agencies have experienced difficulty implementing the standards because their financial management systems are not capable of producing the financial data needed. The standards most often cited by the auditors relate to managerial cost accounting; property, plant, and equipment; accounting for inventory and related property; and accounting for revenue and other financing sources. FASAB continues to deliberate on new and emerging accounting issues that could result in its issuing additional standards; therefore, agencies' systems must also be flexible enough to accommodate any standards that may be issued in the future.
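To make the transaction-level SGL requirement described above concrete, the sketch below shows a ledger that posts every transaction directly against SGL account codes, so that any reported balance can be traced back to its supporting transactions. This is a simplified illustration of our own devising, not any agency's system: the account codes shown follow the SGL's numbering convention (for example, 2110 for accounts payable and 6100 for operating expenses), but the classes and posting logic are illustrative assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class JournalEntry:
        """One debit/credit pair posted against SGL account codes (illustrative)."""
        doc_id: str
        debit_account: str    # e.g., "6100", operating expenses under the SGL
        credit_account: str   # e.g., "2110", accounts payable under the SGL
        amount: float

    @dataclass
    class Ledger:
        entries: list = field(default_factory=list)

        def post(self, entry: JournalEntry) -> None:
            self.entries.append(entry)

        def balance(self, account: str) -> float:
            # Simplified sign convention: debits increase, credits decrease.
            debits = sum(e.amount for e in self.entries if e.debit_account == account)
            credits = sum(e.amount for e in self.entries if e.credit_account == account)
            return debits - credits

        def trace(self, account: str) -> list:
            """Every balance remains directly traceable to supporting transactions."""
            return [e for e in self.entries
                    if account in (e.debit_account, e.credit_account)]

    ledger = Ledger()
    ledger.post(JournalEntry("INV-001", debit_account="6100", credit_account="2110", amount=500.0))
    ledger.post(JournalEntry("INV-002", debit_account="6100", credit_account="2110", amount=750.0))
    print(ledger.balance("6100"))   # 1250.0 in operating expenses
    print(ledger.trace("2110"))     # the two transactions behind the payables balance

Because each transaction carries its SGL codes from the moment it is recorded, report balances roll up from, and can be drilled back down to, individual transactions, which is the consistency FFMIA's SGL requirement is meant to ensure. FHA's manual conversion from a commercial chart of accounts, by contrast, breaks this traceability.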
A major cornerstone of FFMIA is good cost accounting information that program managers can use in managing day-to-day operations. Managerial cost accounting is aimed at providing reliable and timely information on the full cost of federal programs, their activities, and their outputs. The cost information can be used by the Congress and federal executives in making decisions about allocating federal resources, authorizing and modifying programs, and evaluating program performance. Developing the necessary information, which is also needed to support GPRA implementation, will be a substantial undertaking. Of the 12 agencies whose systems were reported to be noncompliant with one or more of the federal accounting standards, 7 were reported in noncompliance with SFFAS No. 4, Managerial Cost Accounting Concepts and Standards. However, as mentioned earlier, if an agency had serious problems overall, the auditor may not have reviewed every area for compliance, so the extent of this specific shortcoming may be greater. Our sense is that few agencies today have good cost accounting information. The seven agencies whose systems were reported by their auditors as noncompliant with the cost accounting standard are not able to provide timely full cost information and at best can provide this information only at the end of the fiscal year through periodic cost surveys or other cost-finding techniques. The lack of timely cost information seriously impairs the capacity to make informed managerial decisions on a daily basis, precludes meaningful and timely reporting on performance measures, and could result in project cost overruns and program inefficiencies. Performance information is necessary to determine the value of government programs and their success in achieving their goals. Further, the move to implementation of performance-based budgeting highlights the need for cost accounting information at the program level. If program managers are going to be more accountable for the achievement of output targets, they will need timely, accurate information on the cost of their programs. At present, program managers do not always have information on, or control of, the full costs of support services, retirement, and other nondirect costs associated with their programs. For example, the IG for AID reported that the agency did not comply with the five fundamental elements of managerial cost accounting. AID's current financial management system does not provide complete, reliable, timely, or consistent information. Specifically, missions cannot determine the cost of their program strategic objectives. Furthermore, AID does not have cost allocation tools to utilize detailed administrative and program cost information from overseas accounting stations. As a result, AID is not able to assign costs to organizations, locations, projects, programs, or activities. The IG for DOT reported that the Federal Aviation Administration (FAA) has made progress implementing its cost accounting system but still has much to do. FAA's actual cost for air traffic controller and airways facilities maintenance labor, estimated at $3.4 billion for fiscal year 2001, cannot be broken down further to a specific shift of air traffic control or airways facilities maintenance. Therefore, FAA cannot develop potentially useful information, such as the cost associated with a particular shift. FAA's labor costs are more than half of its total costs.
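The FAA example suggests what capturing cost data at the activity level makes possible. In the minimal sketch below, labor costs tagged by activity and shift can be rolled up to answer questions such as the cost of a particular shift; without those tags, only the aggregate total is available. The records, tags, and amounts are purely illustrative assumptions, not FAA data.

    from collections import defaultdict

    # Illustrative labor-cost records tagged by activity and shift; as reported,
    # FAA's systems capture only the aggregate, not these lower-level tags.
    labor_costs = [
        {"activity": "air_traffic_control", "shift": "day", "cost": 410000},
        {"activity": "air_traffic_control", "shift": "night", "cost": 295000},
        {"activity": "facilities_maintenance", "shift": "day", "cost": 180000},
    ]

    def roll_up(records, *keys):
        """Aggregate cost by any combination of tags (activity, shift, and so on)."""
        totals = defaultdict(float)
        for rec in records:
            totals[tuple(rec[k] for k in keys)] += rec["cost"]
        return dict(totals)

    print(roll_up(labor_costs, "activity"))           # full cost by activity
    print(roll_up(labor_costs, "activity", "shift"))  # cost of a particular shift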
An effective cost accounting system that fully accounts for labor cost by activities and services would allow FAA to identify areas of low productivity and high cost, as well as areas of high productivity and cost efficiency. While we recently reported that NASA did not have needed cost accounting data for the actual costs of completed space station components, its auditors reported that the results of their tests disclosed no instances in which NASA's systems did not comply substantially with FFMIA. The results of our work raise questions about NASA's compliance with SFFAS No. 4, Managerial Cost Accounting Concepts and Standards. NASA's systems do not track and maintain cost data for NASA's completed space station components. Because NASA does not attempt to track these costs, the agency does not know the actual cost of completed space station components and is not able to re-examine its cost estimates for validity once costs have been realized. Further, as discussed earlier, NASA does not have an integrated financial management system. These issues raise questions about management's assertion regarding compliance with FFMIA. Information security weaknesses are one of the primary causes of agencies' systems noncompliance with FFMIA. As a result, federal assets continue to be at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. Significant computer security weaknesses in systems that handle the government's unclassified information continue to be reported in each of the major federal agencies. As shown in figure 5, auditors for all 19 agencies with reported noncompliant systems reported information security weaknesses as a problem in fiscal year 2000. Our high-risk series report shows that all of the 24 CFO Act departments and agencies have significant computer security weaknesses. The computer security weaknesses covered the full range of computer security controls. For example, physical and logical access controls were not effective in preventing and detecting system intrusions and misuse. In addition, software change controls were ineffective in ensuring that only properly authorized and tested software programs were implemented. Further, duties were not adequately segregated to reduce the risk that one individual could execute unauthorized transactions or software changes without detection. Finally, sensitive operating system software was not adequately controlled, and adequate steps had not been taken to ensure continuity of operations. The risks associated with these weaknesses are heightened by the increasing interconnectivity of today's computerized systems and the use of the Internet, which further exposes them to outside hackers. The Standards for Internal Control in the Federal Government highlight the need for adequate control over automated information systems to ensure protection from inappropriate access and unauthorized use by hackers and other trespassers or inappropriate use by agency personnel. Unresolved information security weaknesses could adversely affect the ability of agencies to produce accurate data for decision-making and financial reporting because such weaknesses could compromise the reliability and availability of data that are recorded in or transmitted by an agency's financial management system.
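Some of these controls are simple to automate within a financial system. As one illustration, the sketch below enforces segregation of duties by rejecting any transaction whose approver is also its initiator; the role model, names, and records are hypothetical assumptions, not a description of any agency's software.

    # Minimal segregation-of-duties control: the same individual may not both
    # initiate and approve a transaction. All names and records are hypothetical.
    class SegregationOfDutiesError(Exception):
        pass

    def approve(transaction, approver):
        """Record an approval, rejecting any attempt at self-approval."""
        if approver == transaction["initiator"]:
            raise SegregationOfDutiesError(
                f"{approver} initiated {transaction['doc_id']} and cannot approve it"
            )
        transaction["approver"] = approver
        return transaction

    payment = {"doc_id": "PAY-7431", "initiator": "jsmith", "amount": 12400.00}
    approve(payment, "mjones")       # accepted: initiator and approver differ
    try:
        approve(payment, "jsmith")   # rejected: initiator cannot self-approve
    except SegregationOfDutiesError as err:
        print(err)

Automated checks of this kind address only one control among many; as the audits show, the weaknesses span access controls, change controls, and continuity of operations, and the aggregate risk remains high.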
The degree of risk caused by security weaknesses is extremely high and places a broad array of federal operations and assets at risk of fraud, misuse, and disruption. For example, weaknesses at the Department of the Treasury increase the risk of fraud associated with billions of dollars of federal payments and collections, and weaknesses at DOD increase the vulnerability of various military operations. Further, information security weaknesses place enormous amounts of confidential data, ranging from personal and tax data to proprietary business information, at risk of inappropriate disclosure. One of our most recent reports on computer security highlights significant and pervasive computer security weaknesses that place sensitive Department of Commerce systems at risk. Individuals both within and outside Commerce could gain unauthorized access to these systems and thereby read, copy, modify, and delete sensitive economic, financial, personnel, and confidential business data. Moreover, intruders could disrupt the operations of systems that are critical to the mission of the department. Poor detection and response capabilities at the Commerce bureaus we reviewed increase the likelihood that incidents of unauthorized access to sensitive systems will not be detected in time to prevent or minimize damage. Commerce's weaknesses were attributable to the lack of an effective information security program, that is, the lack of centralized management, a risk-based approach, up-to-date security policies, security awareness and training, and effective implemented controls. These weaknesses are exacerbated by Commerce's highly interconnected computing environment, in which the vulnerabilities of individual systems affect the security of systems across the entire department, since a compromise in a single poorly secured system can undermine the security of the multiple systems that connect to it. Similarly, in another recent report, in spite of progress made in correcting computer security weaknesses previously identified by the Interior IG and other steps taken to improve security, our review of Interior's information system general controls identified additional weaknesses at its National Business Center (NBC) in Denver, CO. These weaknesses affected the center's ability to (1) prevent and detect unauthorized changes to financial information, including payroll and other payment data; (2) control electronic access to sensitive personnel information; and (3) restrict physical access to sensitive computing areas. The effect of these weaknesses is to place sensitive NBC-Denver financial and personnel information at risk of unauthorized disclosure, critical financial operations at risk of disruption, and assets at risk of loss. These weaknesses and risks also affect other agencies that use computer-processing services at NBC-Denver. In recognition of these serious security weaknesses, we and the Inspectors General have issued numerous reports that identify computer security weaknesses in the federal government and have made recommendations to agencies regarding specific steps they should take to make their security programs more effective. Also, in 2001, we again reported information security as a high-risk area across government, as we did in our 1997 and 1999 high-risk series. In addition, we have identified best practices for improving information security management, which we published in two guides.
Our guides are consistent with guidance on information security program management provided to agencies by OMB and the National Institute of Standards and Technology (NIST). Further, recognizing the highly networked federal computing environment and the resulting need for improved security management measures, the Congress enacted the Government Information Security Reform (GISR) provisions as part of the fiscal year 2001 Defense Authorization Act. The legislation seeks to provide a comprehensive framework for establishing and ensuring the effectiveness of information security controls over information resources that support federal government operations and assets. GISR requires agencies to implement an information security program that is founded on a continuing risk management cycle and largely incorporates existing security policies found in OMB Circular A-130, Appendix III. GISR also added an important new requirement by calling for both annual management and independent evaluations of the information security program and practices of an agency. The results of these reviews, which are initially scheduled to become available in late 2001, will provide a more complete picture of the status of federal information security than currently exists, thereby giving the Congress and OMB an improved means of overseeing agency progress and identifying areas needing improvement. OMB's current FFMIA implementation guidance, which was revised on January 4, 2001, and was effective for fiscal year 2000 audits, provides information for auditors to consider in evaluating and reporting audit results. This guidance requires auditors to plan and perform their audit work in sufficient detail to enable them to determine the degree of compliance and report on instances of noncompliance for all of the applicable FFMIA requirements. We agree with this objective. The guidance describes specific minimum requirements that agency systems must meet to achieve compliance and provides indicators of compliance. The FFMIA implementation guidance also indicates that auditors should report on FFMIA compliance as part of the financial statement audit process based upon OMB Bulletin No. 01-02, Audit Requirements for Federal Financial Statements. OMB Bulletin No. 01-02 states that auditors shall perform tests of the entity's compliance with FFMIA. In providing guidance on reporting on substantial compliance with FFMIA, OMB Bulletin No. 01-02 states that auditors should report that "the results of our tests disclosed no instances in which the agency's financial management systems did not substantially comply" with FFMIA's requirements. In contrast, FFMIA requires the auditors to "…report whether the agency financial management systems comply with the requirements of" the act. This is an important distinction because, under auditing standards, the terminology "disclosed no instances" means that the auditor is providing negative assurance. Under generally accepted government auditing standards, only limited incidental testing is necessary for an auditor to give negative assurance. However, to "report whether," or to provide positive assurance, auditors need to perform sufficient testing to draw a conclusion. Auditors for the five agencies that were not reported to be noncompliant with FFMIA provided negative assurance in accordance with OMB guidance. If readers of the report do not understand this distinction, they may have the false impression that the auditors reported the systems to be substantially compliant.
Today, for most agencies, systems deficiencies are well known and well documented; based on this knowledge and on other audit work the auditor may have performed outside of the financial statement audit, the auditor may have sufficient knowledge to conclude that an agency is not in substantial compliance with FFMIA without performing additional testing beyond that needed for the financial statement audit opinion. The auditors for the 19 agencies whose systems were reported to be noncompliant with FFMIA for fiscal year 2000 told us they relied on knowledge obtained from prior years' audits or on the internal control and compliance-with-laws-and-regulations testing performed during the current year's financial statement audits. However, to provide positive assurance when assessing substantial compliance with FFMIA requirements, sufficient testing is needed. Some of the promising audit procedures noted during our review included the use of detailed audit programs and assessments of financial systems' functionality. For example, auditors for 7 of the 24 agencies—the Department of Energy, AID, NSF, EPA, HUD, NRC, and OPM—designed and used separate FFMIA audit programs to test for compliance. Other procedures that auditors could perform to provide positive assurance when assessing compliance with FFMIA include using the GAO and JFMIP checklists that were developed as assessment tools. For example, the auditors for NRC used the GAO checklist to determine systems' compliance with JFMIP systems requirements, while the auditors for the Department of Labor used the JFMIP checklist to determine the agency's core financial system's compliance with FFMIA. For both agencies, the auditors reported that the agencies' systems were not compliant with FFMIA. GAO and the President's Council on Integrity and Efficiency (PCIE) recently issued a joint Financial Audit Manual (GAO/PCIE FAM). This manual provides the methodology for performing financial statement audits of federal entities. Section 350 of this manual describes the procedures auditors should follow in determining the nature, timing, and extent of control tests and of tests for systems' compliance with FFMIA requirements. Specifically, the manual states that, in determining the nature and extent of audit work needed, the auditor should use any documentation management provides of the work supporting its assertion about the systems' conformance in the agency's annual Financial Integrity Act report, as well as any work management may have done for FFMIA. Management's role, and the comprehensiveness of its determination as to whether the agency is in substantial compliance, are thus important in the audit process. For example, if management provides the auditor with a checklist detailing the functions the systems are able to perform, the auditor generally should select some significant functions from the checklist and determine whether the systems perform them. Overlap exists between testing for FFMIA compliance and testing internal controls. The GAO/PCIE FAM cites a number of techniques, such as observation, inspection, and walkthroughs, that the auditor can employ when performing this work. Further, to achieve maximum efficiency, these tests for FFMIA compliance generally should be done concurrently with other nonsampling control tests. The nature of FFMIA will always require a certain degree of judgment on the part of auditors and management.
OMB's revised implementation guidance provides examples for auditors and management to consider when assessing compliance with FFMIA. For example, the guidance states that an agency's systems are substantially compliant with FFMIA if they can (1) prepare financial statements and other required financial and budget reports using information generated by the financial management systems; (2) provide reliable and timely financial information for managing current operations; (3) account for their assets reliably, so that they are properly protected from loss, misappropriation, or destruction; and (4) do all of the above in a way that is consistent with the federal accounting standards and the SGL. Nonetheless, auditors for 10 agencies and financial management officials at 4 agencies told us that they encountered problems in interpreting the guidance, including OMB's definition of "substantial compliance." FFMIA states that agencies' financial management systems should "comply substantially" with the systems requirements, accounting standards, and SGL requirements but does not elaborate on the meaning of "comply substantially." Some in the CFO and audit communities believe that without further guidance, the interpretation and application of the guidance will likely remain inconsistent throughout the federal government. Auditors for seven agencies told us that, in their view, OMB's January 2001 revised guidance appeared to lower the threshold for determining compliance with FFMIA, providing more agencies an opportunity to become compliant with FFMIA. For example, the auditor for one agency believes that the revised guidance is too subjective, while the auditor for another agency told us that the guidance eliminated specific systems requirements. Further, officials at one agency told us that they believed the change in the guidance related to the systems security indicators lowered the threshold. In fact, according to the auditors and agency officials for this same agency, the change in OMB's revised guidance, which was retroactive for fiscal year 2000, was the reason for the agency's reported systems compliance in fiscal year 2000. In fiscal years 1999 and 1998, the auditor reported that the agency was not in substantial compliance with FFMIA because of reportable conditions related to IT security control weaknesses. OMB's previous guidance characterized IT security control weaknesses that were considered reportable conditions as indicators of noncompliance with FFMIA. In contrast, the revised guidance states that only material IT security control weaknesses should be considered as indicators of noncompliance with FFMIA. According to the auditors and agency officials, because the revised guidance no longer characterized IT security controls that were reportable conditions as indicators of instances of noncompliance, the auditors determined that the agency was compliant with FFMIA for fiscal year 2000. Moreover, although the compliance indicators in OMB's revised implementation guidance were meant only as examples, auditors for three of the agencies that ultimately reported the agencies' systems to be noncompliant with FFMIA, and two auditors that provided negative assurance, used the indicators in OMB's revised guidance as a prescriptive checklist for determining an agency's systems compliance.
These auditors compared the material weaknesses and reportable conditions identified through the financial statement audit process to the OMB compliance indicators; if no deficiencies in a specifically listed indicator had been identified as part of the financial statement audit work, no noncompliance with FFMIA was reported, and if a deficiency in a specific indicator was noted, noncompliance was reported. This is not how the OMB indicators should have been used: applying the indicators alone is too limiting and was not OMB's intention. Without a comprehensive approach, key systems functionalities may not be assessed and the extent of noncompliance will remain uncertain. Without testing the functionality of a financial management system, auditors cannot be assured that the agencies' systems are operating as designed and that the systems substantially comply with FFMIA. Bringing agency financial management systems into compliance with FFMIA requirements is a formidable challenge that requires sustained top management commitment, adequate funding, skilled financial management staff, and meaningful management information. Our Executive Guide: Creating Value Through World-class Financial Management identifies these factors, among others, as key success factors and practices associated with world-class financial management. Agency officials we interviewed repeatedly emphasized the need for top management commitment and adequate resources to effect the changes needed to upgrade or replace financial management systems. To enhance their capabilities for providing meaningful information to decisionmakers, leading organizations included in GAO's Executive Guide reengineered their business processes in conjunction with implementing new technology. The Executive Guide further points out that at world-class financial management organizations, top executives demonstrate their commitment by ensuring that the resources needed to effect the changes for improved financial management are available. However, 11 of the 19 agencies with reported noncompliant systems cited lack of funds as an obstacle to achieving compliance with FFMIA. Our interview results showed that agencies also need adequate human capital resources, meaning not just enough staff but also staff with the right skills for critical positions. Many of the officials we interviewed told us that having enough staff with the right skill mix was a problem for the agencies in achieving their FFMIA goals. Officials at 14 of the 19 agencies with noncompliant systems cited the lack of adequate human capital resources as an obstacle to achieving FFMIA compliance. It is crucial that the federal government have a qualified workforce with the right mix of skills to successfully implement financial systems. A key factor is having a well-qualified project manager to lead this effort. The Core Competencies for Project Managers Implementing Financial Systems in the Federal Government identifies competencies in three areas: financial management, human resources, and technical. Pursuit of these competencies will enable project managers to meet the challenge of today's changing environment and prepare for the future. Strategic human capital management is a pervasive challenge in the federal government. To highlight the urgency of this governmentwide challenge, in January 2001 we added strategic human capital management to our list of federal programs and operations identified as high risk.
As stated in the high-risk series report, human capital shortfalls are eroding and threatening the ability of many agencies to effectively, efficiently, and economically perform their missions. As a result, this area needs greater attention to ensure maximum government performance and accountability. Another key success factor for world-class financial management is meaningful management information. Financial information is meaningful when it is reliable, useful, and timely. However, as discussed earlier, most federal agencies lack the systems and processes required to produce meaningful financial information needed for management decision-making. To remedy their financial management systems problems, many agencies are implementing COTS software packages. In this regard, JFMIP tests vendor COTS packages and certifies that they meet current financial management system requirements for core financial management systems. Agencies that have implemented or are currently implementing COTS packages include the Departments of Agriculture, Education, Transportation, and Veterans Affairs; DOD components such as the Defense Finance and Accounting Service and the Military Sealift Command; HUD’s FHA; and AID. A key to successful implementation of COTS systems, according to leading finance organizations, is reengineering business processes to fit the new software applications, which are based on best practices. The Clinger-Cohen Act requires agency heads to modernize inefficient mission-related and administrative processes (as appropriate) before making a significant investment in IT systems to support them. Thus, an assessment of current processes should be completed before any decision is made about acquiring technology. As a result, federal agencies are beginning to consider the merits of information technology approaches that involve reengineering business processes in conjunction with implementing COTS software without significant modification. The CFO Act requires OMB to prepare and submit to the Congress a governmentwide 5-year financial management plan, including annual status updates. Among other requirements, the governmentwide plan is to describe strategies for improving financial management. To help compile the governmentwide 5-year plan, OMB uses the agency-specific financial management plans that the CFO Act also requires and that agencies prepare as part of their budget submissions. FFMIA requires agency management to prepare remediation plans, in consultation with OMB, that describe the corrective actions they plan to take to resolve their instances of noncompliance, the target dates for those actions, and the resources necessary to bring financial systems into substantial compliance with FFMIA requirements. Further, the recently issued President’s Management Agenda for improving financial management states that OMB will work with agencies to ensure that federal financial systems produce accurate and timely information to support operating, budget, and policy decisions. For our report on FFMIA compliance last year, we reviewed remediation plans agencies prepared to address problems identified in the fiscal year 1998 financial statement audits. We concluded that the majority of the plans lacked sufficient detail to be adequate tools for agency management and staff to use in resolving financial management problems. For this year’s report, we reviewed agencies’ fiscal year 1999 remediation plans. Overall, the plans improved slightly over those for fiscal year 1998.
While OMB has worked with many agencies to prepare or revise these plans, which helped improve them, many plans still lacked sufficient detail and descriptions of the resources needed for executing the corrective actions. Further, some of the corrective actions included in the remediation plans we reviewed did not fully address the problems they were intended to correct. As we reported last year, remediation plans need to be sufficiently detailed to provide a “road map” for agency management and staff to resolve financial management problems. The severity of problems facing agencies as they attempt to replace or overhaul old and outdated financial systems and resolve serious information security weaknesses, among other things, highlights the need for detailed remediation plans. Of the 21 agencies whose systems were reported to be noncompliant with FFMIA in fiscal year 1999, 16 prepared remediation plans. Two agencies—SSA and FEMA—did not submit remediation plans for fiscal year 1999 to OMB because agency management determined that their systems were in substantial compliance with FFMIA. While SSA and FEMA management acknowledged that the weaknesses identified by the auditors exist, they did not agree with the auditors that the weaknesses resulted in a lack of “substantial” compliance. However, SSA and FEMA have provided comments, including corrective actions, in response to the auditors’ recommendations. In addition, 3 of the 21 agencies—the Departments of Justice and State and GSA—did not prepare separate remediation plans to address reported fiscal year 1999 instances of noncompliance. The Department of Justice addressed instances of FFMIA noncompliance for both fiscal years 1999 and 2000 in its Financial Management Status Report and Five-Year Plan dated May 2001. Department of State officials decided not to issue a separate remediation plan for fiscal year 1999 but rather to focus on implementing actions in the department’s March 2000 plan and on updating its remediation plan to address the fiscal year 2000 instances of noncompliance. Lastly, GSA officials told us that management did not prepare a remediation plan for fiscal year 1999 because the agency’s systems were determined to be in compliance for fiscal year 2000, and the severity of the problems for fiscal year 1999 no longer warranted development of a plan. FFMIA provides that if the compliance determination made by the agency head differs from the auditors’ findings, the Director of OMB is to review the determinations and provide a report on the findings to the appropriate committees of the Congress. Further, although FFMIA does not require a remediation plan if an agency head determines the agency’s systems comply substantially, OMB Circular A-11 requires agencies to address systems weaknesses in their financial management improvement plans. We reviewed the 16 available remediation plans to determine whether (1) they included all the instances of noncompliance identified in the fiscal year 1999 financial statement audit reports; (2) the planned corrective actions were accompanied by detailed steps; (3) the corrective actions, if successfully implemented, could potentially resolve the problems; (4) they included information about resources needed; and (5) they provided target dates for completing the corrective actions. Figure 6 presents the results of our analysis.
As shown in figure 6, 14 of the agencies’ remediation plans included corrective actions that covered all of the reported instances of noncompliance identified as a result of the fiscal year 1999 financial statement audit. The remediation plans for two agencies, HUD and DOD, did not include corrective actions to cover all of the instances of FFMIA noncompliance reported. While HUD’s remediation plan covered virtually all of its instances of noncompliance, it did not fully address computer security weaknesses over its information systems. The corrective actions in DOD’s plan, referred to as its Financial Management Improvement Plan (FMIP), could not be specifically related to the reported instances of FFMIA noncompliance. Another limitation with a number of the remediation plans is that the corrective actions were broadly stated and did not include sufficient details describing how the actions are to be accomplished. As shown in figure 6, corrective actions in 11 of the 16 remediation plans fell into this category. An example of a plan with sufficient details describing corrective actions is HUD’s remediation plan. In its plan, HUD included specific actions for addressing FHA’s compliance with the SGL at the transaction level, which include completing the feeder system SGL financial transaction processes for 19 systems, validating extracts from existing feeder systems, and determining the appropriate SGL accounting treatment and data format. In contrast, one of the corrective actions in AID’s remediation plan is to develop cost allocation models with cost drivers to attribute costs to the agency’s goals. Based on our review of AID’s remediation plan, we found no information that describes, even in general terms, the cost drivers and how the cost allocation models would be developed. As we discuss later, when an agency’s corrective actions involve implementing or replacing financial management systems, it is important to have a detailed plan that includes adopting sound IT investment and control processes. While a substantial amount of professional judgment is associated with assessing the adequacy of these plans, we determined that the corrective actions in the remediation plans of 15 agencies, if successfully implemented, could potentially resolve the problems, as shown in figure 6. For DOD, we determined that the corrective actions described in the agency’s remediation plan probably would not resolve the problems. For example, we recently reported that while DOD’s fiscal year 2000 FMIP is a significant effort and an improvement over prior plans, it largely represents a compilation of the military services’ and DOD components’ stovepiped approaches and therefore is not an effective management tool that establishes a departmentwide strategic approach for developing an integrated DOD-wide financial management system. Such stovepiped approaches have been at the heart of previous DOD-wide reform initiatives that have produced some incremental improvements but have not resulted in the fundamental reform necessary to resolve these long-standing management challenges. As we recently testified, DOD’s financial management challenges must be addressed as part of a comprehensive, integrated, DOD-wide business process reform, including an enterprisewide systems architecture to guide and direct its financial management modernization investment.
If the hundreds of initiatives outlined in the plan are not implemented as part of an overall financial management architecture, DOD runs the risk that its system efforts will perpetuate a system environment that is duplicative, not interoperable, unnecessarily costly to maintain, and unable to optimize financial management performance and accountability. We are encouraged that the Secretary of Defense has stated that he intends to include financial management reform among his top priorities. Most recently, DOD has initiated a number of actions that hold promise for addressing its long-standing serious problems in this area. For example, DOD recently announced plans to (1) dedicate significant funding to this area; (2) establish a top-level steering committee that is to include leaders of its major components and Secretariat-level organizations; and (3) analyze ongoing and planned financial management systems initiatives across the Department to curtail high-risk efforts that will not lead to an integrated financial management structure. OMB’s guidance and FFMIA state that remediation plans are to include the resources and target dates necessary to achieve substantial compliance. As shown in figure 6, 10 of the 16 remediation plans we reviewed did not include a discussion of resources needed. Resource information is important for agencies and OMB to determine whether corrective actions can realistically be undertaken. Finally, as shown in figure 6, all 16 of the remediation plans included timeframes. This is an improvement over fiscal year 1998, when 14 of the 19 remediation plans included timeframes. The plans would be further enhanced by including intermediate target dates. Setting specific intermediate target dates helps keep agencies on track as they implement corrective actions. FFMIA, which was enacted 5 years ago, specifies that agencies have 3 years to bring their systems into compliance after a determination of noncompliance has been made. FFMIA also provides for extending the time needed to complete the planned actions past 3 years with the concurrence of OMB. As our earlier discussion of the long-term challenges facing DOD makes clear, 3 years will not be enough time for some agencies to address their remaining problems. Therefore, OMB’s continuing leadership and oversight of remediation efforts will be important. The importance of having a good remediation plan becomes more evident when corrective actions in remediation plans involve IT investments, such as implementing or replacing financial management systems or software. Agencies invest more than $40 billion in IT for about 26,000 information systems. Technology now affects virtually every aspect of the way the government operates, and IT investments are extremely important to the success of e-government in transforming the delivery of information and services. To ensure that IT dollars are directed toward prudent investments designed to achieve cost savings, increase productivity, and improve the timeliness and quality of service delivery, agencies need to apply the framework outlined in the Clinger-Cohen Act of 1996 and its implementing guidance. The Clinger-Cohen Act requires agencies to use a capital planning and investment control process to compare and prioritize all IT projects using explicit quantitative and qualitative decision criteria. Moreover, the Clinger-Cohen Act requires agencies to adopt an IT architecture, a well-defined and enforced blueprint for operational and technological change.
An enterprise architecture provides an agency with a clear and comprehensive picture of an entity and includes a capital investment road map for transitioning from the current environment to the target, or planned future, environment. In concert with an enterprise architecture, the Clinger-Cohen Act requires agencies to have disciplined approaches for developing or acquiring software, including an effective evaluation process for assuring that contractor-developed software satisfies the defined requirements. OMB officials told us that they are working with agencies regarding the application of the framework outlined in the Clinger-Cohen Act and that OMB’s review of agencies’ IT capital asset planning processes is linked to its review of agencies’ remediation plans. As discussed further in the next section, OMB’s continuing leadership is critical to the efforts across government to improve financial management systems. Many agencies are planning or are in the process of implementing new core financial management systems. Implementing or overhauling financial management systems can understandably take time, and the systems may not be operational for several years. For example, as previously discussed, VA is planning to replace its “patchwork” of computer systems and correct its FMFIA material systems weaknesses by implementing a single commercial financial management and logistics system. This system implementation effort, called the Core Financial and Logistics System, is targeted to be completed by the end of 2003. Similarly, the Centers for Medicare and Medicaid Services have efforts underway to implement the Integrated General Ledger Accounting System and expect to complete implementation in fiscal year 2007. NASA has found implementation of new financial management systems to be a challenge. In describing its need for an integrated financial management system, NASA has stated that its financial management environment consisted of decentralized, nonintegrated systems, with policies, procedures, and practices unique to its field centers. The Integrated Financial Management System (IFMS) is expected to correct these problems. NASA is undertaking its third attempt to implement an integrated financial management system. In a prior attempt, a former contractor working on the IFMS had difficulties upgrading its software to support new technologies and to meet all federal requirements. This contract was eventually terminated, and the program to implement the system has been changed so that implementation of the new system is broken into individual software modules. NASA CFO officials told us that NASA is now moving to a COTS package for its core financial system. NASA expects to implement the core financial system at its centers in fiscal year 2003. Agriculture encountered problems in implementing a COTS package to provide a departmentwide accounting system due to inadequate project planning and inexperienced management coupled with insufficient business process reengineering. According to PricewaterhouseCoopers, Agriculture’s implementation of the COTS package was hampered by insufficient strategic planning. For example, Agriculture did not have a single, strong strategic plan to guide the implementation of its Foundation Financial Information System (FFIS). The strategic implementation plan should have been developed in concert with the Department’s component agencies and communicated throughout the Department.
Moreover, Agriculture did not do sufficient analysis of its business processes before attempting to implement the FFIS at the Forest Service. As a result, significant effort was expended automating existing and complex business processes, some of which needed to be reengineered. Agriculture hired experienced financial systems program management staff in the fourth quarter of 1998. This staff reoriented the FFIS project and has implemented the FFIS in six major Agriculture agencies since the beginning of fiscal year 2000. Agriculture expects to implement the FFIS at another eight agencies on October 1, 2001. According to Agriculture officials, the keys to progress for this project were knowledgeable staff, management support, and resources. The advisory role FFMIA established for OMB with respect to agency remediation plans is important for addressing the types of problems we noted in the remediation plans we reviewed. Therefore, in a prior report we recommended that OMB work with the agencies to ensure that all remediation plans are prepared and submitted in a timely manner. We also recommended that OMB review agencies’ plans for (1) detailed corrective actions that fully address reported problems, (2) inclusion of resource requirements, and (3) specific time frames needed to implement and resolve problems. OMB officials have told us that OMB is moving toward full implementation of its strategy to link financial management systems improvements detailed in FFMIA remediation plans to key agency plans. For example, OMB is planning to link its review of the remediation plans with agency 5-year financial management plans, IT plans, and capital planning and investment control processes. According to OMB officials, OMB is integrating its review of FFMIA remediation plans with its capital planning and investment control process. By incorporating FFMIA remediation reviews under this framework, OMB will be better able to analyze, track, and evaluate FFMIA improvement efforts as part of the budget process. OMB officials have told us that they met with each of the CFO Act agencies to introduce OMB’s long-term strategy for incorporating FFMIA remediation plans into the agencies’ capital asset plans. In scheduling these meetings, OMB requires multiple agency officials to attend, such as the Chief Financial Officer, the Chief Information Officer, the Chief Procurement official, and, in some instances, the Budget Officer. In addition, changes were made to OMB Circular A-11 in July 2000 to provide guidance on integrating financial management systems improvements in FFMIA remediation plans with agency information on IT capital projects. Further, GAO and Treasury participate in annual meetings held by OMB with the CFOs and IGs of each CFO Act agency that did not have an unqualified audit opinion or that had serious systems problems. At these meetings, financial management systems initiatives are discussed, and OMB stresses that the end game of the CFO Act is having systems that produce reliable, useful, and timely information on an ongoing basis. A number of agency CFO officials we interviewed, though, generally seemed unsure about OMB’s strategy for integrating FFMIA remediation plans with agencies’ capital planning and investment control processes. Of the 16 agencies preparing remediation plans for fiscal year 1999, officials from 9 were certain that OMB had met with or contacted officials from their agencies to discuss its strategy related to remediation plans, while 7 were unaware of the meetings.
Officials from 5 of the 16 agencies did say that they implemented OMB’s strategy in preparing their fiscal year 1999 remediation plans, which were due to OMB in December 2000, while 7 told us they had not done so, and officials from 4 agencies did not know if the strategy had been implemented. This lack of awareness of OMB’s strategy can result in agency officials devoting insufficient attention to the critical task of preparing and submitting remediation plans. Without appropriate attention, agencies have a greater risk of failure when attempting to implement the plans, and the serious weaknesses in their financial management systems will remain. With the new President’s Management Agenda, significant attention is expected to be devoted to these issues. OMB’s continued leadership will be important to foster effective results. On August 13, 2001, the JFMIP principals—the Comptroller General, the Secretary of the Treasury, the Director of OMB, and the Director of OPM—met to discuss federal financial management reform issues. Commitment and cooperation among the highest levels of leadership in the federal financial management community can provide the impetus for accelerating financial management reform in the federal government. This group is developing an agenda to address the long-standing challenges discussed in this report. We anticipate that a number of recommendations and action items will come from the initiatives contemplated by this group related to issues such as addressing impediments to an opinion on the U.S. government’s consolidated financial statements, defining success in financial management, and modernizing financial management systems. Long-standing problems with agencies’ financial systems make it difficult for the agencies to produce reliable, useful, and timely financial information and hold managers accountable. Federal managers need this important information for developing and executing budgets, managing government programs based on results, and making difficult policy choices. The extraordinary efforts that many agencies go through to produce auditable financial statements are not sustainable in the long term. These efforts use significant resources that could be devoted to other important financial management work. For these reasons, the widespread systems problems facing the federal government need top management attention. Sustained management commitment at the highest levels of government is one of the most important factors in prompting attention and action on a widespread problem. In addition to top management commitment, additional refinements to OMB’s FFMIA implementation guidance are needed to assure consistent and effective implementation of FFMIA. OMB guidance should address the differing interpretations over (1) the meaning of substantial compliance, (2) the nature and extent of audit work necessary to assess compliance with FFMIA, and (3) whether to provide an opinion on agencies’ systems’ FFMIA compliance. The size and complexity of many federal agencies and the discipline needed to overhaul or replace their financial management systems present a significant challenge—not simply a challenge to overcome a technical glitch, but a demanding management challenge that requires attention from the highest levels of government along with sufficient human capital resources to effect lasting change.
We recognize that it will take time, investment, and sustained emphasis on correcting deficiencies to improve federal financial management systems to the level required by FFMIA and to effectively manage government funds. The significance of the issues facing agencies, now and in the future, emphasizes the need for detailed remediation plans. As envisioned by the act, these remediation plans would help agencies establish seamless systems and processes to routinely generate reliable, useful, and timely information that would improve agencies’ accountability. Our analysis has shown that many agencies’ remediation plans lack key elements, which could preclude the establishment of such seamless systems. Therefore, we reaffirm the recommendation we made in our prior report that OMB continue to work with agencies to ensure that the remediation plans include all required elements; we are not making new recommendations at this time related to remediation plans. Improvements in federal financial management systems are in some cases a long-term goal, but with sustained emphasis, the goals of the CFO Act and FFMIA can be achieved. As mentioned earlier, the heads of GAO, OMB, Treasury, and OPM recently met to discuss governmentwide financial management reform issues. The leadership commitment and spirit of cooperation among these top officials can provide the needed impetus to accelerate financial management reform in the federal government. Given the ongoing efforts of the JFMIP principals to develop an action plan, we are making no specific recommendations at this time regarding OMB’s overall financial management systems strategy, other than to reiterate the importance of OMB’s continuing leadership in improving financial management systems. We do recommend that the Director of the Office of Management and Budget (1) revise OMB’s current FFMIA audit guidance to require agency auditors to provide a statement of positive assurance when reporting an agency’s systems to be in substantial compliance with FFMIA, which entails a more thorough examination of agencies’ systems and thus amplifies financial managers’ awareness of the importance of an effective and efficient financial management system, and (2) develop additional guidance, in accordance with the FAM, to specify the expected procedures that auditors should perform when assessing FFMIA compliance, clearly outlining both the minimum scope of work and the procedures for auditors to perform in determining whether management has reliable, timely, and useful financial information for managing day-to-day operations. We are also recommending that OMB work with the CFOs, the IGs, and GAO to (1) explore further clarification of the definition of “substantial compliance” to assist auditors and agency management in consistently applying and evaluating an agency’s systems’ FFMIA compliance and (2) reiterate that the indicators of compliance in the January 4, 2001, FFMIA implementation guidance are not meant to be all inclusive. We further recommend that, because of the importance of cost accounting to managers for measuring the results of program performance, OMB request that, as part of the FFMIA review, auditors pay special attention to agencies’ ability to meet the requirements of the Managerial Cost Accounting Concepts and Standards and report on whether agencies’ systems comply with those standards.
In written comments (reprinted in appendix V) on a draft of this report, OMB agreed with our overall observations and conclusions concerning the financial management systems weaknesses faced by the federal government and the need for sustained management commitment at the highest levels in order to overcome them. OMB stated that good financial management systems enable managers to have the financial and performance information necessary to measure and effect current day-to-day operations while fully meeting federal reporting requirements. OMB also stated, as discussed in our report, that improving financial management is one of five governmentwide initiatives included in the President’s Management Agenda. OMB stated that it has begun to reexamine its fundamental approach to systems development and implementation in the federal government, including its FFMIA implementation guidance, and believes that more emphasis should be placed on system performance and results. OMB welcomed our participation in this effort, and we look forward to working with them. As we discuss in our report, management reform legislation including the CFO Act, GPRA, and FFMIA, if fully and effectively implemented, will collectively help achieve strong financial management that provides reliable, timely, and useful information for decisionmakers. In its comments, OMB expressed concern about how our report characterized the level of testing currently contemplated by OMB Bulletin No. 01-02. This was not our intent, and we have clarified our report. OMB stated that the required tests of FFMIA in OMB Bulletin No. 01-02, coupled with the requirement to test internal controls over significant systems, result in more than incidental testing. Our point is that under generally accepted government auditing standards, auditors need to perform only limited incidental testing to provide negative assurance. Our concern is that with negative assurance, the auditor is not saying that the systems were determined to be substantially compliant, but only that the work performed did not identify instances of noncompliance. We view the law as requiring a definitive statement as to whether the systems substantially complied. It is important that readers of the audit report understand this distinction, or they may have the false impression that the auditor found the systems to be substantially compliant. We will continue to work with OMB on this matter and recognize that we have differing views. We agree with OMB that reorienting remediation plans toward measurable performance would force a more integrated enterprisewide approach that considers both financial and nonfinancial systems that support agency missions. This provides a needed perspective and helps agencies to adopt an information technology architecture that includes a well-defined blueprint for operational and technological change. The Clinger-Cohen Act provides a foundation that agencies can follow as they implement these important systems initiatives. Remediation plans with sufficient detail that are linked to and support an agency’s strategic business plan provide a “road map” for management and staff to resolve financial management problems and hold managers accountable for needed improvements. We reiterate the importance of OMB’s leadership as it moves toward bringing about needed changes in the federal financial management environment.
We also provided excerpts from a draft of this report to cognizant officials at the 24 CFO Act agencies to obtain oral comments. Officials from DOT, AID, and Interior expressed concern that the report did not fully recognize their efforts to address the systems weaknesses discussed in various parts of our report. In these instances, the corrective actions occurred after fiscal year 2000 and thus fall outside the timeframe covered by our report. Also, it was not our objective to independently assess specific management actions at the 24 CFO Act agencies. However, in our report we have acknowledged actions that have been taken throughout government and are underway to address systems weaknesses. Interior officials suggested we acknowledge that the weak information security controls cited as an example in the draft report have not compromised financial or personnel data. In this regard, we previously reported that Interior’s National Business Center had not fully established a comprehensive program to routinely monitor access to its computer facilities and data and to identify and investigate unusual or suspicious access patterns that could indicate unauthorized access. EPA officials were concerned that the example in the report related to EPA's backlog of unliquidated obligations was (1) not reported by EPA’s IG as an instance of noncompliance with FFMIA and (2) not an example of the lack of accurate or timely recording of financial information, as portrayed in the report. Regarding the first concern, as we state in the report, we included all weaknesses relevant to FFMIA identified by the auditors because such problems must be resolved in order for the agencies' systems to have the data to generate the reliable, useful, and timely information needed for decision-making. Regarding the second concern, EPA officials stated that the example in our report illustrates the lack of timely processing of deobligation actions and is not an example of the lack of timely recording of a financial transaction. According to EPA officials, a deobligation can take place only after an authorizing official closes the obligating document. EPA officials stated that the closeout usually occurs after audits and other administrative requirements are satisfied. As a result, a large backlog of grants has been awaiting closeout. In our view, this example illustrates that the lack of timely deobligations, which trigger the final transactions, can result in misleading financial information, both at year-end and throughout the year. NASA officials disagreed with our questioning of the agency’s compliance with FFMIA because of issues related to cost accounting and the lack of an integrated financial management system. NASA stated that contract cost reports provided to the agency by its contractors, combined with cost finding techniques that are permitted under the Managerial Cost Accounting Concepts and Standards, allow NASA to capture all costs related to the multibillion dollar international space station program. However, as we highlight and discuss in more detail in our August 31, 2001, report, NASA's systems do not track the cost of individual space station subsystems or elements. According to agency officials, NASA manages and tracks space station costs by contract and does not need to know the cost of individual subsystems or elements to effectively manage the program.
However, our work in this area found that NASA assigns potential and probable future costs in order to estimate the impact of canceling, deferring, or adding space station content. These cost estimates often assign costs to specific space station subsystems. However, because NASA does not attempt to track costs by element or subsystem, the agency does not know the actual cost of completed space station components and is not able to reexamine its cost estimates for validity once costs have been realized. Further, in the event of a cost overrun, it would be very difficult to identify which component prompted the overrun, thus hampering management's ability to make informed decisions. While cost finding techniques are permitted under the Managerial Cost Accounting Concepts and Standards when costs are clearly assigned to outputs, NASA appears not to have defined the outputs—in this case the space station components—clearly enough to permit recognition and measurement of costs appropriate for intended purposes, such as holding managers accountable for differences between budgeted and actual costs. Therefore, it remains unclear how NASA can conclude it is in compliance with cost accounting standards. NASA officials also stated that the fact that NASA does not have an integrated financial management system does not preclude substantial compliance with FFMIA. Specifically, they state that NASA's systems, taken as a whole, meet the objectives of FFMIA and that the supplemental, compensating procedures and practices employed by NASA substantially and materially achieve federal requirements. Nonetheless, NASA reports its financial management systems as a nonconforming significant area of management concern because the systems are not fully automated and not fully integrated. NASA’s labor-intensive reconciliation and compilation processes stem from the fact that it has nonstandard systems that are not integrated and were not designed to include the SGL accounts. In our view, systems that are prone to errors and do not adhere to OMB's requirements for an integrated financial management system, as outlined in OMB Circular A-127, preclude compliance with the goals and requirements of FFMIA. To illustrate the challenges NASA faces in trying to provide relatively straightforward information: as we recently reported, for over 5 months NASA has been unable to provide us with detailed transaction-based support for amounts obligated against the space station and shuttle because it maintains a separate accounting system at each of its nine field centers and headquarters and cannot readily pull the information together. NASA also took exception to the way the draft characterizes the centers' financial/accounting policies, procedures, and practices as unique. NASA stated that its Financial Management Manual prescribes standard financial policies, procedures, and practices and that NASA ensures the centers comply with those through various quality assessment processes. NASA also added that the field centers have lower level policies, practices, and procedures unique to each center based on a center's mission and organization structure. We agree that NASA's Financial Management Manual prescribes standard financial policies, procedures, and practices, but our intent in the report is to convey that each center has a unique operating environment. This has permitted nonstandardized data formats and nonintegrated systems, thus preventing ready access to reliable, useful, and timely financial information.
Several agencies also provided technical comments that we incorporated where appropriate. We are sending copies of this report to the Chairman and Ranking Minority Member, Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs, and to the Chairman and Ranking Minority Member, Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations, House Committee on Government Reform. We are also sending copies to the Director of the Office of Management and Budget; the Secretary of the Treasury; the heads of the 24 CFO Act agencies; and agency CFOs and IGs. Copies will also be made available to others upon request. This report was prepared under the direction of Sally E. Thompson, Director, Financial Management and Assurance, who may be reached at (202) 512-9450 or by e-mail at [email protected] if you have any questions. Staff contacts and other key contributors to this report are listed in appendix VI. In addition to those named above, Lee Carroll, Cary Chappell, Richard Harada, Rosa R. Harris, Lisa Knight, Steve Lowrey, Meg Mills, Karlin Richardson, and Sandra S. Silzer made key contributions to this report.
Effective management of the government's day-to-day operations has been hampered by a lack of necessary data. The Chief Financial Officers (CFO) Act of 1990 calls for the modernization of federal financial management systems, including the systematic measurement of performance; the development of cost information; and the integration of program, budget, and financial information. The Federal Financial Management Improvement Act of 1996 (FFMIA) encourages agencies to have systems that generate timely, accurate, and useful information with which to make informed decisions and to ensure accountability on an ongoing basis. Auditors for 19 of the 24 CFO Act agencies reported that their agencies' financial management systems did not comply substantially with FFMIA requirements, compared to 21 agencies reported as not being substantially compliant for fiscal year 1999. The auditors for five CFO Act agencies reported no instances in which the agencies' systems did not substantially comply with FFMIA. These auditors, however, did not definitively state whether the agencies' financial management systems substantially complied with FFMIA. FFMIA requires agencies to prepare remediation plans to overcome financial management systems problems. These plans have improved over the fiscal year 1998 plans; however, further enhancements are needed.
BPCA was enacted on January 4, 2002, to encourage drug sponsors to conduct pediatric drug studies. BPCA allows FDA to grant drug sponsors pediatric exclusivity—6 months of additional market exclusivity—in exchange for conducting and reporting on pediatric drug studies. BPCA also provides mechanisms for pediatric drug studies that drug sponsors decline to conduct. The process for initiating pediatric drug studies under BPCA formally begins when FDA issues a written request to a drug sponsor to conduct pediatric drug studies for a particular drug. When a drug sponsor accepts the written request and completes the pediatric drug studies, it submits to FDA reports describing the studies and the study results. BPCA specifies that FDA generally has 90 days to review the study reports to determine whether the pediatric drug studies met the conditions outlined in the written request. If FDA determines that the pediatric drug studies conducted by the drug sponsor were responsive to the written request, it will grant the drug pediatric exclusivity regardless of the study findings. Figure 1 illustrates the process under BPCA. Because FDA cannot extend pediatric exclusivity for any drugs for which the drug sponsors declined to conduct the requested pediatric drug studies, BPCA includes two provisions to further the study of drugs when drug sponsors decline written requests. First, when drug sponsors decline written requests for studies of on-patent drugs, BPCA provides for FDA to refer the study of those drugs to FNIH for funding. FNIH, a nonprofit corporation independent of NIH, supports the mission of NIH and advances research by linking private sector donors and partners to NIH programs. FNIH and NIH collaborate to fund certain projects. As of December 2005, FNIH had raised $4.13 million to fund pediatric drug studies under BPCA. Second, to further the study of off-patent drugs, NIH—in consultation with FDA and experts in pediatric research—develops a list of drugs, including off-patent drugs, that the agency believes need to be studied in children. NIH lists these drugs annually in the Federal Register. FDA may issue written requests for those drugs on the list that it determines to be most in need of study. If the drug sponsor declines or fails to respond to the written request, NIH can contract for, and fund, the pediatric drug studies. Drug sponsors generally decline written requests for off-patent drugs because the financial incentives are considerably limited. Pediatric drug studies often reveal new information about the safety or effectiveness of a drug, which could indicate the need for a change to its labeling. Generally, the labeling includes important information for health care providers, such as proper uses of the drug, proper dosing, and possible adverse events that could result from taking the drug. FDA may determine that the drug is not approved for use by children, which would then be reflected in any labeling changes. The agency refers to its review to determine the need for labeling changes as its scientific review. BPCA specifies that study results submitted as a supplemental new drug application—which, according to FDA officials, most are—are subject to FDA’s general performance goals for a scientific review, which in this case is 180 days. FDA’s process for reviewing study results submitted under BPCA for consideration of labeling changes is not unique to BPCA.
FDA’s action can include approving the application, determining that the application is approvable, or determining that the application is not approvable. A determination that an application is approvable may require that drug sponsors conduct additional analyses. Each time FDA takes action on the application, a review cycle is ended. Most of the on-patent drugs for which FDA requested pediatric drug studies under BPCA were being studied, but no studies have resulted when the requests were declined by drug sponsors. From January 2002 through December 2005, FDA issued 214 written requests for on-patent drugs to be studied under BPCA, and drug sponsors agreed to conduct pediatric drug studies for 173 (81 percent) of those. The remaining 41 written requests were declined. Of these 41, FDA referred 9 written requests to FNIH for funding, and FNIH had not funded any of those studies as of December 2005. Drug sponsors completed pediatric drug studies for 59 of the 173 accepted written requests—studies for the remaining 114 written requests were ongoing—and FDA made pediatric exclusivity determinations for 55 of those through December 2005. Of those 55 written requests, 52 (95 percent) resulted in FDA granting pediatric exclusivity. Figure 2 shows the status of written requests issued under BPCA for the study of on-patent drugs, from January 2002 through December 2005. Drugs were studied under BPCA for their safety and effectiveness in treating children for a wide range of diseases, including some that are common—such as asthma and allergies—and some that are serious or life threatening in children—such as cancer, HIV, and hypertension. We found that the drugs studied under BPCA represented more than 17 broad categories of disease. The category that had the most drugs studied under BPCA was cancer, with 28 drugs. In addition, there were 26 drugs studied for neurological and psychiatric disorders, 19 for endocrine and metabolic disorders, 18 related to cardiovascular disease—including drugs related to hypertension—and 17 related to viral infections. Analyses of two national databases show that about half of the 10 most frequently prescribed drugs for children were studied under BPCA. Through December 2005, drug sponsors declined written requests issued under BPCA for 41 on-patent drugs. FDA referred 9 of these 41 written requests (22 percent) to FNIH for funding, but as of December 2005, FNIH had not funded the study of any of these drugs. NIH has estimated that the cost of studying these 9 drugs would exceed $43 million, but FNIH had raised only $4.13 million for pediatric drug studies under BPCA. Few off-patent drugs identified by NIH as in need of study for pediatric use have been studied. By 2005, NIH had identified 40 off-patent drugs that it believed should be studied for pediatric use. Through 2005, FDA issued written requests for 16 of these drugs. All but 1 of these written requests were declined by drug sponsors. NIH funded pediatric drug studies for 7 of the remaining 15 declined written requests through December 2005. NIH provided several reasons why it has not pursued the study of some off-patent drugs that drug sponsors declined to study. Concerns about the incidence of the disease that the drugs were developed to treat, the feasibility of study design, drug safety, and changes in the drugs’ patent status have caused the agency to reconsider the merit of studying some of the drugs it identified as important for study in children.
For example, in one case NIH issued a request for proposals to study a drug but received no responses. In other cases, NIH is awaiting consultation with pediatric experts to determine the potential for study. Further, NIH has not received appropriations specifically for funding pediatric drug studies under BPCA. NIH anticipates spending an estimated $52.5 million for pediatric drug studies associated with 7 written requests issued by FDA from January 2002 through December 2005. Most drugs that have been granted pediatric exclusivity under BPCA—about 87 percent—have had labeling changes as a result of the pediatric drug studies conducted under BPCA. Pediatric drug studies conducted under BPCA showed that children may have been exposed to ineffective drugs, ineffective dosing, overdosing, or side effects that were previously unknown. However, the process for reviewing study results and completing labeling changes was sometimes lengthy, particularly when FDA required additional information from drug sponsors to support the changes. Of the 52 drugs studied and granted pediatric exclusivity under BPCA from January 2002 through December 2005, 45 (about 87 percent) had labeling changes as a result of the pediatric drug studies. In addition, 3 other drugs had labeling changes prior to FDA making a decision on granting pediatric exclusivity. FDA officials said that the pediatric drug studies conducted up to that time provided important safety information that should be reflected in the labeling without waiting until the full study results were submitted or pediatric exclusivity determined. Pediatric drug studies conducted under BPCA have shown that the way that some drugs were being administered to children potentially exposed them to an ineffective therapy, ineffective dosing, overdosing, or previously unknown side effects—including some that affect growth and development. The labeling for these drugs was changed to reflect these study results. For example, studies of the drug sumatriptan, which is used to treat migraines, showed that there was no benefit derived from this drug when it was used in children. There were also certain serious adverse events associated with its use in children, such as vision loss and stroke, so the labeling was changed to reflect that the drug is not recommended for children under 18 years old. Other drugs have had labeling changes indicating that the drugs may be used safely and effectively by children in certain dosages or forms. Typically, this resulted in the drug labeling being changed to indicate that the drug was approved for use by children younger than those for whom it had previously been approved. In other cases, the changes reflected a new formulation of a drug, such as a syrup that was developed for pediatric use, or new directions for preparing the drug for pediatric use that were identified in the pediatric drug studies conducted under BPCA. Although FDA generally completed its first scientific review of study results—including consideration of labeling changes—within its 180-day goal, the process for completing the review, including obtaining sufficient information to support and approve labeling changes, sometimes took longer. For the 45 drugs granted pediatric exclusivity that had labeling changes, it took an average of almost 9 months after study results were first submitted to FDA for the sponsor to submit, and the agency to review, all of the information required and for labeling changes to be approved.
For 13 drugs (about 29 percent), FDA completed this scientific review process and approved labeling changes within 180 days. It took from 181 to 187 days for the scientific review process to be completed and labeling changes to be approved for 14 drugs (about 31 percent). For the remaining 18 drugs (about 40 percent), FDA took from 238 to 1,055 days to complete the scientific review process and approve labeling changes. For 7 of those drugs, it took more than a year to complete the scientific review process and approve labeling changes. For the 18 drugs that took 238 days or more, FDA determined that it needed additional information from the drug sponsors in order to be able to approve the drugs for pediatric use. This often required that the drug sponsor conduct additional analyses or pediatric drug studies. FDA officials said they could not approve any changes to drug labeling until the drug sponsor provided this information. Drug sponsors sometimes took as long as 1 year to gather the additional necessary data and respond to FDA’s request. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions that you or other members of the Subcommittee may have. For further information regarding this testimony, please contact Marcia Crosse at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Thomas Conahan, Assistant Director; Carolyn Feis Korman; and Cathleen Hamann made key contributions to this statement.
About two-thirds of drugs that are prescribed for children have not been studied and labeled for pediatric use, placing children at risk of being exposed to ineffective treatment or incorrect dosing. The Best Pharmaceuticals for Children Act (BPCA), enacted in 2002, encourages the manufacturers, or sponsors, of drugs that still have marketing exclusivity--that is, are on-patent--to conduct pediatric drug studies, as requested by the Food and Drug Administration (FDA). If they do so, FDA may extend for 6 months the period during which no equivalent generic drugs can be marketed. This is referred to as pediatric exclusivity. BPCA also provides for the study of off-patent drugs. GAO was asked to testify on the study and labeling of drugs for pediatric use under BPCA. This testimony is based on Pediatric Drug Research: Studies Conducted under Best Pharmaceuticals for Children Act, GAO-07-557 (Mar. 22, 2007). GAO assessed (1) the extent to which pediatric drug studies were being conducted under BPCA for on-patent drugs, (2) the extent to which pediatric drug studies were being conducted under BPCA for off-patent drugs, and (3) the impact of BPCA on the labeling of drugs for pediatric use and the process by which the labeling was changed. GAO examined data about the drugs for which FDA requested studies under BPCA from 2002 through 2005 and interviewed relevant federal officials. Drug sponsors have initiated pediatric drug studies for most of the on-patent drugs for which FDA has requested such studies under BPCA, but no drugs were studied when sponsors declined these requests. Sponsors agreed to 173 of the 214 written requests for pediatric studies of on-patent drugs. In cases where drug sponsors decline to study the drugs, BPCA provides for FDA to refer the study of these drugs to the Foundation for the National Institutes of Health (FNIH), a nonprofit corporation. FNIH had not funded studies for any of the nine drugs that FDA referred as of December 2005. Few of the off-patent drugs identified by the National Institutes of Health (NIH) as needing study for pediatric use have been studied. BPCA provides for NIH to fund studies when drug sponsors decline written requests for off-patent drugs. While 40 such off-patent drugs were identified by 2005, FDA had issued written requests for 16. One written request was accepted by the drug sponsor. Of the remaining 15, NIH funded studies for 7 through December 2005. Most drugs granted pediatric exclusivity under BPCA (about 87 percent) had labeling changes--often because the pediatric drug studies found that children may have been exposed to ineffective drugs, ineffective dosing, overdosing, or previously unknown side effects. However, the process for approving labeling changes was often lengthy. For the 18 drugs that required labeling changes (about 40 percent), it took from 238 to 1,055 days for information to be reviewed and labeling changes to be approved.
In 1938, Congress established a program under the Wagner-O’Day Act that created employment opportunities for the blind. People employed under the program manufactured and sold certain products, such as brooms and mops, to the federal government. In 1971, Congress expanded the program under the Javits-Wagner-O’Day Act to employ people with other severe disabilities and provide services (in addition to products) to federal customers. Today, the AbilityOne program provides more services than products. As of September 30, 2012, the program’s list of projects (known as the Procurement List) included 4,639 projects—65 percent of which were services and 35 percent of which were products. Services include janitorial, landscaping, and document destruction services as well as staffing call centers and base commissaries. Products include office and cleaning supplies, military apparel, and bedspreads. Federal agencies that need the specific products and services on the Procurement List are generally required to purchase them through the program. Unlike contracts that are reserved exclusively for small businesses—which generally must be competed among qualified small businesses—contracts for projects on the Procurement List are not competed within the program. Once projects are included on this list, they can remain there indefinitely and continue to be provided by the initially assigned affiliate. The AbilityOne Program comprises three types of entities: (1) the AbilityOne Commission, (2) the CNAs, and (3) the affiliates. Figure 1 shows the program’s organizational structure and how each of these entities is funded. As of the end of fiscal year 2012, the AbilityOne Commission consisted of 15 presidentially appointed members and 27 full-time staff. Its responsibilities include (1) establishing rules, regulations, and policies to assure the effective implementation of the program; (2) adding new projects to the Procurement List, after determining whether they can be suitably provided by people who are blind or have severe disabilities; and (3) setting prices for these projects that reflect the market (fair market prices) and appropriately revising them over time. With regard to the CNAs, the Commission has the authority to (1) authorize and de-authorize one or more CNAs to help administer the program, (2) set the maximum fee ceiling the CNAs can charge their affiliates, and (3) provide guidance and technical assistance to the CNAs to ensure the successful implementation of the program. The Commission is funded through congressional appropriations, which in fiscal years 2011 and 2012 were almost $5.4 million each year. The AbilityOne Commission designated two CNAs—NIB and NISH—to help administer the program. The Commission designated NIB in 1938; in calendar year 2011, NIB had 161 employees, and as of the end of fiscal year 2012, it worked with 70 agencies affiliated with the program that employ people who are blind. The Commission designated NISH in 1974; at the end of calendar year 2011, NISH had 352 employees, and as of the end of fiscal year 2012, it worked with 528 agencies affiliated with the program that employ people with severe disabilities. The CNAs are funded almost entirely through fees they charge their affiliates as a percentage of the revenues the affiliates earn from federal customers on AbilityOne contracts. The affiliated agencies that provide AbilityOne projects to federal customers can be private nonprofit agencies or state-run nonprofit agencies.
Some affiliates are part of well-known nonprofit agencies, such as Goodwill Industries or Easter Seal agencies, and others are lesser-known affiliates. Moreover, some affiliates rely exclusively or mostly on AbilityOne sales, whereas others have a substantial amount of sales outside of the AbilityOne Program. Regardless of how much business an affiliate conducts through the AbilityOne Program, the program requires that at least 75 percent of the total direct labor hours it uses to provide all products and services, including those outside of the AbilityOne Program, be carried out by people who are blind (in the case of NIB) or have severe disabilities or blindness (in the case of NISH). Because the CNAs are independent nonprofit agencies, the Commission has limited authority to oversee and control them, even though they manage much of the program’s day-to-day operations. Even though the Commission has ultimate responsibility for program management and oversight, because of the program’s unique public-private structure it cannot control how the CNAs (1) spend their funds, (2) set and manage their performance goals, or (3) set and implement governance policies and other internal controls. The Commission has limited influence over how the CNAs spend their funds because the CNAs, as independent nonprofit entities, have their own boards of directors that determine how much the CNAs will spend on each item in their budgets. However, the Commission can influence the CNAs’ overall budgets by (1) reviewing CNA annual business plans and (2) limiting the maximum amount of revenue the CNAs can collect from their affiliates to fund their operations. Commission reviews of CNA business plans consist of examining the plans to ensure that they are aligned with the Commission’s core goals and asking clarifying questions or requesting changes. The Commission limits CNA revenues by setting the maximum fee amount the CNAs can charge their affiliates based on revenues from their AbilityOne contracts. In fiscal year 2012, NISH spent $78 million and NIB spent $32 million on operations. The major expenses of each are depicted in figure 2, and all expenses are provided in appendix I. Because the CNAs are independent nonprofit agencies, the Commission’s influence over their budgets does not and cannot extend to (1) controlling CNA cost areas, such as employee salaries and benefits or lobbying costs; (2) establishing a policy on the appropriate level of CNA reserves; and (3) ensuring that the CNAs provide sufficient funding to support key program initiatives designed to promote employment opportunities for people with severe disabilities. Compensation and benefits. According to the Commission, it has no direct control over the amount that CNAs pay their executives and other employees, an important driver of CNA expenditures. In November 2004, the Commission proposed to exert more control through regulations that included, among other things, standards regarding the reasonableness of executive and other employee compensation at the CNAs. The Commission eventually withdrew the entire regulatory proposal, citing the number and nature of issues raised by commenters. Federal laws limit the amount of federal funds that can be used to pay the salaries of certain federal agency contractors and nonprofit agency executives receiving federal grants to the level II federal senior executive service (SES) salary, which in fiscal year 2012 was $179,700, the maximum SES pay. CNA executive salaries, however, are not limited in
this way because although the fees the affiliates pay the CNAs originate with federal customers, once they are remitted to the affiliates they are no longer federal funds. SES pay ranged from $119,554 to $179,700 in fiscal year 2012. Our review of the financial information submitted by NISH and NIB on their 25 highest-paid executives for this year shows that 11 executives had a salary above this range, 12 were within this range, and 2 were below this range. NISH and NIB employees, including the highest-paid executives, also received bonuses and benefits, such as pensions, and health, dental, disability, and life insurance. The highest-paid NISH executives as well as staff were entitled to first-class or business air travel in certain circumstances and reimbursement for eligible wellness program expenses up to a maximum of $250 annually. Also, the NISH Chief Executive Officer received a stipend for a car. Within the last 5 years, each CNA has had different consultants conduct compensation assessments to determine whether their compensation was comparable to that of other organizations. The organizations used for comparison had similar missions and levels of revenue for the assessments conducted for NISH, and similar locations for the assessments conducted for NIB. These assessments took into consideration some factors similar to those in the Commission’s proposed regulations, such as comparing the salary of job positions at the CNAs with positions at other organizations deemed similar. However, none of the assessments compared CNA compensation to federal sector compensation. One consultant who conducted one of the studies explained that this was because CNA job titles and functions were more comparable to the for-profit and nonprofit sectors than to the federal sector. These assessments also varied in scope and methodology. For example, while some NISH assessments included a review of the value of all salary, cash incentives, and benefits, the NIB assessments did not include a review of benefits. The 2011 study for NISH found that, with the exception of salaries for three executives, the salaries of all NISH executives were comparable to the market median. The 2009 study for NIB found, in part, that the salaries for NIB’s leadership team needed to be increased to be competitive with the market, and NIB subsequently raised their salaries. Lobbying. Both CNAs reported spending on lobbying; NISH, for example, reported spending $700,000 in a single fiscal year. In the same period, NIB reported lobbying activities related to the AbilityOne Program, the Rehabilitation Act, Social Security, and federal procurement, and NISH reported lobbying related to 10 different bills or laws. Over the 5 years from 2008 to 2012, NIB reported spending about $976,729 and NISH reported spending about $3.5 million on lobbying. Reserves. According to the Commission, to decrease reserves it reduced CNA fee limits in fiscal year 2007. The Commission, however, has not determined what level of reserves for each CNA is inappropriate. We analyzed the reserves for NISH and NIB separately over time. Specifically, the annual reserves for NISH for fiscal years 2008 to 2012, as well as its reserve projection for fiscal year 2013, continued to grow, while NIB’s reserves declined slightly in 2011 and 2013 (see fig. 3). The Commission relies on CNA recommendations when determining which projects are added to the AbilityOne Procurement List and when assigning affiliates to provide them. However, some affiliates have expressed concerns that CNA assignment decisions may not be sufficiently transparent or equitable.
In response to these concerns, the AbilityOne Commission issued a policy on how CNAs should assign projects. While a step in the right direction, this policy may be ineffective in several ways. Federal law gives the AbilityOne Commission the authority to add projects to the AbilityOne Program Procurement List, and federal regulations give the Commission the authority to approve which agencies affiliated with the program can provide the projects. In so doing, the Commission relies heavily on recommendations from the CNAs. Specifically, five steps are required to add a project to the Procurement List (see fig. 4). Under the first step of the Procurement List addition process, the CNAs assign one of their affiliated agencies to develop a business opportunity that potentially may become an AbilityOne project, in accordance with their own procedures. The Commission does not provide input into which affiliate is assigned at this stage. In step 2, the CNAs recommend that the Commission add the potential project to the Procurement List using a standard project addition package. The affiliate that the CNA assigns to develop the potential project is typically the affiliate that the CNA recommends to the Commission in this package to provide the project. In step 3, Commission staff review CNA addition packages to determine whether the project is suitable for the AbilityOne Program, using the criteria in the sidebar. According to Commission staff, they do not determine (1) whether another affiliate would be better positioned to provide the project or (2) whether the CNAs followed appropriate processes in selecting the affiliate. In step 4, Commission members vote on whether to add staff-recommended projects to the Procurement List, using the same four criteria that staff used to evaluate the project. They also vote on whether the CNA-recommended affiliate should be designated to provide the project. According to Commission staff, members vote to add the vast majority of projects staff put forward for addition to the Procurement List. GAO has identified key elements that public procurement systems should have to ensure that they are efficient and accountable. Two of these, which the Commission has also acknowledged in policy as being important in the AbilityOne Program, are: transparency, which includes having written procedures that are easily understandable by all; and equity, which includes maintaining impartiality, avoiding conflicts of interest and preferential treatment, and dealing fairly and in good faith with all parties. The processes the CNAs use to make assignment decisions allow them to exercise discretion when determining which affiliate to assign to a project, and such discretion can limit transparency and equity. A Commission official told us that such discretion is essential to balancing the core mission of this program—providing employment opportunities for people who are blind or have severe disabilities—with providing quality projects to federal agencies in a timely and economical manner. AbilityOne officials also told us that their involvement in determining which affiliate should provide a project is limited. The reasons they gave for relying so heavily on CNA recommendations include (1) historically, project assignment has always been a CNA responsibility, and (2) the CNAs have the necessary expertise to assess which affiliates are best suited to providing specific projects.
Although both NISH and NIB have written procedures for assigning affiliates to projects, some affiliates told us that they do not always find the CNAs’ assignment processes transparent. Both CNAs have basic eligibility criteria that all affiliates must meet or they will be disqualified from pursuing a potential project. NISH has 16 additional criteria that it uses when making assignment decisions among qualifying affiliates and NIB has 7 (see sidebar). Both NISH and NIB also provide feedback to affiliates that were not awarded a project, upon request. NISH officials explained that not all of its criteria are relevant when determining which affiliate should be assigned a project and that each project notification lists the criteria that will be used. NIB officials explained that due to the general nature of their criteria, most are applicable to assignment decisions. Nevertheless, some NISH and NIB affiliates told us that they do not always understand how the CNAs apply the assignment criteria on a project-by-project basis and, as a result, do not understand how their proposals are being judged. One affiliate explained, for example, that its CNA sometimes views geographic proximity to a project’s worksite as more important than prior experience in a relevant line of business when evaluating affiliate proposals and sometimes does the opposite. However, because the CNA does not tell the affiliates up front which criteria will be weighted more heavily, affiliates do not know what elements to emphasize in their proposals and can be confused as to why one affiliate was assigned a project over another. Moreover, some affiliates have questioned the overall integrity of the CNAs’ assignment processes. Several affiliates we spoke with stated that they feel the system is biased in that assignment decisions tend to favor larger affiliates, affiliates that are or were on one of the CNAs’ boards of directors, or affiliates that are members of a particular affiliate subgroup. In addition, NISH assignment decisions are made by a regional executive director in each of its six regions, and some affiliates questioned whether these individuals apply NISH assignment criteria consistently. Affiliates have also said that when NIB identifies a potential project for development, NIB does not routinely notify all affiliates. Instead, NIB usually notifies only those that it thinks may be interested in, and capable of, developing it. During our focus groups with affiliates, several mentioned that this practice can make it difficult for them to be considered for a different or new line of business. NISH, on the other hand, routinely notifies all affiliates of potential projects through its website, and such notification is a requirement in NISH assignment procedures. The Commission’s May 2012 policy, according to Commission officials, sought to articulate a minimum set of broad principles that CNA assignment policies and procedures should incorporate—some of which relate to the elements of transparency and equity discussed above. This was the first time that the Commission had issued a written policy to guide CNA project assignment decisions, although the CNAs have had their own written procedures for years. Commission officials told us that they issued this written policy for two reasons.
First, in the event that an affiliate filed suit in court over an assignment decision, as occurred in 2010, the Commission wanted to be able to point to a written policy that described how it expects CNAs to make assignment decisions. Second, officials felt that having a written policy was important, given complaints levied by some affiliates that CNA assignment decisions sometimes lacked transparency and appeared biased. A Commission official acknowledged that the principles articulated in its assignment policy generally aligned with the CNAs’ written procedures. As a result, the Commission did not expect that the CNAs would need to make substantial changes in their assignment processes. Our review of the Commission’s policy shows that although it describes some desired outcomes regarding CNA assignment decisions, it does little to indicate how these outcomes can be achieved. For example: The policy states that CNAs should develop processes to assure that projects are distributed among affiliates in a fair, equitable, and transparent manner, taking into account the unique mission and objectives of the program. It does not explore how such distribution should be achieved, or define what is meant by fair, equitable, and transparent. To maintain CNA discretion in determining certain criteria to use when making decisions, the policy allows decisions to be at least partially based on special considerations in certain circumstances. The policy gives examples of special considerations, such as providing jobs to wounded warriors or using environmentally friendly supplies, but it does not limit the CNAs to them. The policy also does not define or provide any examples of the circumstances in which the special considerations may be applied, which limits transparency. The policy also lacks transparency because it does not require that the CNAs routinely disclose to affiliates applying for projects how and why special considerations were used in making assignment decisions. Instead, it says that upon the Commission’s request, CNAs must certify that an assignment complies with all applicable policies and procedures and include documentation about any special circumstances in the project addition package submitted to the Commission. The policy also contains three types of enforcement mechanisms, another key internal control intended to ensure that program directives are followed, but they are not well formulated. Specifically: The policy requires that the Commission review CNA assignment processes at least once every 3 years to determine whether these processes are aligned with the principles outlined in its policy. However, because some of the principles contained in this policy are vague, the Commission may have difficulty determining the extent to which CNA procedures are aligned with them. Although the policy states that these reviews would begin in 2012, as of February 2013 the Commission had not developed review procedures or conducted any reviews. The policy requires that CNAs document any special considerations that figure into an assignment decision and provide such documentation to the Commission upon request. It does not, however, specify what the documentation should entail. For example, it does not require the CNA to document why or how a particular consideration was used in an assignment decision. Such information would be critical to assessing whether the assignment decision was impartial and free from bias.
The policy requires that CNAs have written appeal processes in place, and both CNAs had such written procedures before the Commission issued its policy. The policy also requires that the AbilityOne Commission develop its own separate written appeals policy and procedures, which would allow for a second level of appeal. At the time of our review, the Commission did not have a timeline for developing this policy. The AbilityOne Commission has not determined how the assignment of projects among affiliates has affected the creation of employment opportunities for people who are blind or have severe disabilities and, according to Commission officials, has not done so at least in part because of limited resources. Such an assessment is important to conduct for two reasons. First, identifying risks that may affect the capacity of an agency to meet its mission—in this case the creation of jobs for people who are blind or have severe disabilities—is an important internal control. Because the Commission has not determined if or how the current assignment of projects affects its mission, it does not know whether the way projects are currently distributed among affiliates poses a risk to achieving the program’s mission and, if so, the extent of this risk. Second, according to an AbilityOne official, the relationship between the distribution of projects and job creation for people who are blind or have severe disabilities has been an ongoing debate among affiliates, CNAs, and the Commission for years. AbilityOne and CNA officials told us that there is no clear research to indicate whether the current distribution of projects among affiliates affects the number of employment opportunities created for people who are blind or have severe disabilities. On the one hand, AbilityOne and CNA officials said that the program could benefit from spreading projects widely among its affiliates. Under this scenario, the program would not be as reliant on the capabilities of a few affiliates to hire people who are blind or have severe disabilities. Such a broad bench of affiliates may reduce the possibility of the program losing a federal customer if a producing affiliate becomes unable to provide a project, because the project could be transferred to another affiliate within the program that had similar or potentially similar capabilities and capacity. On the other hand, Commission and CNA officials also said there could be benefits from a completely different distribution that assigned relatively more projects to some of the largest affiliates. Larger affiliates typically have more experience, and their size creates the economies of scale necessary to provide large projects, such as computer destruction or contract closeout services for an entire federal agency or program within an agency. We analyzed fiscal year 2012 program data and found that while the largest affiliates represent a minority of the AbilityOne affiliates, they hold the majority of projects. Figure 5 shows that the largest 114 affiliates (20 percent) that provided projects as of the end of fiscal year 2012 had 56 percent of the projects and 79 percent of the sales dollars. We also analyzed the distribution of projects among NIB and NISH affiliates separately. The largest 13 NIB affiliates (20 percent) held 46 percent of AbilityOne projects and 53 percent of AbilityOne sales. The largest 100 NISH affiliates (20 percent) held 50 percent of AbilityOne projects and 80 percent of AbilityOne sales.
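To make the concentration figures concrete, the short Python sketch below shows the kind of calculation that underlies this analysis. It is a minimal sketch using hypothetical affiliate data; the affiliate counts, project counts, and sales figures in the code are illustrative assumptions, not program records.

```python
# Minimal sketch of the type of concentration calculation behind
# figure 5. The affiliate data below are made up for illustration;
# the actual analysis used fiscal year 2012 AbilityOne program records.

def concentration(affiliates, top_share=0.20):
    """Return the share of projects and sales held by the largest
    top_share fraction of affiliates, ranked by sales."""
    ranked = sorted(affiliates, key=lambda a: a["sales"], reverse=True)
    top_n = max(1, round(len(ranked) * top_share))
    top = ranked[:top_n]
    total_projects = sum(a["projects"] for a in ranked)
    total_sales = sum(a["sales"] for a in ranked)
    project_share = sum(a["projects"] for a in top) / total_projects
    sales_share = sum(a["sales"] for a in top) / total_sales
    return top_n, project_share, sales_share

# Hypothetical mix: 20 large affiliates and 80 small ones.
affiliates = (
    [{"projects": 40, "sales": 25_000_000} for _ in range(20)]
    + [{"projects": 5, "sales": 1_000_000} for _ in range(80)]
)

n, project_share, sales_share = concentration(affiliates)
print(f"Largest {n} affiliates hold {project_share:.0%} of projects "
      f"and {sales_share:.0%} of sales")
```

Run on real program data, this kind of calculation would reproduce splits like the reported 20 percent of affiliates holding 56 percent of projects and 79 percent of sales.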
Program officials from all levels, as well as some of the affiliates themselves, told us that small and mid-size affiliates may struggle to compete for AbilityOne projects for a variety of reasons. For example, they told us that small affiliates cannot devote as many resources to business development or may only have the capacity to compete for projects in their local area. Affiliates also said that CNAs may not select them because of a perceived lack of work experience in a new line of business. Indeed, one affiliate told us it provides janitorial services and, despite efforts to expand into other businesses, it could not persuade its CNA to consider it for anything other than janitorial contracts. AbilityOne and CNA officials told us that while they try to give opportunities to smaller, less experienced firms, opportunities for smaller affiliates may be reduced when other factors are taken into account, such as a federal customer’s preference for a larger, more experienced contractor. While the AbilityOne Commission is ultimately responsible for determining the fair market price of projects in the program, it permits the CNAs, affiliates, and federal customers to negotiate pricing and recommend a fair market price for each project. Commission guidance defines a fair market price as the price agreed upon by a buyer and seller, with neither under any compulsion to buy or sell and both having reasonable knowledge of relevant facts.that providing jobs to people who are blind or have severe disabilities may necessitate employing a less than fully productive workforce, which could raise an affiliate’s costs. As a result, according to Commission staff, a project’s price under the AbilityOne Program is not necessarily the lowest possible price, but it also isn’t the highest possible price. Commission guidance holds that the fair market price should include the CNA fee. In addition, the Commission recognizes The process for determining the price of a project begins when an affiliate and federal customer are developing a potential project for the program and ends when that project is added to the Procurement List (see fig. 6). Commission staff review the CNA pricing package in step 3 of the process. This review is a key control intended to ensure a fair market price. Between January 1, 2012 and December 10, 2012, the Commission received 336 new packages for price review. As shown in figure 7, staff recommended 78 to Commission members for final approval (23 percent). Staff rejected the pricing proposed in the other 258 pricing packages (77 percent), primarily because of insufficient documentation, but in some instances because they found the price too high. The CNAs and affiliates have the option of revising and resubmitting the rejected packages. After working with the CNAs, affiliates, and customers, as necessary, to produce better documentation or a revised price, staff recommended that Commission members approve the revised packages of 116 proposals. For the last several years, the Commission has approved all pricing packages the staff have recommended because they agreed with their staffs’ recommendations. Commission staff told us that they consider various factors when reviewing recommended prices, such as whether negotiations between the federal customer and affiliate are sufficiently documented. 
Staff also told us that they conduct research to determine whether the recommended price in a project addition package conforms with the pricing for similar goods and services available from public sources, and if not, whether the project addition package contains a sufficient explanation for these differences. Commission staff also told us that they conduct these reviews in accordance with written policies and procedures, but acknowledged that these instructions are not sufficiently explicit and transparent. Such limitations can make it difficult for the CNAs and affiliates to understand how and why decisions are made. CNA managers and some affiliates told us, for example, that they sometimes do not understand the Commission’s price reviewing procedures and, by extension, its reasons for rejecting prices. This lack of understanding about Commission reviews of recommended prices may partially explain the relatively high rejection rate of initial packages (see fig. 7). More explicit and transparent written policies and procedures on pricing reviews might include, for example, a checklist of what Commission staff should look for when assessing prices and a list of red flags that could indicate when recommended prices might be too high. Clearly communicated price review procedures, including a discussion about the protocols the Commission uses to review pricing packages, could result in better-prepared pricing packages and therefore fewer rejections and less rework. According to AbilityOne policy, all projects that extend beyond a single contract period must include a mechanism for adjusting the price. All parties involved—the affiliate, the federal customer, the CNAs, and the Commission—must agree on the mechanism. According to CNA officials, periodic negotiations between the affiliate and the customer are the most common price revision mechanism. If a price revision conforms to the originally approved mechanism, the affiliate and customer implement the revision without seeking Commission approval or submitting documentation of the revision to the Commission. However, if the change in price does not conform to the originally approved mechanism, Commission policy directs affiliates to prepare a price revision request package, which the CNA submits to the Commission for staff approval. Between January 1, 2012, and December 10, 2012, Commission staff reviewed 569 packages for non-conforming price revisions (see fig. 8). Commission staff initially approved 216 of these packages (38 percent) and, after a subsequent review, approved an additional 157. Commission staff rejected 196 of the price revision packages, none of which had been resubmitted at the time of our review. Commission staff told us that they might reject a price revision for a variety of reasons. For example, staff might see an anomaly in the request, such as a price that is increasing much faster than (1) the contract’s original terms specified for future-year price changes or (2) research indicates it should be changing. Affiliates and their federal customers have the option to resubmit their requests with additional information or clarifications. Commission staff and CNA officials reported that they do not have procedures in place to ensure that affiliates comply with the policy that affiliates report to the Commission, through their CNA, any price revisions that do not conform with approved contract pricing mechanisms.
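The two review funnels just described lend themselves to a simple tally. The sketch below, a minimal illustration rather than any Commission tool, recomputes the reported rates from the counts above; only the counts come from this report, and the helper function and its output format are our own.

```python
# Arithmetic behind the two pricing-review funnels described above.
# All counts are taken from the report (January 1 - December 10, 2012);
# the percentage calculations simply make the reported rates explicit.

def funnel(label, received, approved_initially, approved_after_rework):
    rejected = received - approved_initially
    print(f"{label}:")
    print(f"  initial approval rate:  {approved_initially / received:.0%}")
    print(f"  initial rejection rate: {rejected / received:.0%}")
    print(f"  approved overall:       {approved_initially + approved_after_rework}")

# New pricing packages: 336 received, 78 recommended as submitted,
# 116 more recommended after revision and resubmission.
funnel("Initial pricing packages", 336, 78, 116)

# Non-conforming price revision packages: 569 reviewed, 216 approved
# initially, 157 approved after subsequent review (196 were rejected
# and not resubmitted at the time of the review).
funnel("Price revision packages", 569, 216, 157)
```

The first call reproduces the 23 percent initial approval and 77 percent initial rejection rates; the second reproduces the 38 percent initial approval rate for price revision packages.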
If the Commission becomes aware of unreported price revisions, staff told us that they contact the affiliate and federal customer to attempt to resolve the situation, typically by asking for an immediate price revision package. Commission staff told us that unreported price revisions are a recurring problem and provided us with three examples of price increases that should have been reported between 2 months and 19 years earlier. Although they were not able to estimate the number of times such unreported increases occurred, they said that the recurring nature of the problem causes them concern. CNAs collect information on current prices, but the Commission does not require them to submit this information. If the Commission had this information, it could electronically compare current prices to the data it maintains on approved prices and thus have assurance that its controls were being met. Failure to submit price revision requests to the Commission before raising prices (1) negates the Commission’s internal controls that ensure that affiliates are charging fair market prices and (2) means that the Commission does not have accurate data regarding the prices actually being used within the program. The AbilityOne Program is one of many federal programs designed to help people with disabilities find employment. It is the single largest source of employment for people who are blind and others with severe disabilities. The program’s unique public-private structure was set up more than seven decades ago, when federal purchasing was simpler and much smaller in scale. Today, billions of federal procurement dollars flow through the program every year, and tens of thousands of people who are blind or have severe disabilities are employed through it. The Commission’s oversight of the CNAs is hampered by limitations in its monitoring procedures and in its authority over their operations. Developing a written agreement between the Commission and each CNA that specifies key expectations for the CNAs and oversight mechanisms could improve program accountability. It would be important to work to achieve an agreement within a reasonable period of time, such as 18 months. In the event that an agreement cannot be reached, it is important to identify in advance appropriate next steps for program changes by the Commission to establish adequate oversight and accountability for the AbilityOne Program. In addition, there are specific areas where the Commission needs to establish adequate oversight procedures to better help ensure program integrity, transparency, and effectiveness. These include: obtaining reports from CNAs on alleged misconduct and internal control violations to ensure that any appropriate corrective actions are taken; overseeing CNA procedures for assigning projects to affiliates to help ensure transparency and equity; developing more explicit and transparent written protocols for pricing reviews; and reviewing pricing packages to ensure fair market value. Finally, the AbilityOne Program does not have an independent IG. Without an independent IG, this major procurement program lacks an office to independently audit and investigate waste, fraud, and abuse and to make recommendations for enhancing program integrity and operations. To enhance program effectiveness, efficiency, and integrity in the AbilityOne Program, Congress may wish to consider establishing an independent inspector general for the program with the authority to audit and investigate the Commission and the CNAs.
To promote greater accountability for program effectiveness, efficiency, and integrity, the Chairperson of the U.S. AbilityOne Commission should direct the AbilityOne Commission to enter into a written agreement with each CNA within reasonable established time frames, such as within 18 months. The agreements should establish key expectations for each CNA and mechanisms for the Commission to oversee their implementation and could cover, among other things: expenditures of funds; performance goals and targets; governance standards and other internal controls to prevent fraud, waste, and abuse; access to data and records; consequences for not meeting expectations; and provisions for updating the agreement. If the Commission is unable to enter into such a written agreement with either CNA, the Commission should take steps to designate a CNA that is willing to enter into such an agreement or seek legislation that would require such an agreement as a prerequisite to designation as a CNA. To further improve oversight and transparency in the AbilityOne Program, the Chairperson of the U.S. AbilityOne Commission should: Routinely obtain from the CNAs any audits and reports of alleged misconduct or other internal control violations, and information on corrective actions taken by the CNAs. Take additional action to better ensure that the CNAs’ processes of assigning projects to affiliated agencies result in a transparent and equitable distribution. Such action could include one or more of the following: further developing its policy to specify procedures CNAs should follow to ensure equity and transparency in project assignment decisions; developing protocols for how the Commission will review CNA project assignment procedures to ensure their alignment with the Commission’s policy; or performing a study to determine if and how the distribution of projects among affiliates affects the number of jobs for people who are blind or have severe disabilities. Develop more explicit and transparent written procedures for how Commission staff review pricing packages and clearly communicate these procedures to affiliates and the CNAs. Such communication might also highlight the most common reasons that pricing packages are rejected by Commission staff. Require the CNAs to provide current pricing information to enable the Commission to better identify instances when current prices differ from approved prices. We provided a draft of this report to the AbilityOne Commission, NIB, and NISH for review and comment. The Commission’s comments are reproduced in appendix II, NIB’s comments are reproduced in appendix III, and NISH’s comments are reproduced in appendix IV. Technical comments from all three agencies were incorporated as appropriate. In their written comments, the Commission and the two CNAs agreed with our matter for Congressional consideration and recommendations for executive action. They also provided additional information and disagreed with several findings. We subsequently modified the report in a few places to provide further clarification. With regard to our matter for Congressional consideration about establishing an independent Inspector General (IG) for the program, the Commission concurred that there are benefits to having an independent entity conduct audits where needed. The Commission added that in its view, the creation of an IG would have to be budget neutral given the already scarce program funding for the Commission.
The Commission concurred with our recommendation to enter into a written agreement with each CNA and added that it will pursue these agreements once it has updated and enhanced its regulations to describe its authority and oversight with respect to the CNAs. The Commission added that it anticipates completing the written agreements in 18 to 24 months. The Commission concurred with our recommendation to routinely obtain from the CNAs any audits and reports of alleged misconduct or other internal control violations, and information on corrective actions taken by the CNAs. The Commission added that it will establish or enhance and disseminate policies and procedures regarding CNA oversight and internal controls and anticipates that this will be completed in fiscal year 2014. While NIB agreed with our recommendations to the Commission, NIB disagreed with our finding that the Commission has limited control over CNA spending. NIB highlighted several tools which it believes show that the Commission’s controls are sufficient, such as the Commission’s ability to set fee limits for the CNAs and provide guidance for, and review of, CNA budgets and performance. The report discusses these tools and presents evidence as to why we believe they are not sufficient for the Commission to oversee CNA spending. Both CNAs cited other controls that contribute to the oversight of their budgets. We cited examples of these other controls in the report, including IRS reporting requirements for nonprofit agencies and such CNA internal controls as undergoing annual independent financial audits. However, IRS and CNA internal controls cannot replace Commission oversight because the Commission is the entity that is most knowledgeable about the program’s regulations and is ultimately responsible for ensuring compliance with these regulations and for the stewardship of the program. The Commission and the two CNAs commented on CNA reserve levels. The Commission provided some additional clarification on its written guidance for reserves and actions taken, which we incorporated into the report. NISH disagreed with the statement that the CNAs have been accumulating reserve funds. However, our analysis of certified financial statements for NISH and NIB shows that (1) the annual reserves for NISH for fiscal years 2008 to 2012, as well as its reserve projection for fiscal year 2013, continued to grow and (2) NIB’s reserves declined slightly in 2011 and 2013 (see fig. 3). NISH also disagreed with the statement that the CNAs have not provided the Commission with financial analyses that support their levels of reserves and reserve policies. However, the statement in the report to which NISH refers actually focuses on actions of the Commission, and we have clarified this in the report. This statement indicates that the Commission has not developed guidance about what the CNAs should consider when setting reserve policies nor determined what financial information the CNAs should provide to it to fully support their reserve levels. NISH and NIB cited in their comments the criteria they used to establish their reserve policies and levels. NISH disagreed with the Commission’s position that the Commission lacks the authority to require and enforce program improvements.
During the course of our work, Commission officials noted that the Commission has very little explicit authority to regulate the CNAs and, as a result of this lack of authority, said they have not taken additional action to expand the Commission’s oversight in ways that may be beneficial to the program. They said that, without additional oversight tools, they have few ways to enforce regulations. For example, although they could remove a CNA as an administrator of the program for noncompliance or significantly reduce its fees, such approaches could be highly disruptive to the program and the people it serves. Thus, depending on the infraction in question, they could be reluctant to use them. Because an agency’s interpretation of its regulatory authority under the laws it is charged with administering is generally to be afforded deference, we did not make any changes to our report. However, we note that it may be beneficial for the Commission to engage with NISH on this issue as it takes steps to implement our recommendations, particularly the one focusing on entering into written agreements. NIB disagreed with our finding that the Commission has limited oversight and control over areas such as CNA performance, governance, and internal controls. NIB’s comments on this topic generally provided additional information about NIB’s governance structure and controls and did not directly address the Commission’s level of authority and control. However, in response to NIB’s comments, along with additional clarification from a NIB official, we revised the report to make clear that NIB does not allow board members who are executives or employees of a NIB affiliate to serve as a Board officer, but those individuals can serve on the Board. The Commission agreed with our recommendation that it take additional action to ensure that CNAs’ processes of assigning projects to affiliated agencies result in a transparent and equitable distribution. The Commission noted that it has already initiated a review of CNA assignment policies as part of a larger review of procedures across the entire AbilityOne Program and that it will build our recommendations into the deliberative process. The Commission added that the target completion date for this review and development of procedures is no later than June 2014. Both CNAs disagreed that their processes for assigning projects to affiliates were not transparent. NIB stated that the primary factor it uses when making assignment decisions is the potential to positively impact employment for people who are blind and NISH stated that it ensures transparency through several actions, including posting all notices of project opportunities on its website. However, we continue to believe that greater transparency is needed for the reasons stated in the report, including to address the concerns of some affiliates that: (1) they do not understand how the CNAs prioritize the criteria used to evaluate their proposals; (2) NISH applies its criteria inconsistently across its regions, and (3) NIB does not notify all of its affiliates about potential project opportunities it is considering for the program. NISH also stated that it disagreed with what it believed to be our assessment that CNA assignment processes are biased. We did not, however, state that these processes are biased; rather, we stated that some affiliates view them as biased. Greater transparency can help organizations address concerns of bias. 
NISH also provided additional information about its assignment processes, which we incorporated in the report as appropriate. The Commission and NISH provided comments about the distribution of projects among affiliates. The Commission noted that it will increase its emphasis and attention to mentoring the smaller affiliates so that they can more fully participate in the program. The Commission also suggested that we note that factors other than an affiliate’s size can influence the number of projects affiliates are assigned in the program. We agree, but did not make any revisions to the report in this regard because we had already discussed such factors in the draft. NISH noted that it assigned more projects to its smaller affiliates in fiscal year 2012 than in prior years. However, because it is not clear how the distribution of projects among affiliates affects the creation of employment opportunities for people who are blind or have severe disabilities, it is not currently known whether assigning more projects to smaller affiliates is the most effective path for the program to pursue. The Commission suggested that we modify the wording of our finding on the extent of the Commission’s knowledge about how project assignment affects employment opportunities for its target population. The Commission noted that, while it is presented with information on the number of employment opportunities a proposed project will generate, it does not track the number of overall employment opportunities realized. In response, we revised the wording to clarify that the Commission does not track how the program’s distribution of projects affects job creation for its target population. NIB reiterated several aspects of the process of adding projects to the Procurement List. In response to these comments, we now more explicitly note that the Commission relies on CNA recommendations when adding projects to the Procurement List and votes on whether to approve CNA-recommended affiliates as project providers at the Procurement List addition stage. The Commission agreed with our two recommendations for Commission actions to improve pricing reviews. However, the Commission took exception to our statement that Commission staff do not have written policies and procedures for reviewing pricing packages. The Commission stated that staff do have such written instructions; we confirmed this statement and revised the report to incorporate this information. Nonetheless, the Commission agreed with our assessment that its pricing review procedures are not sufficiently explicit or transparent and that this can make it difficult for the CNAs and affiliates to prepare acceptable pricing packages. The Commission noted, however, that the extent to which Commission reviews of pricing packages can be transparent is limited by the fact that such reviews are often based upon sensitive information that is not releasable to the CNAs or affiliates. We agree, but continue to believe that the Commission can increase the transparency of its pricing review processes. As agreed with your offices, we will send copies to the appropriate congressional committees, the Chairperson of the U.S. AbilityOne Commission, the President and CEO of NISH, the President and CEO of NIB, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-7215 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Appendix I: CNA Operating Expenses for Fiscal Year 2012 (in millions) The CNA fiscal year begins on October 1 and ends on September 30. Employee benefits include health, dental, life, and disability insurance. In response to NISH’s clarification of its operational costs, we removed $31.57 million for subcontracting costs that are associated with federal contracts in which NISH was the prime contractor. According to NISH officials, these costs are not required to be reported on the IRS Form 990 as operational costs, and NIB did not report such costs. The CNA fiscal year begins on October 1 and ends on September 30. Employee benefits include health, dental, life, and disability insurance. NIB’s service bureau costs are for its outsourced data entry and call center. In addition to the contact named above, Assistant Director Bill Keller, Nancy Cosentino, Julie DeVault, Sara Pelton, and Paul Wright made significant contributions to this report. Assistance, expertise, and guidance were provided by Kurt Burgeson, David Chrisinger, Michele Grgich, Alex Galuten, Kristine Hassinger, Steve Lord, Mimi Nguyen, Jerry Sandau, William Shear, Walter Vance, Monique Williams, and William Woods.
In 1938, Congress created a program providing employment opportunities for people who are blind and expanded it in 1971 to include people with severe disabilities. Now known as AbilityOne, the program’s public-private structure consists of the federal, independent U.S. AbilityOne Commission (15 part-time presidentially-appointed members supported by 27 staff) to oversee the program; two central nonprofit agencies (CNAs) to administer much of the program; and hundreds of affiliated nonprofit companies employing people who are blind or severely disabled to provide products and services to federal agencies. Federal agencies are generally required to purchase such products and services through the program. GAO examined how the AbilityOne Commission: (1) directs and oversees the CNAs; (2) adds products and services (hereafter called projects) to the program and assigns affiliates to provide them; and (3) prices program projects. GAO reviewed policies, procedures, relevant federal laws and regulations, and other documents; interviewed CNA and AbilityOne officials; held five focus groups with affiliates; and analyzed data on program products, services, and pricing reviews. Federal agencies need to exercise strong oversight to promote effectiveness and efficiency and prevent waste, fraud, and abuse--especially in a federal procurement program such as this, which is exempt from full and open competition requirements. However, although the AbilityOne Commission is ultimately responsible for overseeing the program, the Commission cannot control how CNAs (1) spend their funds, (2) set and manage their performance goals, or (3) set and implement governance policies and other internal controls. The Commission's authority to direct CNA budget priorities--including how much they compensate their executives and the level and growth of their reserves--is limited. As independent entities, the CNAs are responsible for determining their spending. Most of their money comes from fees they charge their affiliates as a percent of revenue earned from AbilityOne contracts. Moreover, the Commission does not have sufficient authority to set CNA performance and governance standards, so it depends on the CNAs to set and enforce such standards. Although the CNAs have instituted their own internal controls, the Commission does not have procedures to monitor alleged CNA control violations, nor is there an inspector general to provide independent audit and investigative capabilities for the program, including at the CNAs. The AbilityOne Commission is responsible for determining which products and services can be suitably provided by the program. It delegates to the CNAs most of the responsibility for deciding which affiliates should develop and provide these projects. According to CNA and affiliate officials, the CNAs often do not fully disclose how they make these decisions. This limited transparency could increase the risk of biased decisions because CNA officials have wide latitude in determining which affiliate should be awarded a project. Although AbilityOne Commission officials have acknowledged the importance of transparency and equity in assigning projects, they have done little to indicate how these outcomes can be achieved. The Commission has statutory responsibility for determining the fair market price of projects in the program, but: (1) its written pricing review policies and procedures are limited and (2) it does not have sufficient internal controls to ensure that prices are appropriately revised over time. 
The Commission sets procedures that encourage affiliates and federal customers to negotiate prices that reflect the market. Although Commission staff review these prices in accordance with written policies and procedures, they acknowledged that these instructions are not sufficiently explicit or transparent. Such limitations can make it difficult for the CNAs and affiliates to understand the Commission's pricing review procedures and, by extension, its reasons for rejecting prices. This lack of understanding may partially explain the 77 percent rejection rate for initial pricing packages. Commission policy also states that CNAs must submit for Commission review any request for adjusting the price of a project beyond a single contract period that does not conform with the prior Commission-approved mechanism. Occasionally customers and affiliates implement non-conforming price revisions without requesting Commission approval. This negates the Commission's internal controls for ensuring fair market prices and results in the Commission not knowing the actual price being charged. Neither the AbilityOne Commission nor the CNAs have procedures in place to systematically identify such instances. We are presenting a matter for Congressional consideration to establish an inspector general and several recommendations to the Commission to enhance program oversight. The Commission and CNAs agreed with our recommendations, but disagreed with several findings or provided additional information, which we incorporated as appropriate.
The Joint Strike Fighter is DOD’s most expensive aircraft acquisition program. The number of aircraft engines and spare parts expected to be purchased, along with the lifetime support needed to sustain the engines, mean the future financial investment will be significant. DOD is expected to develop, procure, and maintain 2,443 aircraft at a cost of more than $338 billion over the program’s life cycle. The JSF is being developed in three variants for the U.S. military: a conventional takeoff and landing aircraft for the Air Force, a carrier-capable version for the Navy, and a short takeoff and vertical landing variant for the Marine Corps. In addition to its size and cost, the impact of the JSF program is even greater when combined with potential international sales (expected to be between 2,000 and 3,500 additional aircraft) and the current U.S. aircraft that the JSF will either replace or complement to meet mission requirements. Congress first expressed concern over the lack of engine competition in the JSF program in fiscal year 1996 and in fiscal year 1998 directed DOD to ensure that sufficient funding was committed to develop an alternate engine. Since that time, DOD has initiated multiple studies to determine the advantages and disadvantages of the alternate engine program. DOD program management advisory groups conducted studies in 1998 and again in 2002, both resulting in recommendations to continue with the alternate engine program. The advisory groups determined that developing an alternate JSF engine had significant benefits in the areas of contractor responsiveness, industrial base, aircraft readiness, and international participation. They also reported finding marginal benefits in the areas of cost savings and the ability to add future engine improvements. However, they found no benefit with regard to reducing development risk without restructuring the program. The advisory groups noted that these recommendations were made independent of the services’ ability to fund the program—meaning overall affordability should be taken into consideration. In August 2005, DOD awarded a $2.1 billion contract for alternate engine system development and demonstration, of which $699 million has been appropriated to date. In its fiscal year 2007 budget submission, DOD proposed canceling the alternate engine program and eliminated funding related to this effort. While Congress restored the majority of the funding for that year, DOD again eliminated alternate engine funding in its proposed budget for fiscal year 2008. DOD decided to cancel the alternate engine program prior to the fiscal year budget submission, stating that (1) no net cost benefits or savings are to be expected from competition and (2) low operational risk exists for the warfighter under a sole-source engine supplier strategy. We reported last year that this decision was made without a new and comprehensive analysis and focused only on the potential up-front savings in engine procurement costs. We stated further that costs already sunk were inappropriately included and long-term savings that might accrue from competition for providing support for maintenance and operations over the life cycle of the engine were excluded from the decision justification. Our position was that DOD’s decision to cancel the program was driven by the need to identify sources of funding in order to pay for other, more immediate priorities within the department. 
DOD did not change the JSF acquisition strategy to reflect its proposed elimination of the alternate engine program, and the strategy continues to reflect a dual-engine approach. The 2007 Defense Authorization Act has now placed certain restrictions on DOD modification of the dual-engine approach. According to current JSF program plans, beginning in fiscal year 2007, the program office will award the first of three annual production contracts to Pratt & Whitney for its F135 engine. In fiscal years 2010 and 2011, noncompetitive contracts will be awarded both to Pratt & Whitney and to the Fighter Engine Team for the F136 engine. Beginning in fiscal year 2012, contracts will be awarded on an annual basis under a competitive approach for quantities beyond each contractor’s minimum sustaining rate. Full-rate production for the program begins in fiscal year 2014 and is expected to continue through fiscal year 2034. The JSF program intends to use a combination of competition, performance-based logistics, and contract incentives to achieve goals related to affordability, supportability, and safety. Through this approach, the JSF program office hopes to achieve substantial reductions in engine operating and support costs. Traditionally, operating and support costs have accounted for 72 percent of a program’s life cycle costs. Without competition, the JSF program office estimates that it will spend $53.4 billion over the remainder of the F135 engine program. This includes cost estimates for the completion of system development, procurement of 2,443 engines, production support, and sustainment. An additional investment of between $3.6 billion and $4.5 billion may be required should the Department decide to continue competition in the JSF engine program. This includes additional development, procurement, support, and stand-up costs for a second engine provider. While Pratt & Whitney design responsibilities and associated costs may be reduced under a sole-source contract, our analysis shows that competitive pressures may yield enough financial savings to offset the costs of competition over the life of the program. These results are dependent on how the government decides to run the competition, the number of aircraft that are ultimately purchased, and the exact ratio of engines awarded to each contractor. Given certain assumptions with regard to these factors, the additional costs of having the alternate engine could be recouped if competition were to generate approximately 10.3 to 12.3 percent savings. According to actual Air Force data from past engine programs, including for the F-16 aircraft, it is reasonable to expect savings of at least that much. Additionally, there are a number of nonfinancial benefits that may result from competition, including better performance, increased reliability, and improved contractor responsiveness. The cost of the Pratt & Whitney F135 engine is estimated to be $53.4 billion over the remainder of the program. This includes cost estimates for the completion of system development, procurement of engines, production support, and sustainment. Table 1 shows the costs remaining to develop, procure, and support the Pratt & Whitney F135 engine on a sole-source basis.
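To make the breakeven arithmetic concrete, the following minimal sketch works backward from the figures above. The size of the "competable" cost base, the portion of remaining program costs that competition could actually discount, is our inference for illustration and is not a figure stated in this testimony.

```python
# Back-of-the-envelope reading of the breakeven figures cited above:
# $3.6B-$4.5B of added investment, breakeven at roughly 10.3-12.3
# percent savings. Working backward implies the cost base that
# competition would have to discount (an inference, not a GAO figure).

def breakeven_rate(extra_investment_b, competable_base_b):
    """Savings rate at which competition's extra cost is recouped."""
    return extra_investment_b / competable_base_b

for extra_b, reported_pct in [(3.6, 10.3), (4.5, 12.3)]:
    implied_base_b = extra_b / (reported_pct / 100)
    check = breakeven_rate(extra_b, implied_base_b) * 100
    print(f"${extra_b}B extra at {reported_pct}% breakeven implies a "
          f"~${implied_base_b:.0f}B competable base (check: {check:.1f}%)")
```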
Costs remaining for the JSF engine program can be broken down into four categories: remaining system development and demonstration contract costs; engine unit recurring flyaway costs—per unit cost for aircraft, based on rate of learning; production support costs related to production spares, training personnel and equipment, manpower, and depot facilities; and sustainment costs to maintain fielded aircraft based on engine flight hour costs and usage rates. Stable requirements and funding, a well-defined acquisition strategy, an appropriately structured contract, and adequate oversight are keys to ensuring the contractor is motivated to perform, especially under a sole-source contract where competitive pressure does not exist. In a sole-source environment, the primary benefit comes from the improved rate of progress, or “learning,” achieved by the contractor based on having all production activity. In other words, the greater volume of business given to a single contractor is expected to translate into efficiency in production in a shorter time, thereby lowering associated costs. Learning curves must be established in such a way that the contractor is not only intent on meeting the curve but also incentivized to exceed it in order to achieve cost reductions. Through analysis of program information and in conversations with Pratt & Whitney and JSF program office personnel, we found examples of initiatives aimed at improving the F135 learning curve. Pratt & Whitney has ongoing and planned activities in areas such as supply chain optimization, technology development, and manufacturing efficiency that it hopes will reduce unit costs through the first 5 years of F135 production. Having Pratt & Whitney as the single engine manufacturer may also provide benefits in terms of simpler design and integration responsibilities. Currently, in addition to development of the F135 engine design, Pratt & Whitney has responsibility for design and development of common components that will go on all JSF aircraft, regardless of which contractor provides the engine core. Examples of common components include the lift fan and roll posts for the Marine Corps variant, the exhaust nozzles, and ducts. This responsibility supports the overall F-35 program requirement that the engine be interchangeable—either engine can be used in any aircraft variant, either during initial installation or when replacement is required. In the event that Pratt & Whitney is made the sole-source engine provider, future configuration changes to the aircraft and common components could be optimized for the F135 engine, rather than requiring potentially compromised design solutions or additional costs to support both the F135 and F136. In testimony last year, the Under Secretary of Defense for Acquisition, Technology, and Logistics reported that DOD preferred a sole-source engine strategy for the JSF program. He noted that maintaining two engine suppliers for the program would cost, at that time, an additional $1.8 billion for the development phase, which he said was not the most efficient use of Department resources. In fact, when considering the costs of competition over the full life cycle of the F136 program, the additional costs are even greater. The government’s ability to recoup the additional investments required to support competition depends largely on (1) the number of aircraft produced, (2) the ratio that each contractor wins out of that total, and (3) the savings rate that competitive pressures drive.
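For readers unfamiliar with learning curves, the following is a minimal sketch of the standard Wright-style model, in which each doubling of cumulative output multiplies unit cost by a fixed slope. The first-unit cost and slope below are hypothetical; the actual F135 cost targets and curve are JSF program office data not reproduced in this testimony.

```python
import math

def unit_cost(first_unit_cost, unit_number, curve):
    """Wright-style unit learning curve: each doubling of cumulative
    output multiplies unit cost by the curve slope (e.g., 0.90)."""
    exponent = math.log(curve) / math.log(2)
    return first_unit_cost * unit_number ** exponent

# Hypothetical values for illustration only: a $20M first unit on a
# 90 percent curve. Unit costs fall quickly, then flatten.
t1, curve = 20.0, 0.90
for n in (1, 100, 500, 1000):
    print(f"unit {n:>4}: ${unit_cost(t1, n, curve):5.1f}M")
```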
We estimated costs under two competitive scenarios: one in which the contractors are each awarded 50 percent of the total engine purchases (a 50/50 split) and one in which there is a 70/30 percent award split of total engine purchases to either contractor, beginning in fiscal year 2012. Without consideration of potential savings, the additional costs of competition total $4.5 billion under the first scenario and $3.6 billion under the second scenario. Table 2 shows the additional cost associated with competition under these two scenarios. The disparity in costs between the two competitive scenarios reflects the loss of learning resulting from lower production volumes that is accounted for in the projected unit recurring flyaway costs used to construct each estimate. The other costs include approximately $1.4 billion in remaining F136 development costs and $127 million in additional stand-up costs, which would be the same under either competitive scenario. DOD implemented the JSF alternate engine development program to provide competition between two engine manufacturers in an effort to achieve cost savings, improve performance, and gain other benefits. For example, competition may incentivize the contractors to achieve more aggressive production learning curves, produce more reliable engines that are less costly to maintain, and invest additional corporate money in technological improvements to remain competitive. To reflect these and other potential factors, we applied a 10 to 20 percent range of potential cost savings to our estimates, where pertinent to a competitive environment. Further, when comparing life cycle costs, it is important to consider that many of the additional investments associated with competition are made early in the program’s life cycle, while much of the expected savings does not accrue for decades. Therefore, a net present value calculation (accounting for the time value of money) must be included in the analysis and, once applied, provides a better estimate of the program’s rate of return. Figure 1 shows the results of our analysis under different scenarios and accounting for the time value of money. When we assumed overall savings due to competition, our analysis indicated that recoupment of those initial investment costs would occur at savings rates somewhere between 10.3 and 12.3 percent, depending on the number of engines awarded to each contractor. A competitive scenario where one of the contractors receives 70 percent of the annual production aircraft, while the other receives only 30 percent, reaches the breakeven point at 10.3 percent savings. A competitive scenario where both contractors receive 50 percent of the production aircraft reaches this point at 12.3 percent savings. We believe it is reasonable to assume at least this much savings in the long run based on analysis of actual data from the F-16 engine competition. Competition may also provide benefits that do not result in immediate financial savings, but may result in reduced costs or other positive outcomes to the program over time. DOD and others have performed studies and have widespread concurrence as to these other benefits, including better engine performance, increased reliability, and improved contractor responsiveness. In fact, in 1998 and 2002, DOD program management advisory groups assessed the JSF alternate engine program and found the potential for significant benefits in these and other areas. Table 3 summarizes the benefits determined by those groups.
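The net present value mechanics can be illustrated with a short sketch. All inputs below (the cash flow timing, a $50 billion savings base, and a 3 percent discount rate) are assumptions chosen only to show the method, not GAO's actual model; with these inputs the breakeven savings rate happens to fall near 12 percent, in the neighborhood of the 10.3 to 12.3 percent range reported above.

```python
def npv(rate, cashflows):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical shape, loosely scaled to the testimony: roughly $4B of
# extra investment over the first 5 years, savings applied to a ~$50B
# base spread evenly over the following 25 years, 3 percent real
# discount rate. None of these inputs is GAO's actual data.
EXTRA = [-0.8] * 5            # $B per year, years 0 through 4
BASE, YEARS, RATE = 50.0, 25, 0.03

def net_npv(savings_rate):
    flows = EXTRA + [savings_rate * BASE / YEARS] * YEARS
    return npv(RATE, flows)

for s in (0.08, 0.103, 0.123, 0.16):
    print(f"savings {s:6.1%}: NPV {net_npv(s):+6.2f} $B")
```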
While the benefits highlighted may be more difficult to quantify, they are no less important, and they ultimately were strongly considered in recommending continuation of the alternate engine program. These studies concluded that the program would maintain the industrial base for fighter engine technology, instill contractor incentives for better performance, ensure an operational alternative if the current engine developed problems, and enhance international participation. We spoke with government officials from various organizations who widely concurred with our analysis of the potential benefits of engine competition. Many of these were important benefits realized by past competitions, such as that for the Air Force F-16 aircraft engines. The Air Force engine manager who co-led both advisory group studies explained that these benefits are valuable when trying to manage significant numbers of fighter-type engines to ensure combat readiness. He told us that problems are magnified when trying to manage a single engine system, which can require substantial manpower and extra hours to keep aircraft flying when engine problems occur. In his opinion, the benefits of a dual-source engine would outweigh the costs. He stated that he had not seen anything that would change this conclusion since the last advisory group study was conducted. The ability of competition to deliver such benefits is important for the JSF program. In addition to considering engine price, the program office has identified a range of potential criteria for competition during the production and support phases of the program, which could include other costs, reliability, and sustainability. It is reasonable to assume that competition under these criteria may drive better engine performance and reliability over the life of the program. Such improvements can positively affect fleet readiness and schedule outcomes while avoiding costs in various other areas for the JSF program. Another potential benefit of having an alternate engine program, and one also supported by the program advisory group studies, is to reduce the risk that a single-point, systemic failure in the engine design could substantially affect the fighter aircraft fleet. Though current performance data indicate it is unlikely that engine problems would lead to fleetwide groundings in modern aircraft, having two engine sources for the single-engine JSF further reduces this risk, as it is less likely that such a problem would occur in both engine types at the same time. Because the JSF is expected to be the primary fighter aircraft in the U.S. inventory, and Pratt & Whitney will also be the sole-source provider of F119 engines for the F-22A aircraft, DOD is faced with the potential scenario where almost the entire fleet could be dependent on similar engine cores, produced by the same contractor in a sole-source environment. Results from past competitions provide evidence of the potential financial and nonfinancial benefits that can be derived from engine programs. One relevant case study to consider is the “Great Engine War” of the 1980s—the competition between Pratt & Whitney and General Electric to supply military engines for the F-16 and other fighter aircraft programs. At that time, all engines for the F-14 and F-15 aircraft were being produced on a sole-source basis by Pratt & Whitney, which was criticized for increased procurement and maintenance costs, along with a general lack of responsiveness with regard to government concerns about those programs.
For example, safety issues on the single-engine F-16 aircraft were seen as having greater consequences than on the twin-engine F-14 or F-15 aircraft. To address these concerns, the Air Force began to fund the development and testing of an alternate engine to be produced by General Electric; the Air Force also supported the advent of an improved derivative of the Pratt & Whitney engine. Beginning in 1983, the Air Force initiated a competition that Air Force documentation suggests resulted in significant cost savings in the program. For example, in the first 4 years of the competition, when actual costs are compared to the program’s baseline estimate, results included nearly 30 percent cumulative savings for acquisition costs, roughly 16 percent cumulative savings for operations and support costs, and total savings of about 21 percent in overall life cycle costs. While sole-source contracts have been the general rule for engine program strategies, evidence shows that when competition was utilized for even part of those programs, positive outcomes were often realized. Other than the Great Engine War, there have been a number of U.S. competitions for modern fighter engines, including those for the F-15, F/A-18, and F-22A fighter aircraft. During the course of this review, government and contractor personnel told us that the difference between these programs and the F-16 was that competition was limited to only one phase of the program (i.e., program initiation or the production phase). For example, the General Electric F404 engine, which today powers the Navy F/A-18 aircraft and the Air Force F-117A aircraft, was competed in the mid-1980s. In that case, the Navy had decided to upgrade the A-6 aircraft to the A-6F model with two F404 engines, thereby increasing the number of F404 engines in the fleet. The Navy leadership recommended a second source for that engine, and Pratt & Whitney was awarded a “build-to-print” contract, which meant it would produce additional F404 engines according to the General Electric design. While this competition did provide some improvements in contractor responsiveness, government and contractor officials told us this was not an optimum competitive environment, as it provided no design competition. The Great Engine War was able to generate significant benefits because competition incentivized contractors to improve designs and reduce costs during production and sustainment. Competitive pressure continues today, as the F-15 and F-16 aircraft are still being sold internationally. While the other competitions resulted in some level of benefits, especially with regard to contractor responsiveness, they did not see the same levels of success absent continued competitive pressures. The economic stakes in the JSF engine program are likely to be high given the size of the program, international participation, and the expected supplier base. Participation in the development, production, and support of the JSF engine program will position Pratt & Whitney, the Fighter Engine Team, and their respective supplier bases to compete for future military development and acquisition programs. According to government officials, Pratt & Whitney faces a decline in the area of large commercial engines, which could result in a shift of workforce and overhead costs to military programs. While it is the sole-source provider of the engine for the Air Force F-22A aircraft, production will likely end in 2012 for that program.
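As a rough consistency check on those F-16-era figures, weighting the reported acquisition and operations and support savings by an assumed cost mix lands near the reported total. The 28/72 acquisition-to-O&S split below borrows the life cycle rule of thumb cited earlier in this testimony; the F-16 program's actual cost mix may have differed.

```python
# Weighted-average check on the reported "Great Engine War" results:
# ~30% acquisition savings and ~16% O&S savings should blend to
# something near the ~21% total life cycle savings reported, given a
# plausible cost mix. The 28/72 weights are an assumption.
acq_savings, os_savings = 0.30, 0.16
acq_weight, os_weight = 0.28, 0.72

total = acq_weight * acq_savings + os_weight * os_savings
print(f"implied life cycle savings: {total:.1%}")  # ~19.9%, near the reported ~21%
```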
Pratt & Whitney will, at a minimum, provide some of the engines for the JSF program; the extent will be determined by whether the Fighter Engine Team remains a competitor and, if so, the amount of contract awards that company can win. Should the JSF program suffer substantial schedule slips beyond 2011 or 2012, the gap between the end of F-22A production and the onset of JSF production could grow, resulting in workforce disruptions or other negative effects. General Electric is a significant entity in the market for large commercial engines. However, the company faces declining production within its other fighter engine programs, such as the Navy’s F/A-18E/F, which could result in erosion of specialized skills should the company not continue as a participant in the JSF program. While the overall health of the company is very strong, business decisions as to where to invest company resources could favor the commercial side, should military business decline substantially. Due to the size of the JSF program, the industrial base implications reach far beyond Pratt & Whitney and the Fighter Engine Team. With JSF contracts awarded to suppliers within both the U.S. and international partner countries, JSF propulsion production and support business will contribute to the global engine industrial base for almost 60 years. While companies that participate are likely to see increased business opportunities, if the JSF comes to dominate the market for tactical aircraft, as DOD expects, companies that are not part of the program could see tactical aircraft business decline. DOD officials noted in 2006 that canceling the F136 engine program would save DOD $1.8 billion in needed investments over the remaining 7 years of development, which could be used to fund higher-priority programs. According to our analysis, that figure is now $1.4 billion, and it does not include the approximately $2.2 billion to $3.1 billion of additional investments for procurement, production support, and stand-up necessary for competition. However, our analysis indicates that this investment may be recouped under a competitive approach if it generates savings of 10.3 to 12.3 percent. Historical data indicate that it is reasonable to assume savings of that much and more. Choices made today will ripple forward and influence additional, and perhaps even more challenging, decisions in the future. The JSF engine acquisition strategy is one such choice facing DOD today. The results of our work indicate that with the proper structure and attention, and with the up-front investments, the alternate engine program can ultimately recover those investments and potentially provide additional benefits to the program. Prior engine programs and more recent DOD studies and analyses also suggest these outcomes to be reasonable. DOD is now faced with prioritizing its short-term needs against potential long-term payoffs through competition for JSF engine development, procurement, and sustainment. Mr. Chairmen, this concludes my prepared statement. I will be happy to answer any questions you or other members of the subcommittee may have. For future questions regarding this testimony, please contact Michael J. Sullivan, (202) 512-4841. Individuals making key contributions to this testimony include Brian Mullins, Assistant Director; J. Kristopher Keener; Daniel Novillo; Greg Campbell; Charles Perdue; and Adam Vodraska. Joint Strike Fighter: Progress Made and Challenges Remain, GAO-07-360. Washington, D.C.: Mar. 15, 2007.
Tactical Aircraft: DOD’s Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis, GAO-06-717R. Washington, D.C.: May 22, 2006. Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases, GAO-06-487T. Washington, D.C.: Mar. 16, 2006. Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance, GAO-06-356. Washington, D.C.: Mar. 15, 2006. Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization, GAO-05-519T. Washington, D.C.: Apr. 6, 2005. Defense Acquisitions: Assessments of Selected Major Weapon Programs, GAO-05-301. Washington, D.C.: Mar. 31, 2005. Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy, GAO-05-271. Washington, D.C.: Mar. 15, 2005. Tactical Aircraft: Status of F/A-22 and JSF Acquisition Programs and Implications for Tactical Aircraft Modernization, GAO-05-390T. Washington, D.C.: Mar. 3, 2005. Joint Strike Fighter Acquisition: Observations on the Supplier Base, GAO-04-554. Washington, D.C.: May 3, 2004. Joint Strike Fighter Acquisition: Managing Competing Pressures Is Critical to Achieving Program Goals, GAO-03-1012T. Washington, D.C.: July 21, 2003.

In conducting our analysis of costs for the Joint Strike Fighter (JSF) engine program, we relied primarily on program office data. We did not develop our own source data for development, production, or sustainment costs. In assessing the reliability of data from the program office, we compared those data to contractor data, spoke with agency and other officials, and determined that the data were sufficiently reliable for our review. Other base assumptions for the review are as follows: Unit recurring flyaway cost includes the costs associated with procuring one engine and certain nonrecurring production costs; it does not include sunk costs, such as development and test, and other costs to the whole system, including logistical support and construction. Engine procurement costs reflect only U.S. costs, but assume the quantity benefits of the 646 aircraft currently anticipated for foreign partner procurement. Competition, and the associated savings anticipated, begins in fiscal year 2012. Engine maturity, defined as 200,000 flight hours with at least 50,000 hours in each variant, is reached in fiscal year 2012. Two years are needed for delivery of aircraft. Aircraft life equals 30 years at 300 flight hours per year.

For the sole-source Pratt & Whitney F135 engine scenario, we calculated costs as follows:

Development: We relied on JSF program office data on the remaining cost of the Pratt & Whitney development contract. We considered all costs for development through fiscal year 2007 to be sunk costs and did not factor them into the analysis.

Production: For the cost of installed engine quantities, we multiplied planned JSF engine quantities for U.S. aircraft by unit recurring flyaway costs specific to each year, as derived from cost targets and a learning curve developed by the JSF program office. For the cost of production support, we relied on JSF program office cost estimates for initial spares, training, support equipment, depot stand-up, and manpower related to propulsion. Because the JSF program office calculates those numbers to reflect two contractors, we applied a cost reduction factor in the areas of training and manpower to reflect the lower cost to support only one engine type.
Sustainment: For sustainment costs, we multiplied the planned number of U.S. fielded aircraft by the estimated number of flight hours for each year to arrive at an annual fleet total. We then multiplied this total by the JSF program office’s estimated cost per engine flight hour specific to each aircraft variant. Sustainment costs do not include a calculation of the cost of engine reliability or technology improvement programs.

For a competitive scenario between the Pratt & Whitney F135 engine and the Fighter Engine Team (General Electric and Rolls-Royce), we calculated costs as follows:

Development: We used current JSF program office estimates of remaining development costs for both contractors and considered all costs for development through fiscal year 2007 to be sunk costs.

Production: We used JSF program office data for engine buy profiles, learning curves, and unit recurring flyaway costs to arrive at a cost for installed engine quantities on U.S. aircraft. We performed calculations for competitive production quantities under 70/30 and 50/50 production quantity award scenarios. We used JSF program office cost estimates for production support under two contractors. We assumed no change in support costs based on the specific numbers of aircraft awarded under competition, as each contractor would still need to support some number of installed engines and provide some number of initial spares.

Sustainment: We used the same methodology and assumptions to perform the calculation for sustainment costs in a competition as in the sole-source scenario.

Savings: We analyzed actual cost information from past aircraft propulsion programs, especially that of the F-16 aircraft engine, in order to derive the expected benefits of competition and determine a reasonable range of potential savings. We applied this range of savings to the engine life cycle, including recurring flyaway costs, production support, and sustainment. We assumed costs to the government could decrease in any or all of these areas as a result of competitive pressures. We did not apply any savings to the system development and demonstration phase or the first five production lots because they are not fully competitive. However, we recognize that some savings may accrue as contractors prepare for competition.

In response to the request to present our cost analyses in constant dollars, then-year dollars, and using net present value, we calculated all costs using constant fiscal year 2002 dollars; used separate JSF program office and Office of the Secretary of Defense inflation indices for development, production, production support, and sustainment to derive then-year dollars (when necessary for the out years, we extrapolated the growth of escalation factors linearly); and utilized accepted GAO methodologies for calculating discount rates in the net present value analysis.

No cost analysis was performed for the scenario where a fixed-price contract would be awarded in fiscal year 2008 for the entire life of the engine program because neither the contractor nor the Department of Defense calculates the necessary cost data. During our discussions with both DOD officials and contractor representatives, it was determined that neither viewed a fixed-price contract as a viable option for which they could quantify a risk premium. We did not perform cost analyses of alternative strategies, as we determined no other alternative could be implemented without disruption to the JSF program’s cost and schedule.
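A minimal sketch of the production and sustainment calculations described above follows. All inputs are hypothetical stand-ins for the program office data, and the split-buy comparison illustrates the loss of learning discussed earlier: two contractors each starting at unit 1 cost more in total than a single sole-source run.

```python
import math

def unit_cost(t1, n, curve):
    """Wright learning-curve unit cost for cumulative unit n."""
    return t1 * n ** (math.log(curve) / math.log(2))

def run_cost(t1, quantity, curve):
    """Total recurring flyaway cost of one contractor's production run."""
    return sum(unit_cost(t1, n, curve) for n in range(1, quantity + 1))

# Hypothetical inputs throughout: the real quantities, cost targets,
# curves, and cost-per-engine-flight-hour values came from the JSF
# program office and are not reproduced in this testimony.
T1, CURVE, TOTAL_ENGINES = 20.0, 0.90, 2000   # $20M first unit, 90% curve

# Loss of learning: splitting one buy across two contractors costs
# more than a single sole-source run of the same total quantity.
sole = run_cost(T1, TOTAL_ENGINES, CURVE)
for share in (0.7, 0.5):
    a = int(TOTAL_ENGINES * share)
    split = run_cost(T1, a, CURVE) + run_cost(T1, TOTAL_ENGINES - a, CURVE)
    print(f"{share:.0%}/{1 - share:.0%} split: {split / sole - 1:+.1%} vs. sole source")

# Sustainment, as described above: fielded aircraft x flight hours per
# aircraft (300 per year, per the base assumptions) x cost per engine
# flight hour, summed across years.
COST_PER_EFH = 1_500                               # $ per engine flight hour (assumed)
fleet_by_year = {2014: 100, 2015: 180, 2016: 270}  # hypothetical fielding ramp

sustainment = sum(n * 300 * COST_PER_EFH for n in fleet_by_year.values())
print(f"three-year sustainment: ${sustainment / 1e6:.0f}M")
```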
Our analysis of the industrial base did not independently verify the relative health of either contractor’s suppliers or workload.
The Joint Strike Fighter (JSF) is the linchpin of future Department of Defense (DOD) tactical aircraft modernization efforts because of the sheer size of the program and its envisioned role as the replacement for hundreds of aircraft that perform a wide variety of missions in the Air Force, Navy, and Marine Corps. DOD implemented the JSF alternate engine development program in 1996 to provide competition between two engine manufacturers in an effort to achieve cost savings, improve performance, and gain other benefits. This testimony focuses on GAO's cost analysis performed in response to Section 211 of the John Warner National Defense Authorization Act for Fiscal Year 2007. We examined the following areas: (1) sole-source and competitive scenarios for development, production, and sustainment of the JSF engine, (2) results of past engine programs and their related strategies, and (3) impact on the industrial base in the event of the complete cancellation of the JSF alternate engine program. DOD did not provide comments on our findings. Continuing the alternate engine program for the Joint Strike Fighter would cost significantly more than a sole-source program but could, in the long run, reduce costs and bring other benefits. The current estimated life cycle cost for the JSF engine program under a sole-source scenario is $53.4 billion. To ensure competition by continuing to implement the JSF alternate engine program, an additional investment of $3.6 billion to $4.5 billion may be required. However, the associated competitive pressures from this strategy could result in savings equal to or exceeding that amount. The cost analysis we performed suggests that a savings of 10.3 to 12.3 percent would recoup that investment, and actual experience from past engine competitions suggests that it is reasonable to assume that competition on the JSF engine program could yield savings of at least that much. In addition, DOD-commissioned studies and government officials have indicated that nonfinancial benefits in terms of better engine performance and reliability, improved industrial base stability, and more responsive contractors are more likely outcomes under a competitive environment than under a sole-source strategy. DOD experience with other aircraft engine programs, including the F-16 fighter in the 1980s, has shown that competitive pressures can generate financial benefits of up to 20 percent during the life cycle of an engine program, as well as improved quality and other benefits. The potential for cost savings and performance improvements, along with the impact the engine program could have on the industrial base, underscores the importance and long-term implications of DOD decision making with regard to the final acquisition strategy solution.
The 340B Program, which is administered and overseen by HRSA, within HHS, is named for the statutory provision authorizing it, which was added to the Public Health Service Act in 1992. Eligibility for the program is statutorily defined and is limited to entities that participate in specified federal programs and hospital types that meet certain eligibility criteria. A clinic or other site affiliated with a hospital, but not located in the main hospital building, is eligible to participate in the 340B program if it is an integral part of the hospital, which HRSA has defined as a reimbursable facility included on the eligible hospital’s most recently filed Medicare cost report. Independent physician-based practices that do not participate in the qualifying federal programs are not eligible to participate in the 340B Program. Covered entities may use 340B drugs for patients whether or not they are low-income, uninsured, or underinsured, and covered entities may receive payments from health insurers, such as Medicare, that are higher than the drug’s discounted price, generating revenue through the program. A statutory pricing formula determines the 340B price of a drug. The amount of the 340B discount ranges from an estimated 20 to 50 percent off what the entity would have otherwise paid. Throughout calendar year 2012, there were 10,622 unique covered entities that participated in the 340B program—an increase of 20 percent since 2008. Approximately half of the increase in unique covered entities was among entities that became eligible for the program based on expanded eligibility criteria enacted by the Patient Protection and Affordable Care Act in 2010. The remaining increase was among entity types that were eligible for the program in both 2008 and 2012, including 340B DSH hospitals. In 1992, the House Energy and Commerce Committee estimated that approximately 90 DSH hospitals would have been eligible to participate in a 340B Program, had it been in effect at that time. In 2012, 1,057 DSH hospitals participated in the program. Medicare pays most hospitals through both the acute care inpatient prospective payment system (IPPS), which is covered by Medicare Part A, and the outpatient prospective payment system (OPPS), which is covered by Medicare Part B. Under these systems, Medicare pays providers a predetermined rate for a given service that is expected to cover the costs incurred by efficient providers. Within the OPPS, certain services, including most Part B drugs, are paid separately. Payments under the IPPS are adjusted to account for the beneficiary’s clinical condition and related treatment costs relative to the average Medicare case, and payments under both the IPPS and the OPPS are adjusted for the market conditions in the hospital’s location relative to national conditions. Hospitals may receive additional payments if they qualify for certain adjustments, such as the following:

DSH adjustment: The DSH adjustment generally provides supplemental payments for inpatient services to hospitals that treat a disproportionate number of low-income inpatients. To qualify for this adjustment, a hospital’s disproportionate patient percentage—the share of low-income patients treated by the hospital—must generally equal or exceed a specific threshold level determined by a statutory formula. The amount of the DSH payment adjustment varies by hospital location and size.
GME and IME adjustments: Medicare reimburses teaching hospitals and academic medical centers for both the direct and indirect costs of their residency training programs. GME payments cover the direct costs of resident training, such as salaries and benefits, for both inpatient and outpatient services. The IME adjustment applies only to inpatient services and reflects the higher patient care costs associated with resident education. The size of the IME adjustment depends on the hospital’s teaching intensity, which is generally measured by a hospital’s number of residents per bed.

Outlier case payment: The outlier case payment protects hospitals from large financial losses due to unusually costly inpatient and outpatient cases. A hospital’s costs for the case must exceed a certain threshold amount, and additional payments are based on a percentage of the costs above this threshold.

MDH classification: The MDH classification allows small rural hospitals for which Medicare patients make up a significant percentage of inpatient days or discharges to receive adjustments to their IPPS rates. To qualify as an MDH, a hospital has to meet various criteria regarding location, size, and patient mix.

In 2012, the Medicare program and its beneficiaries spent a total of $6 billion for Part B drugs in the hospital outpatient setting. Part B drugs are typically administered by a physician or under a physician’s close supervision in physicians’ offices or hospital outpatient departments. Under the OPPS, Medicare reimburses all hospitals for separately payable Part B drugs at rates determined by a statutorily defined formula, regardless of the price the hospital pays for the drug. Medicare pays 80 percent of the payment rate for Part B drugs, and the beneficiary is responsible for the remaining 20 percent (a simple sketch of this split follows this section). Typically, Part B drugs are provided with a physician service, which is also paid for by both Medicare and the patient. In general, spending for Part B drugs and other services has a financial impact on Medicare beneficiaries because monthly Part B premiums are set to cover 25 percent of total Part B expenditures. In 2012, 340B DSH hospitals were generally larger and more likely to be teaching hospitals—especially major teaching hospitals—compared with non-340B hospitals. Although 340B DSH hospitals tended to have lower total facility margins compared with non-340B hospitals, they tended to have higher total Medicare margins. Lower total facility margins among 340B DSH hospitals could be partly attributable to the tendency for these hospitals to provide more charity care and uncompensated care compared with non-340B hospitals, although there were notable exceptions. Higher total Medicare margins among 340B DSH hospitals could be partly attributable to the receipt of more Medicare payment adjustments by these hospitals. Compared with non-340B hospitals—including both non-340B DSH hospitals and other non-340B hospitals—340B DSH hospitals in our analysis tended to be larger in terms of annual total facility revenue, annual Medicare revenue, and the number of inpatient beds in 2012. The differences between 340B DSH hospitals and non-340B hospitals were most pronounced among major teaching hospitals, and among the 279 major teaching hospitals, 189 (or nearly 70 percent) were 340B DSH hospitals (see table 1).
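The sketch referenced above uses a hypothetical payment rate to show the 80/20 split between Medicare and the beneficiary for a separately payable Part B drug.

```python
# Minimal sketch of the Part B payment split described above: Medicare
# pays 80 percent of the OPPS payment rate for a separately payable
# drug, and the beneficiary owes the remaining 20 percent. The payment
# rate here is hypothetical; actual rates are set by statutory formula.
payment_rate = 1_000.00                   # $ per drug administration (assumed)

medicare_share = 0.80 * payment_rate      # paid by the Medicare program
beneficiary_copay = 0.20 * payment_rate   # owed by the beneficiary
print(f"Medicare pays ${medicare_share:,.2f}; "
      f"beneficiary owes ${beneficiary_copay:,.2f}")
```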
Further, the median DSH adjustment percentage among 340B DSH hospitals in our analysis was twice as high as the median DSH adjustment percentage among non-340B DSH hospitals—18 percent compared with 9 percent. Among major teaching hospitals, the median DSH adjustment percentage was over three times as high—28 percent compared with 8 percent. Compared with non-340B hospitals, in 2012, 340B DSH hospitals generally provided more charity care and uncompensated care, as a proportion of total facility revenue—although there were notable exceptions. In addition, we found that higher DSH adjustment percentages were often, but not always, associated with provision of greater amounts of charity care and uncompensated care by hospitals. Across all hospitals in our analysis, as hospitals’ DSH adjustment percentages increased, the average amount of charity care and uncompensated care they provided, as a proportion of total facility revenue, generally increased. (See fig. 1.) The median amount of uncompensated care provided by 340B DSH hospitals was 1.4 percentage points greater than the median amount provided by non-340B DSH hospitals, and 3.6 percentage points greater than the median amount provided by other non-340B hospitals. The median amount of charity care provided by 340B DSH hospitals was 0.8 percentage points greater than the median amount provided by non-340B DSH hospitals, and 1.4 percentage points greater than the median amount provided by other non-340B hospitals. (See table 2.) However, there were notable numbers of 340B DSH hospitals that provided low amounts of uncompensated care and charity care. For example, while we found that 340B DSH hospitals tended to provide a larger amount of charity and uncompensated care compared with non-340B hospitals, 12 percent of 340B DSH hospitals in our analysis were among the hospitals that provided the lowest amounts of charity care. We also found that 14 percent were among the hospitals that provided the lowest amounts of uncompensated care across all hospitals in our analysis. Additionally, among 340B DSH hospitals, the median amount of uncompensated care provided by major teaching hospitals was less than the median amount provided by all hospitals in the group, despite the fact that the major teaching hospitals in this group tended to have the highest DSH adjustment percentages. Additionally, nearly one-quarter of 340B DSH hospitals that were major teaching hospitals provided low amounts of uncompensated care. (See table 3.) Compared with non-340B hospitals, 340B DSH hospitals in our analysis generally had lower overall financial margins in 2012, as measured by their total facility margins. Specifically, the median annual total facility margin among 340B DSH hospitals (3.7) was 1.8 percentage points lower than the median annual total facility margin among non-340B DSH hospitals (5.5), and 3.3 percentage points lower than the median annual total facility margin among other non-340B hospitals (7.0). This finding was generally consistent when we looked at hospitals by characteristics such as teaching status (major teaching, other teaching, or nonteaching), ownership type (public, nonprofit, or for-profit), and location (urban or rural). The lower total facility margins among 340B DSH hospitals could be attributable, in part, to the tendency for 340B DSH hospitals to provide a larger amount of charity care and uncompensated care, as a proportion of total facility revenue, compared with non-340B hospitals.
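The margin figures above can be illustrated with the standard revenue-minus-cost-over-revenue calculation. Note that this definition, and the revenue and cost figures below, are our assumptions for illustration, chosen only so the output reproduces the 3.7 median facility margin reported for 340B DSH hospitals.

```python
# Margins sketched under the standard definition: revenue minus cost,
# as a percentage of revenue. The report does not spell out its exact
# formula, and the dollar figures below are hypothetical.
def margin_pct(revenue, cost):
    return 100 * (revenue - cost) / revenue

total_revenue, total_cost = 500.0e6, 481.5e6
print(f"total facility margin: {margin_pct(total_revenue, total_cost):.1f}")  # 3.7
```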
Compared with non-340B hospitals, 340B DSH hospitals in our analysis generally had substantially higher (i.e., less negative) total Medicare margins and inpatient Medicare margins in 2012 (see table 4). The median annual total Medicare margin that year among 340B DSH hospitals was -2.7, which was 4.6 and 13.3 percentage points higher than the median annual total Medicare margin among non-340B DSH hospitals and other non-340B hospitals, respectively. Similarly, the median annual inpatient Medicare margin among 340B DSH hospitals was 0.2, which was 7.8 and 22.1 percentage points higher than the median annual inpatient Medicare margin among non-340B DSH and other non-340B hospitals, respectively. The higher total Medicare margins and higher inpatient Medicare margins for 340B DSH hospitals may be attributable, in part, to the amount of Medicare payment adjustments they received. The 340B hospitals in our analysis were more likely to receive Medicare payment adjustments and to receive higher payment adjustment amounts compared with non-340B hospitals, which resulted in increased Medicare revenue for these hospitals. For example, in 2012, 340B DSH hospitals were more likely than non-340B DSH hospitals to receive three of the five payment adjustments we examined—IME, GME, and outlier case (see table 5). Additionally, in 2012, 340B DSH hospitals received higher payment amounts, as a proportion of total Medicare revenue, for four of the five payment adjustments we examined—IME, GME, DSH, and outlier case adjustments—compared with non-340B hospitals (see fig. 2). Despite their participation in the 340B Program, 340B DSH hospitals in our analysis generally had lower outpatient Medicare margins compared with non-340B hospitals. In 2012, the median annual outpatient Medicare margin among 340B DSH hospitals was 1.8 percentage points lower than that of non-340B DSH hospitals and 1.7 percentage points lower than that of other non-340B hospitals. Lower outpatient Medicare margins among 340B DSH hospitals were likely due to a variety of factors. One potential factor is that there are fewer Medicare payment adjustments for outpatient services. Among the five payment adjustments we examined, only two—GME and outlier case—apply to outpatient payments. In both 2008 and 2012, per beneficiary Medicare Part B drug spending, including oncology drug spending, was substantially higher at 340B DSH hospitals than at non-340B hospitals. This indicates that, on average, Medicare beneficiaries were prescribed more drugs, more expensive drugs, or both, at 340B DSH hospitals. The differences we found did not appear to be explained by the hospital or patient population characteristics we examined. Because Medicare pays hospitals at set rates for Part B drugs regardless of their costs for acquiring them, there is a financial incentive at hospitals participating in the 340B program to prescribe more drugs or prescribe more expensive drugs to Medicare beneficiaries. The substantially higher spending at 340B DSH hospitals may reflect a response to this incentive. Among the hospitals in our analysis that provided outpatient services and whose 340B status did not change between 2008 and 2012, on average, per beneficiary Medicare Part B drug spending was substantially higher at 340B DSH hospitals compared with non-340B hospitals in both 2008 and 2012. For example, in 2012, average per beneficiary spending at 340B DSH hospitals was $144, compared to $60 and $62 at non-340B DSH and other non-340B hospitals, respectively. (See fig. 3.)
Because Medicare reimbursement rates for Part B drugs at all of the hospitals in our analysis were based on the same fee schedule, this indicates that, on average, Medicare beneficiaries at 340B DSH hospitals were prescribed more drugs or prescribed more expensive drugs, or both, than beneficiaries at the other hospitals in our analysis. The spending differences between 340B DSH hospitals and non-340B hospitals remained even after we accounted for teaching status, ownership type, or location (i.e., urban or rural). For example, among both teaching and nonteaching hospitals, average per beneficiary Part B drug spending was much higher at 340B DSH hospitals than at non-340B hospitals. (See fig. 4.) Further, these differences were not explained by the factors we examined that might disproportionately affect hospitals that treat higher proportions of low-income patients. For example, among hospitals with high levels of charity care or high levels of uncompensated care, and among hospitals with a high DSH adjustment percentage—all indicators that these hospitals treat a higher proportion of low-income patients—Part B drug spending was much higher among 340B DSH hospitals in both 2008 and 2012. (See fig. 5.) Additionally, the differences we found were likely not explained by the health status of the outpatients served. Specifically, in 2008 and 2012, the health status of outpatient beneficiaries was generally similar at 340B and non-340B hospitals. For example, in 2012, the average risk score—a measure of relative health status—of these outpatient beneficiaries at 340B DSH hospitals was 1.50, while it was 1.45 at non-340B DSH hospitals and 1.36 at other non-340B hospitals. Risk scores are based on overall health care spending and are not limited to drug spending. However, the difference between the risk scores of beneficiaries treated at 340B DSH hospitals and non-340B hospitals relative to these hospitals’ Part B drug spending suggests that the substantially higher spending at 340B DSH hospitals may not be explained by differences in patient health status. The relatively higher Part B drug spending at 340B DSH hospitals potentially could, in part, reflect a tendency for some beneficiaries to receive all of their Part B drugs in a hospital outpatient department instead of a physician’s office. To the extent this occurs, some of the higher spending at 340B DSH hospitals may not be associated with increases in overall Medicare spending for Part B drugs. However, we found that, in 2012, among patients who received Part B drugs in hospital outpatient departments, the percentage of patients who only received drugs in that setting—meaning that they did not receive any Part B drugs at a physician’s office—was only slightly higher at 340B DSH hospitals (59 percent) compared to non-340B DSH hospitals (54 percent) and other non-340B hospitals (54 percent). Moreover, when we limited our analysis to patients who only received Part B drugs in a hospital outpatient department, the substantially higher spending at 340B DSH hospitals persisted. Specifically, in 2012, average per beneficiary Part B drug spending for these patients was $2,743 in 340B DSH hospitals, compared to $1,295 in non-340B DSH hospitals and $1,634 in other non-340B hospitals. Among the hospitals in our analysis that provided outpatient oncology services and whose 340B status did not change between 2008 and 2012, all three groups of hospitals served more oncology patients in 2012 compared to 2008. (See table 6.)
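The per beneficiary measure used in this comparison can be sketched as follows. The claim rows below are fabricated for illustration, chosen so the 340B DSH group echoes the reported $144 average; the actual analysis used Medicare claims data.

```python
from collections import defaultdict

# Sketch of the per beneficiary spending measure: total Part B drug
# payments per hospital group divided by the count of distinct
# beneficiaries who received a Part B drug there. Rows are fabricated.
claims = [  # (hospital_group, beneficiary_id, part_b_drug_payment_$)
    ("340B_DSH", "A1", 90.0), ("340B_DSH", "A2", 198.0),
    ("non340B_DSH", "B1", 60.0),
    ("other_non340B", "C1", 62.0),
]

spending = defaultdict(float)
beneficiaries = defaultdict(set)
for group, bene_id, paid in claims:
    spending[group] += paid
    beneficiaries[group].add(bene_id)

for group in spending:
    avg = spending[group] / len(beneficiaries[group])
    print(f"{group}: ${avg:.0f} per beneficiary")
```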
For both years, average per beneficiary Medicare Part B oncology drug spending was highest at 340B DSH hospitals. Higher average per beneficiary spending at 340B DSH hospitals compared to non-340B hospitals persisted regardless of teaching status or patient health status. For example, in 2008 and 2012, the health status of outpatient oncology beneficiaries who received a Part B drug was similar at 340B and non-340B hospitals. In 2012, the average risk score of these outpatient oncology beneficiaries at 340B DSH hospitals was 2.29, while it was 2.11 at non-340B DSH hospitals and 2.14 at other non-340B hospitals. Risk scores are based on overall health care spending and are not limited to oncology drug spending specifically. Nevertheless, the difference between the risk scores of beneficiaries treated at 340B and non-340B hospitals relative to these hospitals’ Part B oncology drug spending suggests that the substantially higher Part B spending at 340B DSH hospitals may not be explained by differences in patient health status. Because Medicare reimbursement rates for Part B oncology drugs at all of the hospitals in our analysis were based on the same fee schedule, this indicates that, on average, Medicare beneficiaries at 340B DSH hospitals were prescribed more oncology drugs, or prescribed more expensive oncology drugs, than beneficiaries at the other hospitals in our analysis. The average number of oncology patients served increased among all three of our hospital groups between 2008 and 2012, but 340B DSH hospitals saw the greatest increase in such patients served (83 to 120, or 45 percent). The increase across all three hospital groups in the number of oncology patients served may reflect recent trends in oncology treatment, such as where patients are treated, and could be due to multiple factors, including factors outside of the 340B program. For example, stakeholders that we spoke with noted that there is a larger trend toward integration in the health care industry. However, 340B DSH hospitals were much more likely to treat oncology patients compared with non-340B hospitals. In addition, there was a 5 percentage point increase from 2008 to 2012 in the percentage of 340B DSH hospitals that treated oncology patients, while the increases for non-340B DSH and other non-340B hospitals were 1 and 2 percentage points, respectively. Medicare uses a statutorily defined formula, which CMS cannot alter based on hospitals’ acquisition costs, to pay hospitals at set rates for drugs regardless of what hospitals pay to acquire them, and the 340B statute does not restrict covered entities from using drugs purchased at the 340B discounted price for Medicare Part B beneficiaries. Consequently, there is a financial incentive at these hospitals to prescribe more drugs and more expensive drugs to Medicare beneficiaries in order to maximize the revenue generated by the difference between the cost of the drug and Medicare’s reimbursement. The substantially higher per beneficiary Medicare spending for Part B drugs at 340B DSH hospitals, which did not appear to be explained by hospital characteristics or patient health status, may reflect responses to this incentive. Unnecessary spending has negative implications, not just for the Medicare program, but for Medicare beneficiaries as well, who would be financially liable for larger copayments as a result of receiving more drugs or more expensive drugs, and higher Part B premiums that reflect the increases in Medicare spending for those drugs.
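The incentive described above can be made concrete with a short sketch. The 20 to 50 percent discount range comes from this report; the list price and the fixed Medicare payment below are hypothetical.

```python
# Sketch of the incentive: Medicare's payment for a Part B drug is
# fixed regardless of acquisition cost, so a hospital buying at the
# 340B price keeps the spread. Prices here are illustrative only.
list_price = 1_000.00        # $ the hospital would otherwise pay (assumed)
medicare_payment = 1_060.00  # $ fixed Medicare payment (assumed)

for discount in (0.20, 0.50):
    price_340b = list_price * (1 - discount)
    spread = medicare_payment - price_340b
    print(f"{discount:.0%} discount: hospital nets ${spread:,.2f} per unit")
```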
Moreover, there are potential concerns about the appropriateness of the health care provided to Medicare beneficiaries if it is overly influenced by financial incentives to prescribe outpatient drugs. Certain providers, including hospitals that serve a disproportionate number of low-income patients, have access to discounted prices on outpatient drugs through the 340B Drug Pricing Program. Currently, approximately 40 percent of all U.S. hospitals participate in the program, including approximately 1,000 DSH hospitals. Because DSH hospitals account for nearly 80 percent of all 340B drug purchases, it is important to understand the characteristics of the population that is served by these hospitals in order to evaluate the impact of the 340B program on hospitals and their patients. We found that 340B DSH hospitals generally provided more charity care and uncompensated care compared with non-340B hospitals. However, there were notable exceptions to this pattern. Specifically, 12 percent of the 340B DSH hospitals reported providing relatively small amounts of charity care, and 14 percent reported providing relatively small amounts of uncompensated care. The financial incentive to maximize Medicare revenues through the prescribing of more or more expensive drugs at 340B hospitals also raises concerns. Our work suggests that 340B DSH hospitals may be responding to this incentive to maximize Medicare revenues. On average, per beneficiary Medicare spending on Part B drugs in 2008 and 2012 was substantially higher at 340B DSH hospitals compared with non-340B hospitals—yet we did not find that these differences could be readily explained by hospital characteristics or patients’ health status. While hospitals may be financially benefiting—which is not inconsistent with the legislative design of the 340B Program—this poses potentially serious consequences to the Medicare program and its beneficiaries. Not only does excess spending on Part B drugs increase the burden on both taxpayers and beneficiaries who finance the program through their premiums, it also has direct financial effects on beneficiaries who are responsible for 20 percent of the Medicare payment for their Part B drugs. Furthermore, this incentive to prescribe these drugs raises potential concerns about the appropriateness of the health care provided to Medicare Part B beneficiaries. Absent a change in financial incentives, potentially inappropriate spending on drugs may continue. While limiting hospitals’ Medicare Part B reimbursement for 340B discounted drugs or eliminating the 340B discount for drugs provided by hospitals to Medicare Part B beneficiaries could diminish the incentive to prescribe more drugs or more expensive drugs than necessary at 340B hospitals, CMS and HRSA are unable to take such actions because they do not have the statutory authority to do so. To help ensure the financial sustainability of the Medicare program, protect beneficiaries from unwarranted financial burden, and address potential concerns about the appropriateness of the health care provided to Part B beneficiaries, Congress should consider eliminating the incentive to prescribe more drugs or more expensive drugs than necessary to treat Medicare Part B beneficiaries at 340B hospitals. We provided a draft of this report for review to HHS and received written comments that are printed in appendix I.
Because of the focus on 340B hospitals in this report, we also provided 340B Health (formerly Safety Net Hospitals for Pharmaceutical Access) an opportunity to review a draft of this report, and we have summarized the comments we received below. HHS and 340B Health also provided technical comments, which we incorporated as appropriate. Following is our summary of and response to comments from HHS and 340B Health. In its comments, HHS stated that our examination of Medicare Part B outpatient drug spending is a useful initial analysis of differences in spending between 340B DSH hospitals and non-340B hospitals. HHS also noted concerns related to some of our conclusions; however, we believe our methods and findings were robust and appropriately support our conclusions, as discussed below. First, HHS noted that although we examined differences in per beneficiary spending by hospital type, we did not examine differences in patient outcomes or quality. HHS acknowledged that higher spending for Part B drugs at 340B hospitals could represent unnecessary or excess spending for these drugs. However, HHS stated that it is also possible that a higher volume of physician-administered drugs could lead to better clinical outcomes. While we did not attempt to evaluate health outcomes as part of our analysis, we have no evidence to suggest that non-340B hospitals had an incentive to provide a lower volume of Part B drugs than required to achieve positive clinical outcomes. In particular, we believe that because Medicare reimbursed all hospitals in our analysis—including non-340B hospitals—based on the drug's average sales price plus a fixed percentage, non-340B hospitals would have no incentive to underprescribe Part B drugs. Second, HHS questioned our interpretation of the differences between the average risk scores among the three hospital groups (1.50 for 340B DSH hospitals vs. 1.45 and 1.36 for non-340B DSH and other non-340B hospitals, respectively). HHS believes that the differences in risk scores could represent a meaningful difference in the health status of beneficiaries. We acknowledge that the differences in risk scores could represent a difference in the health status of the beneficiaries served by each hospital group. However, we believe that the relative difference between the risk scores and the per beneficiary Part B drug spending at 340B DSH and non-340B hospitals indicates that the substantially higher spending at 340B DSH hospitals may not be explained by differences in patient health status. For example, based on the risk scores, overall health care spending for beneficiaries who received Part B drugs at 340B DSH hospitals in 2012 would have been expected to be, on average, 3.4 percent higher than overall health spending that year for beneficiaries who received Part B drugs at non-340B DSH hospitals. In contrast, spending for Part B drugs at 340B DSH hospitals was substantially higher—140 percent higher—than spending at non-340B DSH hospitals. While the spending expectation from the risk scores applies to overall health care spending, not just Part B drug spending, the relative percentage differences suggest that the higher spending at 340B DSH hospitals may not be explained by differences in patient health status. 340B Health noted several concerns related to the methodologies we used for our analysis. However, we believe that our methods were sound, as described below.
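The arithmetic behind the risk-score comparison above, reproduced using only figures reported here:

```python
# Risk scores imply ~3.4 percent higher expected overall spending at
# 340B DSH hospitals, while observed per beneficiary Part B drug
# spending was ~140 percent higher. All inputs are from this report.
risk_340b, risk_non340b_dsh = 1.50, 1.45
spend_340b, spend_non340b_dsh = 144.0, 60.0   # $ per beneficiary, 2012

expected_gap = risk_340b / risk_non340b_dsh - 1
observed_gap = spend_340b / spend_non340b_dsh - 1
print(f"expected from risk scores: {expected_gap:.1%}")   # 3.4%
print(f"observed Part B drug gap:  {observed_gap:.0%}")   # 140%
```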
340B Health expressed concerns about the methodology we used to examine the amount of charity care and uncompensated care provided by hospitals. In particular, 340B Health stated that the data from worksheet S-10 in the Medicare hospital cost reports that we used for this analysis are too unreliable to serve as the basis for policy conclusions because the data are not used by CMS to determine Medicare payments. However, before we conducted our analysis, we confirmed with CMS that the agency did not have any concerns about our use of the data in the S-10 worksheet for our analysis. In addition, we performed our own data reliability assessment and concluded that the cost report data were sufficiently reliable for our study. The Medicare cost report is collected annually from all institutional providers that render services to Medicare beneficiaries. Among other things, these reports contain self-reported information on facility characteristics, utilization data, and financial statement data. We used these data to describe various characteristics of hospitals, including hospitals' self-reported levels of charity care and uncompensated care. 340B Health also questioned whether our methods controlled for certain reasons it might be appropriate for Medicare Part B spending to be significantly higher at 340B hospitals. For example, they noted that 340B hospitals are larger, more likely to be teaching hospitals, and more likely to treat cancer patients or otherwise higher-risk patients. Our analyses controlled for each of these characteristics. To control for the size of each hospital, we calculated Part B drug spending at the per beneficiary level. To control for the effect of teaching hospital status, we examined Part B drug spending by teaching hospital level (major teaching, other teaching, and nonteaching), and we found substantially higher Part B drug spending at 340B DSH hospitals regardless of teaching status. To control for the possibility that 340B DSH hospitals were more likely to treat cancer patients, we conducted a separate analysis of Part B spending for oncology drugs at 340B DSH and non-340B hospitals and found a similar pattern of substantially higher spending at 340B DSH hospitals. Although controlling for teaching status and conducting separate analyses of oncology drug spending may have in part controlled for the treatment of higher-risk patients, we also conducted analyses to determine whether patient health status at 340B DSH hospitals may explain the substantially higher Part B drug spending at these hospitals. 340B Health expressed concerns about the methodology we used in this analysis, noting that the patient risk scores we used were not intended to predict Part B drug spending—which was a limitation we noted in our report. However, the risk scores we used are an indication of the expected overall health care spending for the beneficiaries served by the hospitals in our analysis, and we found small differences in expected overall health care spending across the hospital groups. As we noted above, we believe that the relative difference between the risk scores and the per beneficiary Part B drug spending at 340B DSH and non-340B hospitals indicates that the substantially higher spending at 340B DSH hospitals may not be explained by differences in patient health status; the sketch below illustrates this arithmetic.
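To make this comparison easy to verify, the following minimal sketch reproduces the arithmetic using only the figures reported in this section; the variable names are ours, chosen for illustration.

```python
# Average 2012 risk scores reported above; risk scores are calibrated so that
# expected overall health care spending scales roughly in proportion to them.
risk_340b_dsh = 1.50      # 340B DSH hospitals
risk_non340b_dsh = 1.45   # non-340B DSH hospitals

# Expected difference in overall spending implied by the risk scores.
expected_diff = risk_340b_dsh / risk_non340b_dsh - 1
print(f"Expected overall spending difference: {expected_diff:.1%}")  # ~3.4%

# Observed difference in per beneficiary Part B drug spending, as reported.
observed_diff = 1.40
print(f"Observed Part B drug spending difference: {observed_diff:.0%}")  # 140%
```

The order-of-magnitude gap between the roughly 3 percent expected difference and the 140 percent observed difference is what underlies our conclusion that patient health status alone is unlikely to explain the spending gap.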
Additionally, in expressing concerns about the risk score measures, 340B Health referred to a Medicare Payment Advisory Commission report that questioned the usefulness of these measures for assessing expected spending for individual beneficiaries. However, the same report also stated that, on average, the risk scores are accurate predictors of patient health status, and for our report, we calculated an average risk score for each hospital group. 340B Health also questioned whether our exclusion of a group of hospitals—smaller, mostly nonteaching DSH hospitals that were in the 340B Program in 2012, but not in 2008—from our spending analysis might have skewed our findings. Our discussion in the report focused on our analysis of hospitals that participated in the 340B program in both 2008 and 2012 to ensure a like-to-like comparison. However, although we did not include a discussion of it in the report, we did separately examine Part B drug spending at DSH hospitals that participated in the 340B Program in 2012 but not in 2008. For example, we found that, in 2008, Part B drug spending at these hospitals was similar to spending at other non-340B DSH hospitals. However, in 2012, after the hospitals joined the 340B Program, Part B drug spending at these hospitals was 53 percent higher than spending at non-340B DSH hospitals (and among the nonteaching hospitals, spending at 340B DSH hospitals was 73 percent higher than at non-340B DSH hospitals). Furthermore, although spending was higher at these 340B DSH hospitals in 2012, the average risk score of patients treated at these hospitals (1.41) was slightly lower than the average risk score of patients treated at non-340B DSH hospitals (1.45). These findings indicate that, like the hospitals we included in our report, these newer participants in the 340B program may have been responding to the financial incentives in the program. Finally, 340B Health expressed concern that we did not attempt to review patient outcomes or otherwise evaluate the quality of care provided to beneficiaries at 340B DSH hospitals compared with non-340B hospitals and cited research that found that increased use of outpatient drugs can reduce spending on health services. However, the research 340B Health cited was not focused on Part B drugs—which are generally drugs administered by a physician in a clinical setting—but rather on the effects of insurance coverage for prescription drugs on medical costs, and so is not directly relevant to our analysis. In addition, as we noted above, while we did not attempt to evaluate health outcomes as part of our analysis, we have no evidence to suggest that non-340B hospitals had an incentive to provide a lower volume of Part B drugs than required to achieve positive clinical outcomes due to the structure of Medicare's payment for Part B drugs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health & Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
In addition to the contact named above, individuals making key contributions to this report include Gerardine Brennan, Assistant Director; George Bogart; Lori Fritz; Daniel Lee; Elizabeth T. Morrison; Aubrey Naffis; and Daniel Ries.
Approximately 40 percent of all U.S. hospitals participate in the 340B Drug Pricing Program, and the majority of 340B discounted drugs are sold to hospitals. Medicare reimburses hospitals for Part B drugs under a statutory formula regardless of the prices hospitals paid for the drugs. Stakeholders have questioned the increase in hospital participation in the 340B program and the implications of that participation for Medicare and its beneficiaries, especially regarding cancer care, as well as whether certain of the program's hospital eligibility criteria target hospitals appropriately. GAO was asked to review hospitals' participation in the 340B and Medicare programs. This report (1) compares 340B hospitals with non-340B hospitals in terms of financial and other characteristics and (2) compares spending for Medicare Part B drugs at 340B hospitals, for all drugs and for oncology drugs, with spending at non-340B hospitals. To examine hospital participation using the most recent data available, GAO analyzed 2008 and 2012 data from HRSA and CMS to compare characteristics and Medicare Part B drug spending for 340B hospitals and non-340B hospitals. Certain providers, including hospitals that serve a disproportionate number of low-income patients, have access to discounted prices on outpatient drugs through the 340B Drug Pricing Program, which is administered by the Health Resources and Services Administration (HRSA) within the Department of Health & Human Services (HHS). In 2012, these hospitals—referred to as 340B disproportionate share hospitals (DSH) because they are eligible for the program based on their serving a disproportionate share of low-income patients and other specified criteria—were generally larger and more likely to be teaching hospitals compared with non-340B hospitals. They also tended to provide more uncompensated and charity care than non-340B hospitals; however, there were notable numbers of 340B hospitals that provided low amounts of these types of care. For example, 12 percent of 340B DSH hospitals were among the hospitals that reported providing the lowest amounts of charity care across all hospitals in GAO's analysis. Overall financial margins for 340B DSH hospitals tended to be lower compared with non-340B hospitals, which could be attributable, in part, to the tendency for 340B DSH hospitals to provide more uncompensated and charity care. GAO found that in both 2008 and 2012, per beneficiary Medicare Part B drug spending, including oncology drug spending, was substantially higher at 340B DSH hospitals than at non-340B hospitals. This indicates that, on average, beneficiaries at 340B DSH hospitals were either prescribed more drugs or more expensive drugs than beneficiaries at the other hospitals in GAO's analysis. For example, in 2012, average per beneficiary spending at 340B DSH hospitals was $144, compared with approximately $60 at non-340B hospitals. The differences did not appear to be explained by the hospital characteristics GAO examined or patients' health status. The Centers for Medicare & Medicaid Services (CMS), which administers the Medicare program, uses a statutorily defined formula to pay hospitals for drugs at set rates regardless of hospitals' costs for acquiring the drugs. Therefore, there is a financial incentive at hospitals participating in the 340B program to prescribe more drugs or more expensive drugs to Medicare beneficiaries.
Unnecessary spending has negative implications, not just for the Medicare program, but for Medicare beneficiaries as well, who would be financially liable for larger copayments as a result of receiving more drugs or more expensive drugs. In addition, this raises potential concerns about the appropriateness of the health care provided to these beneficiaries. HRSA and CMS have limited ability to counter this incentive because the 340B statute does not restrict covered entities from using drugs purchased at the 340B discounted price for Medicare Part B beneficiaries and the Medicare statute does not limit CMS reimbursement for such drugs. In commenting on a draft of this report, HHS noted some concerns with GAO's conclusions and suggested that further analysis may be needed to examine patient outcomes and differences in health status. GAO believes its methods appropriately support its conclusions, as further discussed in the report. Congress should consider eliminating the incentive to prescribe more drugs or more expensive drugs than necessary to treat Medicare Part B beneficiaries at 340B hospitals.
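As a simple illustration of the beneficiary cost-sharing point above, the sketch below applies the 20 percent Part B coinsurance to the 2012 per beneficiary averages GAO reported; it is illustrative only, since these are averages and some beneficiaries have supplemental coverage that pays part of this amount.

```python
COINSURANCE = 0.20  # Part B beneficiaries pay 20 percent of the Medicare payment

spending_340b_dsh = 144.0  # average 2012 per beneficiary Part B drug spending
spending_non_340b = 60.0   # approximate figure for non-340B hospitals

# Implied average beneficiary liability at each hospital group.
print(f"340B DSH copayment: ${COINSURANCE * spending_340b_dsh:.2f}")  # $28.80
print(f"non-340B copayment: ${COINSURANCE * spending_non_340b:.2f}")  # $12.00
```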
The DI program was established in 1956 to provide monthly cash benefits to individuals unable to work because of severe long-term disability. Cash benefits are payable monthly, as long as the worker remains eligible for benefits, until the worker reaches full retirement age or dies. In fiscal year 2011, more than 10 million beneficiaries received DI benefits exceeding $128 billion, and the program's average monthly benefit was about $926. Benefits may also be paid to eligible dependents of workers who are deceased, retired, or considered eligible for disability benefits, meaning one disability beneficiary can generate multiple monthly disability payments for dependents. The statutory definition of disability and SSA's regulations on substantial gainful activity (SGA) appear at 42 U.S.C. § 423 and 20 C.F.R. § 404.1572, respectively. To help ensure that SSA does not pay benefits to persons who do not have long-term disabilities, a DI program statute requires individuals to serve a 5-month waiting period prior to receiving DI benefits. The waiting period begins in the first full month in which the individual has been under a disability and continues for the next 4 consecutive months; thus, the individual must have a medically determinable impairment that prevents the individual from earning SGA-level wages throughout a period of 5 consecutive calendar months. As shown in figure 1, if an individual has substantial earnings from work during any month of the waiting period, the individual is considered to be not disabled and therefore ineligible for DI benefits, and any DI payments SSA makes are potentially improper. On the basis of our analysis of SSA data on individuals who were DI beneficiaries as of December 2010 and earnings data from the NDNH, we estimate that SSA made $1.29 billion in DI benefit payments that were potentially improper to about 36,000 individuals as of January 2013. Our estimate for the amount of payments that were potentially improper has a margin of error of plus or minus $352 million, meaning the actual amount of payments that were potentially improper could be as low as $936 million and as high as $1.64 billion with a 95 percent level of confidence. Our estimate for the number of individuals has a margin of error of plus or minus 7,000 individuals, meaning the actual number of individuals to whom SSA made payments that were potentially improper could be as low as 29,000 and as high as 43,000 with a 95 percent level of confidence. As shown in figure 3, the estimated 36,000 DI beneficiaries receiving potential overpayments represent about 0.4 percent of all primary DI beneficiaries at that time. Our analysis identifies individuals who received DI benefits that were potentially improper due to work activity performed (1) during the 5-month waiting period, or (2) beyond the 9-month trial work period and the grace period. It is important to note that it is not possible to determine from data analysis alone the extent to which SSA made improper disability benefit payments to these individuals. To adequately assess an individual's work status, a detailed evaluation of all the facts and circumstances must be conducted for each beneficiary. This evaluation would include contacting the beneficiary and the beneficiary's employer to gather information on certain impairment-related work expenses, such as transportation costs and attendant care services, which are not considered in our analysis. On the basis of this additional information, SSA can determine whether the individual is entitled to continue to receive disability benefits or have such payments suspended. As described below, SSA had identified and established overpayments for some of the individuals we reviewed at the time of our audit.
However, SSA had not identified potentially disqualifying work activity for other individuals we reviewed at the time of our audit. We estimate that SSA made payments that were potentially improper to about 21,000 individuals who were DI beneficiaries in 2010 and who had substantial earnings from work during the 5-month waiting period, resulting in potential overpayments of $920 million as of January 2013. Our estimate for the amount of payments that were potentially improper due to work activity during the waiting period has a margin of error of plus or minus $348 million, meaning the actual amount of payments that were potentially improper due to work activity during the waiting period could be as low as $571 million and as high as $1.27 billion with a 95 percent level of confidence. Our estimate for the number of individuals has a margin of error of plus or minus 7,000 individuals, meaning the actual number of individuals to whom SSA made payments that were potentially improper could be as low as 14,000 and as high as 28,000 with a 95 percent level of confidence. The exact number of individuals who received improper disability payments and the exact amount of improper payments made to those individuals cannot be determined without detailed case investigations by SSA. See appendix II for more information on the statistical estimations of overpayments for these populations. In addition to our statistical sample, we reviewed detailed DI case-file information for three beneficiaries randomly selected from those in our sample with SGA-level earnings during the waiting period and additional SGA-level earnings after the waiting period that were earned within 1 year of their alleged disability onset date. (According to SSA policy, when a beneficiary returns to work less than 1 year after onset, it may indicate that the 12-month duration requirement for disability was not met and thus that the beneficiary's disability claim must be denied.) Because we selected a small number of individuals for further review, these examples cannot be projected to the population of individuals receiving potential DI benefit overpayments. As mentioned earlier, individuals are required to serve a 5-month waiting period prior to receiving DI benefits to ensure that SSA does not pay benefits to persons who do not have long-term disabilities. Specifically, substantial earnings from work during the 5-month waiting period may indicate that individuals are not considered disabled and therefore are not entitled to DI benefits. Thus, if SSA discovers substantial earnings from work during the waiting period prior to adjudicating the disability claim, SSA will deny the disability claim. SSA may also reopen previously awarded disability claims by revising the decision to a denial of benefits after providing due process rights. Additionally, SSA's policies allow it the option of considering a later disability-onset date rather than denying the disability claim if it discovers the SGA-level work activity after that work activity has stopped. According to SSA guidance, an unsuccessful work attempt during the waiting period will not preclude a finding of disability.
Waiting Period Example 1: Beneficiary Did Not Report All Earnings, SSA's Enforcement Operation Did Not Generate Alert, and SSA Applied Trial Work Period Rather than Waiting Period Program Rules—Potential Overpayment of $90,000
The beneficiary filed for benefits in November 2009 while he had substantial earnings from working as a physician.
He had substantial earnings from work in all 5 months of the waiting period, as much as $22,000 monthly, and continued to have substantial earnings from work in the month he started receiving benefits. As mentioned, SSA's policies allow it the option of considering a later disability-onset date rather than denying the disability claim when it discovers SGA-level earnings during the waiting period. According to this SSA guidance, work activity of 6 months or less can be considered an "unsuccessful work attempt" during the waiting period, and will not preclude a finding of disability. During his initial claims interview, the beneficiary told SSA that he worked for only 2 months during the waiting period, a period short enough to be considered an unsuccessful work attempt. However, SSA did not verify the beneficiary's wages with his employer and approved the individual for benefits beginning in January 2010. In contrast, the wages we confirmed with the beneficiary's employer indicate that the beneficiary had earnings continuously above the SGA level for every month of the year that he applied for and was approved for benefits, including all 5 months of the waiting period. Further, because these earnings were continuously above the SGA level for more than 6 months, SSA policy indicates that his work activity cannot be considered an unsuccessful work attempt. As such, this individual's work activity indicates that he was not disabled, and therefore was ineligible for benefits. Additionally, SSA's enforcement operation did not generate an alert for his work activity during the waiting-period months in 2009 because his first benefit payment was in January 2010. According to SSA officials, the enforcement operation will only generate an earnings alert for a year in which a beneficiary receives a payment. In this example, because DI benefit payments began in 2010, and the waiting period was during the year prior to the first payment, no earnings alert was generated. In 2011, SSA initiated a work-related continuing disability review (CDR) as a result of an earnings alert for work activity after the waiting period. As mentioned, SSA conducts CDRs to determine if beneficiaries are working above the SGA level. However, during the CDR, SSA program staff did not request and verify wages from the beneficiary's employer, nor did staff apply program rules regarding work activity during the waiting period. Instead, the CDR considered the work activity under the trial work period rules and determined that benefits should continue. Because the beneficiary's impairment did not prevent him from earning SGA-level wages during the waiting period and his SGA continued after the waiting period, SSA policies indicate that SSA should have denied the individual benefits when his case was adjudicated, or reopened the determination after adjudication and revised the claim to a denial of benefits, which would have resulted in an overpayment of all benefits previously paid. As such, we estimate that SSA made $90,000 in cash benefit payments that were potentially improper to this individual over a period of more than 3 years. As of May 2013, SSA had not detected or assessed any overpayments for the beneficiary and continued to pay monthly DI benefits of about $2,500. SSA officials told us they plan to conduct follow-up work on this case on the basis of the information we provided.
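The unsuccessful-work-attempt screen that this example turns on can be expressed compactly. The sketch below is our simplification, assuming a flat list of monthly earnings and a single illustrative SGA threshold; actual determinations apply year-specific thresholds and consider additional factors, such as impairment-related work expenses and subsidies.

```python
SGA_THRESHOLD = 1000.0  # illustrative monthly SGA amount; actual amounts vary by year

def could_be_unsuccessful_work_attempt(monthly_earnings):
    """Return True if SGA-level work lasted 6 months or less, so the work
    could potentially qualify as an unsuccessful work attempt."""
    longest_run = run = 0
    for amount in monthly_earnings:
        # Track the longest continuous run of months at or above the SGA level.
        run = run + 1 if amount >= SGA_THRESHOLD else 0
        longest_run = max(longest_run, run)
    return longest_run <= 6

# In Waiting Period Example 1, earnings stayed above the SGA level for every
# month of the year, so the work cannot be an unsuccessful work attempt.
print(could_be_unsuccessful_work_attempt([22000.0] * 12))  # False
```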
Waiting Period Example 2: Beneficiary Did Not Report Earnings and SSA's Enforcement Operation Did Not Generate Alert—Potential Overpayment of $21,000
The beneficiary began work in August 2009 and continued to work through January 2010, the month that SSA began making DI benefit payments due to mental disorders. The beneficiary had substantial earnings from work during 3 months of her waiting period and continued to have SGA-level earnings during the month she started receiving benefits, but she did not report any wages to SSA as required by program regulations. No enforcement operation earnings alert was generated for her work activity during the waiting period because her waiting period occurred in 2009, but her first benefit payment was not until January 2010. Additionally, no enforcement operation earnings alert was generated for her earnings in 2010 because, SSA officials told us, her earnings were too low to generate an alert. SSA had not assessed any overpayments for the beneficiary and continued to pay monthly DI benefits as of May 2013. We identified about $21,000 in benefit payments that were potentially improper through our analysis of the beneficiary's wage records. SSA officials told us they plan to conduct follow-up work on this case.
Waiting Period Example 3: SSA Did Not Follow Its Program Rules—Potential Overpayment of $25,000
The beneficiary began working in October 2004 and remained employed through at least June 2012. SSA approved the beneficiary for DI benefits starting in December 2009 for a malignant tumor. The beneficiary had substantial earnings from work during her waiting period as well as SGA-level earnings in 9 months of the first 12 months that SSA determined her impairment prevented her from having substantial earnings from work. When the beneficiary eventually self-reported earnings in 2011, SSA initiated a CDR that discovered substantial earnings from work during the waiting period. However, SSA staff did not consider this work in accordance with its own policies, and the CDR resulted in a determination that DI benefits should continue. Specifically, according to SSA, the discovery of SGA-level wages during the waiting period should have prompted SSA staff to initiate processes for determining whether benefits should have originally been denied or if the onset date should be changed to the date the SGA-level work stopped, but the SSA staff did not do so. On the basis of this individual's substantial earnings from work during the waiting period, the revised disability determination may have resulted in a revised disability denial or revised date of disability onset. SSA ceased providing DI benefits to the beneficiary in April 2013 when the beneficiary died. Although SSA never assessed overpayments for the beneficiary, we identified about $25,000 in cash benefit payments that were potentially improper through our analysis of the beneficiary's wage records. Because the beneficiary died, her estate, or the beneficiaries of her estate, would be responsible for repaying the overpayment. SSA officials told us they plan to conduct follow-up work on this case. We estimate that SSA made potential overpayments to 15,500 individuals who were DI beneficiaries in 2010 and who worked beyond their trial work period, resulting in potential overpayments of $368 million as of January 2013.
Our estimate for the amount of payments that were potentially improper due to work activity beyond the trial work period has a margin of error of plus or minus $62 million, meaning the actual amount of payments that were potentially improper due to work activity beyond the trial work period could be as low as $306 million and as high as $430 million, with a 95 percent level of confidence. Our estimate for the number of individuals has a margin of error of plus or minus 1,500 individuals, meaning the actual number of individuals to whom SSA made payments that were potentially improper could be as low as 14,000 and as high as 17,000, with a 95 percent level of confidence. The exact number of individuals who received improper SSA disability payments cannot be determined without detailed case investigations by SSA. See appendix II for more information on the statistical estimations of overpayments for these populations. In addition to our statistical sample, we reviewed detailed DI case-file information for a nongeneralizable selection of three beneficiaries from among those in our sample that we identified to have potential overpayments for at least 36 months (3 years) due to work activity beyond their trial work period. Our case file reviews for these three beneficiaries confirmed instances in which SSA made overpayments to beneficiaries with substantial earnings from work, as discussed later in this report. SSA officials told us they plan to conduct follow-up work on these cases on the basis of the information we provided during this review. Because we selected a small number of individuals for further review, these examples cannot be projected to the population of individuals receiving potential DI benefit overpayments. As previously discussed, federal statutes and SSA regulations allow DI beneficiaries to work for a limited time without affecting their benefits. However, after completing the 9-month trial work period and entering the reentitlement period, beneficiaries who have substantial earnings from work beyond the 3-month grace period are generally no longer entitled to benefits. If SSA does not stop their benefits in a timely manner, SSA may overpay beneficiaries who are not entitled to benefits due to their work activity.
Trial Work Period Example 1: No Increased Scrutiny for Known Rule Violator—Overpayment of $57,000
The individual filed for DI benefits on the basis of personality disorders and affective disorders in July 2006, and SSA approved his claim the following day. The day after he was approved for benefits, the beneficiary began working. At no point did the beneficiary report the new wages from his employment, as required by SSA regulations. SSA's enforcement operation generated earnings alerts each year from 2008 through 2011, but SSA did not initiate a CDR until April 2011. Agency officials told us that SSA does not have any policies that dictate time limits for initiating a CDR on the basis of an earnings alert or that require a CDR to be initiated if earnings alerts are generated for several consecutive years. As a result of the CDR in 2011, SSA suspended the beneficiary's DI benefits in December 2011 and subsequently assessed an overpayment of more than $57,000 due to his work activity. SSA officials were unable to explain why a CDR was not performed until 2011, though they stated that limited resources and competing workloads may have contributed to the delay in initiating the CDR.
A month after SSA assessed the overpayment, while he continued to have substantial earnings from working for the same employer, the beneficiary applied to have his benefits reinstated, and fraudulently affirmed that he did not have substantial earnings from work. We found no evidence in SSA's files that SSA had contacted the beneficiary's employer to confirm his statement before approving his benefits. Thus, even though SSA had information documenting that the individual did not report earnings before, the agency approved the application and continued to pay DI benefits as of May 2013. Because the individual had SGA-level wages for the entire year prior to his application for reinstatement and for at least 2 months after SSA approved him for reinstated benefits, the cash benefit payments SSA made after reinstating this individual were potentially improper, though SSA had not established an overpayment for this work activity as of April 2013. To recover the prior outstanding overpayment, SSA is withholding $75 per month from the current monthly DI benefits that SSA may be improperly paying to the beneficiary. At $75 per month, it would take 63 years for SSA to recover the $57,000 overpayment, at which time the beneficiary would be well over 100 years old. SSA officials told us they plan to conduct follow-up work on this case on the basis of the information we provided.
Trial Work Period Example 2: Earnings Alerts Did Not Result in Review for 5 Years—Overpayment of $74,000
SSA approved the beneficiary for DI benefits in April 1998 for mental disorders. The beneficiary began working in October 2005 and remained employed as of August 2012. After he reported his earnings in October 2005, SSA completed a CDR and found that the beneficiary was within his trial work period. From 2007 to 2011, SSA's enforcement operation generated earnings alerts for the beneficiary. Despite knowing the beneficiary began working in 2005 and receiving 5 additional years of earnings alerts, SSA did not perform another CDR until December 2011. SSA officials were unable to explain why a second CDR was not performed for more than 5 years when it had previously identified that the beneficiary had partially completed a trial work period. However, SSA officials told us that resource constraints may have delayed this CDR. As a result of the 2011 CDR, SSA assessed over $56,000 in overpayments and ceased providing benefits to the beneficiary in June 2012. SSA also assessed a total of over $18,000 in additional overpayments for the beneficiary's two child dependents. SSA approved a repayment plan of $200 per month for the $74,000 in overpayments. SSA officials told us they plan to conduct follow-up work on this case.
Trial Work Period Example 3: Known Work Activity Not Monitored—Overpayment of $25,000
SSA approved the beneficiary for DI benefits starting in June 2005 for mental disorders. The beneficiary began working in November 2007 and remained employed through at least August 2012. In May 2008, he provided SSA pay stubs for 2 months of earnings, which showed that he was not earning substantial wages. SSA's enforcement operation generated earnings alerts in 2008 and 2009, and SSA completed a CDR in 2010 in which the agency determined that benefits should continue because he had not yet completed his trial work period. In the month following the completion of the CDR, SSA contacted the beneficiary to inform him that he had completed his trial work period.
Despite knowing that the beneficiary had completed his trial work period, SSA did not complete a subsequent CDR for more than 2 years. SSA officials told us that resource constraints may have contributed to the delay in initiating a subsequent CDR. As a result of the CDR performed 2 years later, SSA assessed overpayments of $25,000 due to work activity and stopped paying benefits. As of April 2013, the beneficiary had not made any payments toward his overpayment debt. SSA officials told us they plan to conduct follow-up work on this case. We identified instances in which the timeliness of SSA's process for identifying disqualifying work activity allowed DI overpayments to remain undetected and accrue; however, SSA is assessing opportunities to obtain more-timely earnings information and improve its work CDR process. Specifically, in the course of this review, we identified instances in which SSA did not obtain timely earnings information and did not act promptly when it did receive earnings alerts, which led to significant cash benefit overpayments. This is consistent with our prior work that found DI overpayments for beneficiaries who return to work may accrue over time because SSA lacks timely data on beneficiaries' earnings and does not act promptly when it receives earnings alerts from its enforcement operation. During this review, SSA officials told us that limited resources and competing workloads may have constrained the agency's ability to act promptly when it received earnings alerts or self-reported earnings for beneficiaries from our nongeneralizable examples described above. We also reported in April 2013 that budget decisions and the way SSA prioritizes competing demands, such as processing initial claims, contribute to challenges SSA faces in maintaining the integrity of the disability program. In 2004, we reported that SSA's lack of timely data on beneficiaries' earnings and work activity impeded its ability to prevent and detect earnings-related overpayments. To enhance SSA's ability to detect and prevent overpayments in the DI program, we recommended that SSA use more-timely earnings information in the NDNH in conducting program-integrity operations. Although SSA uses the NDNH to perform oversight of the SSI program, it does not use the NDNH to conduct oversight of the DI program. In 2009, SSA conducted a cost-effectiveness study on use of the NDNH, which estimated its return on investment would be about $1.40 for every $1 spent, or a 40 percent rate of return; however, SSA concluded in its 2009 study that this expected return on investment was low, and noted that a match with the NDNH would generate a large number of CDR alerts needing development that were not of high quality. In July 2011, we reported that due to overly pessimistic assumptions in SSA's cost-effectiveness study, it is likely that the actual savings that result from SSA's use of the NDNH could be much higher. Further, it is not clear whether this cost-benefit analysis accounted for improper payments that would be prevented by identifying work activity during the 5-month waiting period. Thus, the real return on investment could be understated. SSA agreed with our assessment and in January 2013 said that it is currently reevaluating the cost-effectiveness of using the NDNH for DI program-integrity initiatives and expects the cost-benefit analysis to be completed in the fourth quarter of fiscal year 2013.
SSA officials also stated that the agency has made improvements to its CDR process, but we were unable to determine how they might reduce improper payments due to beneficiaries' work activity because these initiatives were still being tested at the time of our review. For example, in 2010 SSA began a pilot to use what the agency refers to as a predictive model to prioritize enforcement operation earnings alerts, working cases likely to incur large work-related overpayments first. SSA officials told us the agency is planning to implement the model nationally in June 2013. Additionally, in response to a recommendation we made in a prior report, in 2012 SSA began testing a new process to use its model to identify and delay benefit increases for beneficiaries with pending work CDRs. As such, it is too early to assess what effect these initiatives may have on the prevalence and size of DI overpayments. We found that a limitation of SSA's enforcement operation allows individuals with substantial earnings from work during the waiting period to be approved for DI benefits and allows resulting DI benefit payments that were potentially improper to remain undetected by SSA. Specifically, we found that SSA's enforcement operation will not generate an alert for earnings during the waiting period if the earnings occur in a year when the beneficiary does not receive a benefit payment. For example, if a beneficiary receives her or his first benefit payment in January 2013, the enforcement operation will not generate an earnings alert for wages earned during the waiting-period months occurring in the prior year, which would be from August to December 2012. As a result, for any beneficiary whose first month of entitlement is January to May, the enforcement operation does not generate an earnings alert for at least 1 month of the waiting period. In two of the three examples we randomly selected from our sample of beneficiaries with work activity during the waiting period, SSA's enforcement operation did not generate alerts for SGA-level earnings during the waiting period because their waiting periods occurred in the year prior to their first benefit payment. These individuals were approved for benefits despite disqualifying work activity, and SSA had not detected any overpayments for these individuals at the time of our audit. For the third beneficiary we reviewed, SSA's enforcement operation generated an alert for earnings during the waiting period because the individual also received benefit payments in that year. However, this alert was generated more than 1 year after the work activity during the waiting period occurred, and in the resulting work CDR, SSA did not apply its own waiting period program rules to the work activity. Specifically, SSA approved the individual for benefits despite disqualifying work activity and did not detect and establish overpayments for this work activity when it later became aware of the work activity. Standards for Internal Control in the Federal Government states that internal controls should generally be designed to assure that ongoing monitoring occurs in the course of normal operations.
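To make this coverage gap concrete, the following minimal sketch (our illustration, which assumes simple calendar arithmetic and that the first month of entitlement coincides with the first payment, as in the example above) lists the waiting-period months that would fall outside the enforcement operation's alert coverage.

```python
def uncovered_waiting_period_months(first_payment_year, first_payment_month):
    """List the (year, month) waiting-period months the enforcement
    operation would not cover because they fall in a calendar year in
    which no benefit payment was made."""
    uncovered = []
    year, month = first_payment_year, first_payment_month
    for _ in range(5):  # step back through the 5-month waiting period
        month -= 1
        if month == 0:
            year, month = year - 1, 12
        if year < first_payment_year:  # prior year: no payment, so no alert
            uncovered.append((year, month))
    return sorted(uncovered)

# A first payment in January 2013 leaves all 5 waiting-period months
# (August through December 2012) outside alert coverage, as described above.
print(uncovered_waiting_period_months(2013, 1))
```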
SSA officials acknowledged that the systemic limitation to their enforcement operation allows potentially disqualifying work activity to remain undetected, but SSA expressed concern that modifying its existing enforcement operation may be costly. However, SSA has not assessed either the costs of such a modification or the additional program savings it might realize should such a change be implemented. Such an analysis would assist SSA in making an informed decision regarding the costs and benefits of modifying its existing enforcement operation. To the extent that such an analysis determines modifying its existing enforcement operation is cost-effective and feasible, establishing a mechanism to identify work activity performed during all months of the waiting period, including those that occur in a year when beneficiaries were not paid, may help provide SSA greater assurance that DI beneficiaries are eligible to receive benefits. The DI program provides an important safety net for disabled beneficiaries. However, during a time of growing concerns about the solvency of the DI trust fund, it is important that SSA take every opportunity to ensure that only eligible beneficiaries receive payments under this program and that additional actions are taken to improve the financial status of the program. Without reliable and timely earnings information on the work activity of individuals applying for DI benefits, SSA risks making overpayments to individuals whose work activity indicates they are not disabled and therefore ineligible for disability benefits. While we cannot generalize the examples we found, SSA's inability to identify work activity during the waiting period may result in overpayments to beneficiaries who are ineligible for benefits. Assessing the costs and savings associated with establishing a mechanism to identify work activity during all months of the waiting period would help SSA to determine whether establishing such a mechanism would be cost-effective and feasible. To the extent that it is determined to be cost-effective and feasible, implementing a mechanism to identify work activity performed during all months of the waiting period, including those that occur in a year when benefits were not paid, may help provide SSA greater assurance that DI beneficiaries are eligible to receive benefits. To improve SSA's ability to detect and prevent potential DI cash benefit overpayments due to work activity during the 5-month waiting period, we recommend that the Commissioner of Social Security take the following action: assess the costs and feasibility of establishing a mechanism to detect potentially disqualifying earnings during all months of the waiting period, including those months of earnings that the agency's enforcement operation does not currently detect, and implement this mechanism, to the extent that an analysis determines it is cost-effective and feasible. We provided a draft of this report to the Office of the Commissioner of SSA. In its written comments, SSA concurred with our recommendation and stated that it would conduct the recommended analysis. In addition, SSA expressed some concerns about our methodology for estimating potential improper payments due to beneficiaries' work activity, which are summarized below. The agency also provided general and technical comments, which have been incorporated into the report, as appropriate. SSA's comments are reproduced in full in appendix IV.
In commenting on our recommendation to assess the costs and feasibility of establishing a mechanism to detect potentially disqualifying earnings during all months of the waiting period and to implement the mechanism, to the extent that it is cost-effective and feasible, SSA requested the data we gathered as part of this study to help the agency assess the costs and feasibility of establishing such a mechanism. At SSA's request, we will provide SSA the population of individuals with earnings during the 5-month waiting period that we identified from our match of 2010 NDNH earnings data and SSA's 2010 DI program data. During the course of this audit, we also provided SSA with the SSNs of the individuals in our two random samples. These data would allow SSA to perform the recommended analysis using the NDNH wage data, which we obtained from SSA. We note that SSA's assessment would benefit from using the most recently available wage data, such as 2013 data that are directly available to SSA from the NDNH. In addition to this data request, SSA raised several concerns about our methodology and asserted that our inability to replicate the process it uses to make SGA determinations may lead to substantial overstatement of our estimate of potentially improper payments. First, SSA noted that our review does not consider program features, such as unsuccessful work attempts and Impairment Related Work Expenses (IRWE), or whether the work involved subsidies or special conditions. As mentioned in the report, SSA's process for determining SGA and its policies for determining whether individuals remain entitled to benefits despite potentially disqualifying work activity involve a consideration of all the facts and circumstances surrounding a case, including medical data that doctors and hospitals are not required to share with GAO for purposes of this audit. As such, our objective was to estimate the extent to which individuals received DI benefit payments that were potentially improper due to their work activity. To do this, we used wage data to identify two populations of individuals with earnings beyond program limits; we then drew a random, generalizable sample of individuals from each population and compared wage information from their employers to DI program information from SSA to develop estimates of potential overpayments in each population. Because our analysis of potential overpayments is limited to earnings data from the NDNH and DI payments from SSA, potential overpayments for each sample are estimated. Thus, we continue to believe that the methodology we applied using the data we were able to access led us to valid estimates of potentially improper payments due to beneficiaries' work activity. Second, SSA noted that we assume that every payment made after the 5-month waiting period is likely to be an improper payment instead of reestablishing the disability onset date, as its policy allows in some instances. However, our method of calculating potential overpayments is consistent with current DI program policies and interviews with SSA officials who stated that individuals who perform substantial gainful activity during the waiting period are not disabled and therefore not entitled to benefits; thus, all DI payments made to those individuals are potentially improper payments.
Further, determining which individuals in our samples, if any, should have their onset date reestablished despite disqualifying work activity in the waiting period was not reasonably possible because making such a determination would involve a consideration of medical data that doctors and hospitals were not required to share with GAO for purposes of this audit. Third, SSA stated that payment for medical leave may have been included in some of the payroll data we used for our analysis and suggested that this may have led to a substantial overstatement of estimated improper payments. However, our calculation of earned income excludes material payments for medical leave, as described in detail in appendix II. Thus, we do not expect that these payments or the other concerns SSA raises in its letter led to a substantial overstatement of potential overpayments, as SSA suggested. Finally, SSA noted that improving payment accuracy is critical to preserving the public's trust in the DI program and that available resources may affect SSA's ability to increase its payment accuracy. We recognize SSA's ongoing efforts to improve the program and that federal resources are currently constrained. However, if SSA does not make changes to its existing processes for identifying beneficiaries' work activity where the benefits of such changes exceed the costs, the agency may remain unable to detect work activity in a timely manner and may continue to make improper payments to individuals whose work activity indicates they are not entitled to benefits. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Commissioner of Social Security and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. As shown in figure 4 below, in fiscal year 2010, the total amount owed to the Social Security Administration (SSA) for Disability Insurance (DI) overpayments was $5.4 billion. This debt increased through fiscal year 2012, by which time individuals owed over $6 billion in overpayments of DI benefits. This report (1) estimates the extent to which individuals received disability insurance (DI) benefit payments that were potentially improper due to work activity performed during the 5-month waiting period or beyond the 9-month trial work period; and (2) assesses the extent to which the Social Security Administration's (SSA) enforcement operation detects potentially disqualifying work activity during the waiting period. The exact number of individuals who received improper disability payments and the exact amount of improper payments made to those individuals cannot be determined without detailed case investigations by SSA. Thus, we refer to "benefit payments that were potentially improper" and "potential overpayments" throughout this report. As part of this work, we also provide examples of beneficiaries with work activity during the waiting period or beyond the trial work period to help illustrate the circumstances under which SSA made DI payments that were potentially improper to beneficiaries.
We plan to assess the extent to which the National Directory of New Hires (NDNH) indicates potential overpayments in SSA's Supplemental Security Income (SSI) program in future work, which will be available this year. To determine the extent to which the NDNH provides evidence that individuals received DI benefit payments that were potentially improper due to work activity, we matched the NDNH quarterly wage data with our extract of SSA's Master Beneficiary Record (MBR) as of December 2010. To ensure the best quality matches, we matched only against Social Security Numbers (SSN) that the NDNH categorizes as "verified" through SSA's Enumeration Verification System process. Thus, we did not match against SSNs that the NDNH categorizes as "unverified" or "nonverifiable." The match process identified two populations with potential overpayments due to work activity. The first population consisted of individuals who received potential overpayments due to substantial gainful activity (SGA) level earnings during the 5-month waiting period. The details of this match are described in the section below titled "Wait Period Overpayments." The second overpayment population consisted of individuals who received potential overpayments due to SGA-level earnings beyond the trial work period. The details of this match are described in the section below titled "Trial Work Period Overpayments." Sections 452(a)(9) and 453(a)(1) of the Social Security Act required the Secretary of Health and Human Services to establish and maintain the Federal Parent Locator Service, which includes the NDNH database. The NDNH database contains employment data on newly hired employees (W4), quarterly wage (QW) data on individual employees, and unemployment insurance (UI) data. The federal Office of Child Support Enforcement (OCSE) matches case information from state child support enforcement agencies against the NDNH and returns information on the case to the appropriate state or states. NDNH data are deleted after 24 months, as required by Section 453(i) of the Social Security Act. The data reported to OCSE for the NDNH come from several sources. Employers report W4 data to the State Directories of New Hires, which then report them to OCSE. UI data originate with the State Workforce Agencies, which then send data to the State Directories of New Hires, which send data to OCSE. The quarterly wage data are reported by employers to the State Workforce Agencies for their state, which in turn report them to the State Directories of New Hires (sometimes colocated with the State Workforce Agencies), which then report the information to OCSE. Federal agency W4 and QW data are reported directly to OCSE. The timing and nature of NDNH earnings data we received present limitations to the data's capacity to identify SGA in accordance with SSA's complex program rules. First, the quarterly wage amounts on the NDNH represent 3 months of earnings; however, the statute for evaluating SGA-level earnings requires SSA to use monthly earnings amounts. To facilitate our analysis, we calculated monthly earnings for each month in a quarter by dividing the quarterly wage amount in the NDNH by 3. For example, if the NDNH reported quarterly earnings of $3,000 in the first quarter of 2010, we calculated the monthly earnings to be $1,000 for January, February, and March of 2010. This monthly computed earnings amount could differ from the actual monthly earnings.
For instance, using the previous example, the actual monthly earnings in January 2010 could be $3,000, and actual earnings in February or March could be $0. Second, when SSA evaluates earnings to determine SGA for DI beneficiaries, SSA counts earnings when they are earned, not paid; however, amounts of earnings on the NDNH for a particular quarter could be paid in that quarter, but earned in prior quarters. In addition to these timing limitations, the NDNH quarterly earnings data may contain payments not related to work activity, such as paid time off, long-term disability payments, or posttermination compensation; however, SSA's assessment of SGA generally involves doing significant physical or mental activities, rather than receiving payments not related to work. To account for these limitations, we drew a simple random sample from each of the potential overpayment populations and contacted the employers that reported the earnings to determine the exact timing, amount, and nature of the earnings for the beneficiaries in our sample. With these simple random samples, each member of the study populations had a nonzero probability of being included, and that probability could be computed for any member. Each sample element was subsequently weighted in the analysis to account statistically for all the members of the population, including those who were not selected. Additional details on our sample work are described in the "Wait Period Overpayments" and "Trial Work Period Overpayments" sections below. As mentioned, it is impossible to determine from reported earnings alone the extent to which SSA made improper disability benefit payments to these individuals. To adequately assess an individual's work status, a detailed evaluation of all the facts and circumstances should be conducted for all cases. This evaluation may necessitate contacting the beneficiary, the beneficiary's employer, and the beneficiary's physician to evaluate the nature of the work performed. This evaluation may also consider certain impairment-related work expenses, which were not considered in our analysis. On the basis of this comprehensive evaluation of all facts and circumstances surrounding a case, SSA can determine whether the individual is entitled to continue to receive disability payments or have such payments suspended. Our analysis of the NDNH match file identified individuals who were in current pay status in the DI program as of December 2010 and had computed monthly earnings that exceeded the corresponding monthly SGA threshold for any of the 5 months prior to the individual's DI date of entitlement to disability. We included individuals who received potential overpayments due to work activity during the waiting period on the basis of the following criteria: 1. monthly computed earnings were greater than the corresponding SGA threshold during any month of the individual's 5-month waiting period, and 2. DI payment records on the MBR showed that DI benefits were paid to the individual during any of the 36 months for which we had DI payment data from the MBR. Thus, to be included in our Wait Period overpayment population, individuals had to be in current pay status as of December 2010 and have at least 1 month of potential overpayments as defined by the criteria above. Our analysis determined there were 83,179 individuals meeting these potential-overpayment criteria.
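A minimal sketch of the computed-monthly-earnings step and the waiting-period screen described above follows. It assumes quarterly wages keyed by (year, quarter) and a single illustrative SGA threshold; the function names and threshold are ours, and the actual analysis applied the statutory monthly SGA amounts described in app. III.

```python
def computed_monthly_earnings(quarterly_wages):
    """Spread each NDNH quarterly wage amount evenly across its 3 months.

    quarterly_wages: dict mapping (year, quarter) -> total reported wages.
    Returns a dict mapping (year, month) -> computed monthly earnings.
    """
    monthly = {}
    for (year, quarter), amount in quarterly_wages.items():
        for month in range(3 * quarter - 2, 3 * quarter + 1):
            monthly[(year, month)] = amount / 3.0
    return monthly

def flag_waiting_period(monthly, waiting_months, sga_threshold):
    """Apply the first criterion above: computed earnings exceed the SGA
    threshold in any of the 5 waiting-period months."""
    return any(monthly.get(m, 0.0) > sga_threshold for m in waiting_months)

# The report's example: $3,000 reported for the first quarter of 2010 is
# treated as $1,000 each for January, February, and March 2010.
monthly = computed_monthly_earnings({(2010, 1): 3000.0})
print(monthly[(2010, 1)], monthly[(2010, 2)], monthly[(2010, 3)])  # 1000.0 each

# With an illustrative SGA threshold of $980, each of those months would be
# flagged if it fell within the beneficiary's 5-month waiting period.
print(flag_waiting_period(monthly, [(2010, 1), (2010, 2), (2010, 3)], 980.0))  # True
```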
Because SGA-level earnings during the waiting period would result in a denial of eligibility for DI benefits, we considered all the benefits paid to the individual as potential overpayments. Next, we drew a simple random sample of 133 individuals from the Wait Period overpayment population and contacted the employers to verify the timing and nature of the wages paid to the individuals. We received completed requests from employers for 98 individuals, for a response rate of 75 percent. Nonresponses included sample items whose employers we could not locate, employers who were no longer in business, and employers who refused to cooperate with our requests. We asked employers who provided earnings data to identify payments that were not related to work activity, such as paid time off, extended sick leave, or posttermination compensation. Many employers provided payroll reports indicating hours and payments by payment category, such as total payments for hours in regular work, hours in overtime work, and hours of vacation time. Using the earnings data employers provided, we calculated monthly earned income for each sample item and identified whether the beneficiaries' monthly earned income exceeded SGA during any of the 5 months of their waiting period. Because employers use different payroll cycles and provided different levels of detail in their responses, we adhered to the following guidelines to standardize our calculation of monthly earned income: 1. In consideration of SSA guidance regarding the timing of payments, we calculated monthly earned income according to the period in which payments were earned rather than when they were paid. However, if an employer's payroll reports indicated only the dates when payments were issued, we calculated monthly earned income according to those dates. 2. In consideration of SSA guidance regarding the nature of payments, our calculation of monthly earned income excludes payments not related to work activity, such as payments for paid time off, vacation pay, and extended sick leave, if those payments covered the entire pay period, as defined by the employer. Thus, payments not related to work activity that were episodic, such as sick pay or vacation pay received during a pay period when the individual also performed work, are included in our calculation of monthly earned income. We then obtained additional DI program data on the MBR to estimate total program overpayments to date for our sample items. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. To provide examples of the circumstances under which SSA made potential overpayments to individuals with work activity during the 5-month waiting period, we randomly selected three beneficiaries from our waiting period sample who were among those that met the following criteria: 1. the individual's employer reported that the individual received SGA-level earnings during the 5-month waiting period, and 2. the individual's employer reported additional SGA-level earnings that were earned after the waiting period and within 1 year of their date of disability onset.
According to statutes and SSA regulations, when individuals have SGA-level work activity during the waiting period, this normally means they will be considered not disabled and therefore not eligible for benefits. For these three individuals, we obtained detailed DI case-file information from SSA to determine the facts and circumstances surrounding their potential overpayments. Because we selected a small number of individuals for further investigation, the results cannot be projected to the population of individuals receiving DI overpayments due to SGA in the 5-month waiting period.

Our analysis of the NDNH match file identified individuals who were in current pay status in the DI program as of December 2010 and who had SGA-level earnings after the completion of their trial work period (TWP) and 3 grace period months. We determined a month to be a TWP month if the monthly computed earnings after the date of entitlement were greater than the TWP threshold. After we identified 9 TWP months, we identified 3 grace period months where monthly computed earnings were greater than the corresponding SGA threshold. Next, we identified potential overpayment months that met the following criteria:

1. monthly computed earnings after the 3-month grace period were greater than the corresponding SGA threshold, and

2. DI payment records on the MBR showed that DI benefits were both due and paid for that month.

Thus, to be included in our TWP overpayment population, beneficiaries had to be in current pay status as of December 2010 and have at least 1 month of potential overpayments as defined by the criteria above. Our analysis determined there were 19,208 individuals who met these overpayment criteria. Our analysis accounts for the higher SGA amounts for individuals whose disability is blindness; the monthly earnings amounts that demonstrate SGA are described in greater detail in app. III. SSA may hold payments to beneficiaries for a month under certain circumstances, such as when a payment address is in question. In these instances, the MBR may indicate benefits due, but not paid for that month.

Our TWP overpayment population is subject to two limitations. First, because our analysis was limited to the months for which we had both DI payment records on the MBR and earnings data on the NDNH, individuals whose qualifying SGA-level months fell outside that time frame may not be included in our TWP overpayment population. For example, if NDNH earnings data indicated the beneficiary completed all 9 TWP months and 3 grace period months from October 2009 to September 2010 (i.e., 12 months), but the next month of SGA-level earnings occurred in October 2010, which is outside our 15-month time frame, the beneficiary was not included in our TWP overpayment population. Figure 5 below illustrates the 15 months from July 2009 to September 2010 for which our analysis captured both DI payment records on the MBR and earnings data on the NDNH. Second, because our calculation of potential overpayment months includes only months where DI payment records on the MBR showed that DI benefits were both due and paid for that month, our TWP overpayment population does not include individuals whose only months of SGA beyond the TWP occurred in months where benefits were due, but not paid. Similarly, our TWP overpayment population does not include individuals whose only months of SGA beyond the TWP occurred in months where benefits were paid, but not due.

Next, we drew a simple random sample of 130 individuals from the TWP overpayment population and contacted the employers to verify the wages paid to the individuals. We received completed responses from employers for 98 individuals, for a response rate of 76 percent.
Nonresponses included sample items whose employers we could not locate, employers who were no longer in business, and employers who refused to cooperate with our requests. We asked employers who provided earnings data to identify payments that were not related to work activity, such as paid time off, extended sick leave, or posttermination compensation. Many employers provided payroll reports indicating hours and payments by payment category, such as total payments for hours in regular work, hours in overtime work, and hours of vacation time. Using the earnings data employers provided, we calculated monthly earned income for each sample item and identified whether the beneficiaries' monthly earned income exceeded SGA during any month of the extended period of eligibility. Because employers use different payroll cycles and provided different levels of detail in their responses, we adhered to the following guidelines to standardize our calculation of monthly earned income:

1. In consideration of SSA guidance regarding the timing of payments, we calculated monthly earned income according to the period in which payments were earned rather than when they were paid. However, if an employer's payroll reports indicated only the dates when payments were issued, we calculated monthly earned income according to those dates.

2. In consideration of SSA guidance regarding the nature of payments, our calculation of monthly earned income excludes payments not related to work activity, such as payments for paid time off, vacation pay, and extended sick leave, if those payments covered the entire pay period, as defined by the employer. Thus, payments not related to work activity that were episodic, such as sick pay or vacation pay received during a pay period when the individual also performed work, are included in our calculation of monthly earned income.

We then obtained additional DI program data on the MBR to estimate total program overpayments to date for our sample items. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval.

To provide examples of the circumstances under which SSA made potential overpayments to individuals with work activity beyond their trial work period, we randomly selected three beneficiaries from our trial work period sample who were among those beneficiaries receiving potential overpayments for at least 36 months (3 years). For these three individuals, we obtained detailed DI case-file information from SSA to determine the facts and circumstances surrounding their potential overpayments. Because we selected a small number of individuals for further investigation, the results cannot be projected to the population of individuals receiving DI overpayments due to SGA beyond the trial work period.

To determine the reliability of the SSA disability records and NDNH quarterly wage records, we reviewed documentation related to these databases and interviewed officials responsible for compiling and maintaining relevant DI and NDNH data. In addition, we performed electronic testing to determine the validity of specific data elements in the databases that we used to perform our work.
We also reviewed detailed wage data from employers and DI program data from SSA for the statistical samples of individuals selected as described above to confirm that quarterly wage data from the NDNH indicated payments that were potentially improper from the DI program. On the basis of our discussions with agency officials and our own testing, we concluded that the data elements used for this report were sufficiently reliable for our purposes.

To assess the extent to which SSA's enforcement operation detects potentially disqualifying work activity during the waiting period, we interviewed officials from SSA regarding the agency's internal controls for detecting and preventing overpayments due to work activity. We interviewed agency officials from SSA's policy offices to confirm our interpretations of SSA regulations and policies regarding work activity during the waiting period and beyond the trial work period. We also interviewed officials from SSA's operations offices to confirm the actions SSA took while reviewing the work activity for our nongeneralizable examples. We also examined SSA's mechanisms to detect potentially disqualifying work activity and compared them with Standards for Internal Control in the Federal Government.

We conducted this performance audit from April 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives.

To be eligible for disability benefits, a person must be unable to engage in substantial gainful activity (SGA). A person who is earning more than a certain monthly amount (net of impairment-related work expenses) is ordinarily considered to be engaging in SGA. During a trial work period, a beneficiary receiving Social Security disability benefits may test her or his ability to work and still be considered disabled. The Social Security Administration (SSA) does not consider services performed during the trial work period as showing that the disability has ended until services have been performed in at least 9 months (not necessarily consecutive) in a rolling 60-month period and 1 additional month, at an SGA level, after the trial work period has ended (this rule is sketched in code below). Table 1 shows the amount of monthly earnings that trigger a trial work period month for calendar years 2001–2012.

The amount of monthly earnings considered as SGA depends on whether a person's disability is for blindness or some other condition. The Social Security Act specifies a higher SGA amount for statutorily blind individuals and a lower SGA amount for nonblind individuals. Both SGA amounts generally change with changes in the national average wage index. Table 2 shows the amount of monthly earnings that ordinarily demonstrate SGA for calendar years 2001–2012.
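As referenced above, here is a minimal Python sketch that pairs the rolling-window test just described with the month-classification steps from the TWP overpayment analysis discussed earlier. The sequential scan, field names, and threshold parameters are illustrative assumptions, not SSA's or GAO's actual systems.

    def trial_work_period_complete(twp_month_indexes):
        """True once 9 trial work months (not necessarily consecutive) fall
        within a rolling 60-month period.

        twp_month_indexes: sorted month indexes (e.g., year * 12 + month) in
        which earnings exceeded the TWP threshold."""
        for i in range(len(twp_month_indexes) - 8):
            # Nine qualifying months whose first and ninth occurrences span
            # no more than 60 calendar months satisfy the rolling window.
            if twp_month_indexes[i + 8] - twp_month_indexes[i] < 60:
                return True
        return False

    def potential_overpayment_months(months, twp_threshold, sga_threshold):
        """Classify months after the date of entitlement in order: 9 TWP
        months, then 3 grace period months, then any later SGA-level month
        in which benefits were both due and paid is a potential overpayment
        month. (For brevity, this scan omits the rolling 60-month test,
        which the function above illustrates separately.)

        months: ordered list of (month, computed_earnings, due_and_paid)."""
        twp, grace, flagged = 0, 0, []
        for month, earnings, due_and_paid in months:
            if twp < 9:
                if earnings > twp_threshold:
                    twp += 1  # a trial work period month
            elif grace < 3:
                if earnings > sga_threshold:
                    grace += 1  # one of the 3 grace period months
            elif earnings > sga_threshold and due_and_paid:
                flagged.append(month)
        return flagged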
SSA's DI program is the nation's largest cash assistance program for workers with disabilities. Though program rules allow limited work activity, some work activity indicates beneficiaries are not disabled and therefore not entitled to DI benefits. Consequently, SSA might overpay beneficiaries if the agency does not detect disqualifying work activity and suspend benefits appropriately. GAO was asked to study potential DI overpayments. GAO examined the extent to which (1) the NDNH indicates that individuals received potential DI overpayments; and (2) SSA's enforcement operation detects potentially disqualifying work activity during the waiting period. GAO drew random, generalizable samples of individuals from those whose earnings on the NDNH were beyond program limits and compared wages from their employers to DI program data to identify potential overpayments. To illustrate the circumstances in which SSA made potential DI overpayments, GAO reviewed case files for a nongeneralizable selection of six individuals--three who worked during their waiting period, and three who received potential overpayments for at least 3 years.

On the basis of analyzing Social Security Administration (SSA) data on individuals who were Disability Insurance (DI) beneficiaries as of December 2010 and earnings data from the National Directory of New Hires (NDNH), GAO estimates that SSA made $1.29 billion in potential cash benefit overpayments to about 36,000 individuals as of January 2013. The exact number of individuals who received improper disability payments and the exact amount of improper payments made to those individuals cannot be determined without detailed case investigations by SSA. These DI beneficiaries represent an estimated 0.4 percent of all primary DI beneficiaries as of December 2010. Using a different methodology that includes additional causes of overpayments not considered in GAO's analysis, SSA estimated its DI overpayments in fiscal year 2011 were $1.62 billion, or 1.27 percent of all DI benefits in that fiscal year.

GAO estimated DI program overpayments on the basis of work activity performed by two populations of individuals. The first population received potential overpayments due to work activity during the DI program's mandatory 5-month waiting period--a statutory program requirement to help ensure that SSA does not pay benefits to individuals who do not have long-term disabilities. Prior to receiving benefits, individuals must complete a 5-month waiting period, during which the individual's earnings may not exceed the substantial gainful activity level in any month in order to be eligible for DI benefits. Earnings that exceed program limits during the waiting period indicate that individuals might not have long-term disabilities. The second population received potential overpayments due to work activity beyond the program's trial work period--the trial work period consists of up to 9 months in which a DI beneficiary may return to work without affecting her or his benefits. However, beneficiaries whose earnings consistently exceed program limits after completing a trial work period are generally no longer entitled to benefits.

SSA uses its enforcement operation to generate alerts for potentially disqualifying earnings, but the agency's enforcement operation does not generate alerts for earnings that occur in all months of the waiting period, which allows potentially disqualifying work activity to remain undetected.
Specifically, GAO found that SSA's enforcement operation will not generate an alert for earnings during the waiting period if the earnings occur in a year when the beneficiary does not receive a benefit payment. For example, in two of the nongeneralizable case studies GAO reviewed, SSA's enforcement operation did not generate an alert for potentially disqualifying work activity during the waiting period because these individuals' waiting periods occurred in the year prior to their first benefit payment. GAO obtained earnings records from these individuals' employers that show they worked continually both during and after their waiting periods at a level of work that would normally result in a denial of benefits. GAO also reviewed information for individuals who worked beyond their trial work period and found that SSA had identified and established overpayments for these individuals. SSA officials stated that modifying its enforcement operation could be costly, but the agency has not assessed the costs of doing so. To the extent that it is cost-effective and feasible, establishing a mechanism to detect earnings during all months of the waiting period would strengthen SSA's enforcement operation.

GAO recommends that SSA assess the costs and feasibility of establishing a mechanism to detect potentially disqualifying earnings during all months of the waiting period and implement the mechanism as appropriate. SSA concurred, but raised concerns about GAO's estimates. GAO believes its estimates are valid as discussed in this report.
Traditionally, employers that sponsored retirement plans generally established "defined benefit" plans. Under such plans, participation is generally automatic for eligible workers, and retirement benefits are established by a formula, often based on a worker's salary and years of service. Since the 1980s, defined contribution plans--most prominently the 401(k) plan--have supplanted defined benefit plans as the dominant type of private-sector retirement plan. Under defined contribution plans, workers typically must decide whether or not to participate, how much to contribute, and how to invest plan assets from a range of options provided under the plan. Under a 401(k) plan, if the employee does not participate, if contributions made to an employee's account are insufficient, or if the investments that an employee chooses yield an inadequate return, the employee may have retirement income that is insufficient to maintain his or her desired standard of living.

As defined contribution plans emerged as the dominant form of retirement plan, the percentage of the population covered by employer-sponsored plans changed very little--remaining at about half of the workforce. As figure 1 shows, Current Population Survey data reveal that about 48 percent of the total U.S. workforce was not covered by an employer-sponsored plan in 2007. About 40 percent worked for an employer that did not sponsor a plan, and about 8 percent did not participate in the plan that their employer sponsored. According to the Current Population Survey, certain segments of the working population have consistently had much lower rates of employment with employers sponsoring a plan, and lower participation rates, than the working population overall. As figure 2 illustrates, larger portions of certain worker groups, such as lower-income workers, younger workers, workers employed by smaller companies, and part-time workers lack coverage compared to all full-time workers.

Workers may choose not to enroll, or delay enrolling, in a retirement plan for a number of reasons. For example, according to a recent Congressional Research Service presentation of data from the Survey of Income and Program Participation, most non-participating workers whose employer sponsored a plan said they thought--in some cases, incorrectly--they were ineligible. The report also found that substantial numbers of employees fail to participate because they believe they cannot afford to contribute to the plan. For example, 19 percent of nonparticipating respondents cited this reason, and 10 percent said they did not want to tie up their money. In recent years, exponents of "behavioral economics" have noted that many non-participants may not have made a specific decision, but rather fail to participate because of a tendency to procrastinate and follow the path that does not require an active decision. Further, some workers may not participate because of more immediate savings objectives--such as saving for education or a home--and, in the case of lower income workers, the prospect that Social Security benefits will replace a relatively high percentage of income in their retirement years. For example, a recent analysis by the Investment Company Institute concluded that lower income workers are less likely to save for retirement in part because Social Security benefits replace a higher proportion of their pre-retirement earnings.
In recent years, automatic enrollment has been advocated as a way to encourage greater participation in 401(k) plans among the portion of the workforce who have access to such plans but opt not to participate. Typically, under 401(k)s and some other types of defined contribution plans, workers have been required to decide whether or not to join a plan, to specify their saving contribution rates, and to select investments from the range of investment options offered by the plan. Under automatic enrollment, in contrast, a worker would be enrolled in a plan unless she or he explicitly opted out of the plan. Plan sponsors that adopt automatic enrollment must specify a default contribution rate--the portion of an employee's salary that will be deposited in the plan--that applies to employees who do not choose a different contribution rate. Also, plan sponsors must select a default investment--the fund or other vehicle into which deferred savings will be invested--unless the employee specifies an investment or investments from those available under the plan. Employers may also adopt an automatic escalation policy, under which an employee's contribution rates would be automatically increased at periodic intervals, such as annually. For example, if the default contribution rate is 3 percent of pay, a plan sponsor may choose to increase an employee's rate of saving by 1 percent per year, up to some maximum, such as 6 percent. While a plan sponsor that adopts an automatic enrollment policy must specify a default contribution rate and a default investment, plan features such as these and automatic escalation may also be adopted in the absence of an automatic enrollment policy.

Automatic enrollment has not been a traditional feature of 401(k) plans and, prior to 1998, plan sponsors feared that adopting automatic enrollment could lead to plan disqualification. However, in 1998, the Internal Revenue Service (IRS) addressed this issue by stating that a plan sponsor could automatically enroll newly hired employees and, in 2000, clarified that automatic enrollment is permissible for current employees who have not enrolled. Nonetheless, a number of considerations inhibited widespread adoption of automatic enrollment, including remaining concerns such as liability in the event that the employee's investments under the plan did not perform satisfactorily, and concerns about state laws that prohibit withholding employee pay without written employee consent. More recently, provisions of the Pension Protection Act of 2006 (PPA) and subsequent regulations further facilitated the adoption of automatic enrollment by providing incentives for doing so and by protecting plans from fiduciary and legal liability if certain conditions are met. In September 2009, the Department of the Treasury announced IRS actions designed to further promote automatic enrollment and the use of automatic escalation policies.

Anti-Discrimination Safe Harbor 401(k): Plans that adopt automatic enrollment may be exempt from required annual testing to ensure that the plan does not discriminate in favor of highly compensated employees. To obtain safe harbor protection, plans must adopt automatic enrollment as well as other plan features and policies.
For example, the plan must:

- notify affected employees about automatic contributions;
- defer at least 3 percent of pay in the first year;
- automatically increase contributions by 1 percent each subsequent year, to a minimum of 6 percent and a maximum of 10 percent;
- invest savings in a type of investment vehicle identified in Department of Labor regulations as a Qualified Default Investment Alternative (QDIA); and
- match 100 percent of the first 1 percent of employee contributions, and 50 percent of contributions beyond 1 percent, up to 6 percent of wages (these default and matching formulas are sketched in code below).

Protection from Employee Retirement Income Security Act of 1974 (ERISA) Fiduciary Liability: In the absence of direction from an employee, plans that automatically invest contributions in a QDIA are treated as if the employee exercised control over management of their savings in the plan. As a result, plans that comply with Department of Labor regulations pertaining to QDIAs will not be liable for any loss that occurs as a result of such investments.

Permissible Withdrawals: An automatically enrolled worker has 90 days to opt out and withdraw any contributions (including the earnings on those contributions). These amounts will not be subject to the extra tax that normally applies to distributions received before age 59½ (I.R.C. § 414(w)(2)(B)).

Protection from State Wage-Garnishment Laws: PPA preempts any state law that would directly or indirectly prohibit or restrict the inclusion of an automatic enrollment arrangement in a plan.

Final regulations issued by the U.S. Department of Labor specify four categories of QDIAs: (1) a product with a mix of investments that takes into account an individual's age (such as a target-date fund); (2) an investment service that allocates assets according to an individual's age (such as a managed account); (3) a product with a mix of investments that takes into account the characteristics of the group of employees as a whole, rather than each individual (such as a balanced fund); and (4) a capital preservation product, which a sponsor may use as a QDIA only for the first 120 days of an individual's participation, to simplify administration if the worker opts out of the plan (29 C.F.R. § 2550.404c-5(c)(4)).

Other proposals have been put forth with the intent of broadening the practice of saving for retirement among workers whose employers do not sponsor a retirement plan. One such proposal is the automatic IRA, which would require employers that do not sponsor a retirement plan to facilitate direct deposit or payroll-deduction savings for all employees. To maximize participation, employees would be automatically enrolled, but would be permitted to opt out. Legislative proposals to establish an automatic IRA requirement were introduced in Congress in 2006 and 2007, and the concept of an automatic IRA was also mentioned in the President's 2010 Budget proposal. In addition, legislative proposals have been introduced in some states' legislatures that would involve state governments in facilitating payroll-deduction retirement plans or IRAs for employers that do not already offer them. According to the architect of the state plan concept, a state could play an intermediary role to pool assets and share expenses among many plan sponsors, thus lowering costs.

Existing studies show that automatic enrollment significantly increases participation rates in 401(k) plans, although beneficial effects of automatic enrollment may depend on accompanying policies designed to ensure adequate savings and appropriate investment.
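As referenced above, the safe harbor's default deferral schedule and matching formula lend themselves to a short worked sketch. The Python below is illustrative only: the function names and the annual stepping are assumptions drawn from the provisions as described above, not statutory text or any plan's actual terms.

    def safe_harbor_default_rate(year_of_participation):
        """Default deferral schedule per the provisions above: at least 3
        percent of pay in year one, rising 1 percentage point each subsequent
        year to at least 6 percent, capped at 10 percent."""
        return min(0.03 + 0.01 * (year_of_participation - 1), 0.10)

    def safe_harbor_match(deferral_rate, pay):
        """Employer match per the provisions above: 100 percent of the first
        1 percent of pay deferred, plus 50 percent of deferrals beyond that,
        counting deferrals only up to 6 percent of pay."""
        matched = min(deferral_rate, 0.06)
        match_rate = min(matched, 0.01) + 0.5 * max(matched - 0.01, 0.0)
        return match_rate * pay

    # For example, an employee earning $40,000 who stays at the year-four
    # default rate of 6 percent would receive a match of 3.5 percent of
    # pay, or $1,400.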
While defined contribution plan sponsors have increasingly adopted automatic enrollment in recent years, this approach may not be suitable for all plan sponsors, and available data show that some plan sponsors have not augmented automatic enrollment policies with other policies designed to ensure adequate savings.

According to analyses we reviewed, automatic enrollment policies result in considerably increased 401(k) participation rates for plans adopting them, with some participation rates reaching as high as 95 percent. For example, one study followed comparison groups hired before and after a company adopted automatic enrollment for new employees only and compared the participation rates of the two groups. The participation rate for those hired before automatic enrollment was adopted was 37 percent at 3 to 15 months of tenure, compared with an 86 percent participation rate for the group hired after automatic enrollment with a similar amount of tenure. According to the studies we reviewed, automatic enrollment has this effect because many people find it easier to delay the decision to enroll in a plan out of inertia, procrastination, feelings of intimidation about making savings and investment decisions, or other factors. Workers may intellectually understand the importance of saving for retirement but have trouble overcoming their own inertia to start saving or to save effectively. A team of researchers found that under automatic enrollment, workers' inertia works in favor of saving for retirement because workers need to do little or nothing to participate in their employers' plan. Further, workers may also decline to participate, in part, because they believe the decision to participate in a 401(k) plan requires time-consuming and complex decisions, such as choosing how much to contribute and how to invest their contributions. Automatic enrollment, through its default contribution rates and default investment vehicles, offers an easier way to start saving. Table 1 shows overall participation rate increases reported in the various studies we reviewed.

Three of the studies also found that automatic enrollment has a significant effect on subgroups of workers with relatively low participation rates, such as lower-income and younger workers. For example, the Fidelity Investments study found that 30 percent of workers aged 20 to 29 were participating in plans without automatic enrollment. In plans with automatic enrollment, the participation rate for workers in that age range was 77 percent, an increase of 47 percentage points. In addition, four of the studies found that automatic enrollment policies reduced the disparities in participation rates between certain groups of workers. For example, the Madrian and Shea study examined participation rates by race and ethnicity and found that, among workers hired under a voluntary enrollment only policy, Hispanic workers had the lowest participation rate at 19.0 percent, blacks had a slightly higher participation rate of 21.7 percent, and whites had the highest participation rate at 42.7 percent. Under automatic enrollment, however, the participation rates for Hispanic and black employees nearly quadrupled to 75.1 percent and 81.3 percent, respectively, narrowing the gap with white workers, whose participation rate more than doubled, increasing to 88.2 percent. The difference in participation rates between white and Hispanic employees fell from 23.7 to 13.1 percentage points.
The high level of participation rates under automatic enrollment appears to persist after such policies are adopted, according to two of the studies in our review. Both studies observed participation rates at specific companies for about 3 years after an automatic enrollment policy was adopted. The studies found that for employees hired under automatic enrollment, participation jumped almost immediately and then increased slightly over the 3-year period, but remained relatively stable. The Vanguard study modeled the participation rate over time and found a decline in the participation rate over a 3-year period, with the participation rate falling by about 10 percentage points from its peak of 91 percent but remaining high relative to participation rates for plans without automatic enrollment. Consistent with these results, plan sponsors who had adopted automatic enrollment stated that their experience also indicated that higher participation rates were sustained. One plan sponsor, a large manufacturer, reported that after 2 years of automatic enrollment, 95 percent of employees who had been automatically enrolled had stayed in the plan.

Although automatic enrollment can increase participation and saving rates for most workers, it may have an adverse effect on saving rates and investment choices for other workers, depending on the nature of default contribution rates and default investment funds used, according to some of the studies in our review. Four of the six studies we reviewed found that automatically enrolled participants are likely to accept the plan's default contribution rate. Three of the studies found that some participants would have selected a contribution rate higher than the default rate had they not been subject to automatic enrollment and had they chosen to enroll in the plan voluntarily. Further, two of the three studies found that some participants were likely to accept a default investment fund with relatively low future prospects for return on investment, such as money-market or stable-value funds, compared to the investment fund they would have selected if they had enrolled voluntarily. Thus, these studies concluded that overall savings for these particular participants were lower under automatic enrollment. Further, the Beshears et al. study calculated participation rates for a company that doubled its default contribution rate from 3 percent to 6 percent and found that the participation rates were virtually identical before and after the policy change. In addition, Fidelity Investments reported that employees accept the default contribution rate the majority of the time, regardless of how high it is. Studies have found that default policies have these effects, in part, for the same reasons that automatic enrollment increases participation rates--accepting the defaults is the path of least resistance and requires no action on the part of the worker. In addition, some studies found that some employees see default policies as implicit advice from the plan sponsor, taking them to imply optimal saving rates and investment choices.

Automatic enrollment policies have been increasingly adopted by defined contribution plan sponsors in recent years as a result of several factors. Nonetheless, a number of considerations may limit adoption of automatic enrollment over the long term. Low default contribution rates and an apparent lag in the adoption of automatic escalation policies raise questions about the adequacy of long-term saving rates under automatic enrollment.
Further, the widespread adoption of target-date funds (TDF)--funds that allocate investments among various asset classes and automatically shift to lower-risk, income-producing investments as a "target" retirement date approaches--as default investments for plans with automatic enrollment has raised concerns about investment risk and transparency of investments of TDFs for participants nearing retirement.

Data from two large plan administrators, as well as discussion with retirement plan experts, indicate that plan sponsors' adoption of such policies has grown considerably in recent years. Data from Fidelity Investments show that the percentage of defined contribution plans adopting automatic enrollment grew from about 1 percent in December 2004 to about 16 percent in March 2009. Comparable data from Vanguard show that about 19 percent of plans had adopted automatic enrollment as of December 2008, up from 8 percent in June 2006. Data from one plan administrator show that large plan sponsors--sponsors of plans with $500 million or more in assets--have adopted automatic enrollment policies more often than smaller plans. As figure 3 illustrates, Fidelity Investments data show that about 40 percent of large plans had adopted automatic enrollment by March 2009, while only about 14 percent of small plans had done so. According to Fidelity's data, about 47 percent of all plan participants are included in plans that offer automatic enrollment.

Plan sponsors can choose to apply automatic enrollment "broadly" to all employees, or more narrowly to include only newly hired employees. Data from Fidelity Investments indicate that automatic enrollment policies are typically applied only to new or recently hired employees, but a growing percentage of plan sponsors extended automatic enrollment to existing employees as well. Among plans with automatic enrollment policies, about 58 percent of plans apply such policies to new and recently hired employees only, while about 42 percent apply automatic enrollment to existing eligible employees as well. There is considerable variation in this pattern by plan size--as figure 3 illustrates, the majority of large plans had adopted automatic enrollment only for new or recently hired employees, while nearly half of small plans applied the policy to all eligible employees.

According to plan sponsors, retirement plan experts, and others we contacted, several considerations have been driving the increase in automatic enrollment. These considerations include: (1) plan sponsors' desire to increase participation rates, (2) plan administrators' marketing of automatic enrollment, and (3) aspects of the Pension Protection Act that facilitate and encourage the adoption of automatic enrollment.

Sponsor Desire to Increase Participation Rates: Officials of each of the six plan sponsors we contacted that had adopted automatic enrollment highlighted the desire to better ensure adequate retirement savings for employees. Some plan sponsors noted that automatic enrollment was considered necessary because other methods of increasing plan participation had not been effective. For example, one plan sponsor had sent e-mails and educational materials reminding employees of the plan's availability and provided them with analyses of the matching funds they had lost by not contributing. While these methods raised participation rates from the 50 percent range to the 65 percent range, the company could not further increase participation.
However, after instituting an automatic enrollment policy, the plan's participation rates increased to 93 percent. An official of another plan sponsor noted that an automatic enrollment policy was necessary because the company had a very young workforce, and the company believed that retirement savings was a very low priority and a distant, "abstract" benefit for these workers. Some plan sponsors also noted that adoption of an automatic enrollment policy may be particularly urgent in the case of plans that discontinue other benefits. For example, representatives of two plan sponsors stated that automatic enrollment was adopted at about the time that an existing defined benefit plan was frozen--that is, closed to new entrants. A representative of a large plan consulting firm noted that sponsors may do this with the long term in view; they want their employees to be able to retire at retirement age, partly to ensure that as productivity drops off, these workers do not have a reason to stay on indefinitely. This plan consultant also said that some plans may adopt automatic enrollment to help ensure that the sponsor can pass nondiscrimination testing.

Plan Administrator Marketing: 401(k) plan administrators--firms that provide and manage retirement plans for plan sponsors--have been actively marketing and promoting adoption of automatic enrollment, according to plan administrators and others. One large plan administrator stated that it encourages automatic enrollment by conducting analyses of the effects of automatic enrollment tailored to individual plan sponsor clients. The official stated that when plan sponsors see the potential benefits to employees of automatic enrollment compared to the status quo, about 25 percent of them have adopted automatic enrollment.

The Pension Protection Act: Finally, various aspects of the PPA have facilitated the trend toward automatic enrollment. An official of one of the nation's largest 401(k) plan administrators noted that the criteria and guidelines established in the PPA streamlined and simplified the decision-making process about automatic enrollment and the related plan design features. One consultant noted that the deliberations about the PPA involved considerable industry input and had an "announcement effect" that generated considerable publicity regarding automatic enrollment. Representatives of two plan sponsors said that the PPA safe harbor protection was a consideration in adopting automatic enrollment. One plan had not adopted all of the PPA safe harbor provisions. Instead, it used the safe harbor specifications as a guide in setting features such as matching contributions and the 3 percent default contribution rate. The other plan sponsor adopted all of the PPA safe harbor provisions.

Various factors, including higher costs, management views, concerns about legal and regulatory challenges, and lack of awareness, may delay or prevent the adoption of automatic enrollment policies by some plan sponsors.

Greater Costs: Automatic enrollment implies greater costs for plan sponsors, including higher matching costs and greater fees paid to plan administrators. One plan administrator stated that automatic enrollment could be particularly unattractive to cost-sensitive companies that have narrow profit margins.
This concern was reflected by a plan sponsor who noted that the adoption of automatic enrollment would be difficult for one of the company's subsidiaries, which operated in a very cost-competitive environment and would therefore have difficulty passing on costs to customers. The subsidiary would probably have to absorb the additional costs through reduced profit margins. Plan consultants and plan administrators noted that the costs of automatic enrollment may be a particular concern in the current economic environment. One large plan administrator noted that the current state of the economy will slow down adoption of automatic enrollment, as many companies try to minimize additional costs. While some noted that this should be a transient consideration, one expert we contacted said that the recession would be a memorable event for some plan sponsors, which could have longer-term implications. Certain types of plan sponsors may be especially concerned about the cost impact of automatic enrollment. For example, plan sponsors that employ a low-wage and high-turnover workforce--such as retail establishments and restaurant chains--may be especially reluctant to adopt automatic enrollment because of the additional cost, administrative burden, and prospect of limited benefits for employees. One such plan sponsor we contacted explained that adopting automatic enrollment would result in a five-fold increase in the number of plan participants and associated administrative costs and that the company might lower the employer match to mitigate the associated cost increases. In addition, the plan would have greater administrative burdens related to the need for employee communication and to handle inquiries from the much larger pool of participants. A representative of this plan sponsor also explained that, even with contribution rates of 6 percent, low-wage and short-tenure staff would accumulate very small balances and likely abandon them or take a lump-sum distribution upon separation.

Management Views: Apart from costs, experts noted that some plan sponsors may be reluctant to adopt automatic enrollment due to certain management views or out of concern about employee reaction or welfare. Some experts told us that some managers view automatic enrollment as overly paternalistic or do not wish to reward passivity in employees who do not voluntarily join a plan. For example, a representative of one plan sponsor told us that the company wanted all employees to participate in the plan but wanted active participation and conscientious saving. A representative of a small manufacturing company stated that management believes that the workforce would be highly distressed if the company began summarily taking 401(k) contributions from their pay, even with a well-communicated opt-out feature. The representative further noted that the sponsor believes the recent declines in the equity markets also weigh against adoption of automatic enrollment, in light of the firm's fiduciary responsibility for the plan.

Legal and Regulatory Challenges: Some experts noted that some plan sponsors may be reluctant to adopt automatic enrollment due to legal and regulatory concerns. One expert noted that small plans see the legal and regulatory environment surrounding 401(k)s as complex and may see a switch to automatic enrollment as overly risky. Another expert noted that, unlike larger plan sponsors, small plans without well-staffed legal and compliance departments may be risk-averse and slow to adopt new policies.
Lack of Employer Awareness: Some smaller employers may not be aware that automatic enrollment is a plan feature available to them. Representatives of two small plan sponsors said that they were generally unaware of automatic enrollment as a plan option. For example, one small plan sponsor stated that neither she nor the organization's 401(k) employee advisory committee was familiar with the concept of automatic enrollment. Further, the local service provider that provides the organization with plan guidance and management assistance had not suggested such a policy. Relatedly, representatives of one large plan administrator told us that small plan sponsors generally lag behind large plan sponsors in adopting innovative services, tools, and plan design features, including automatic enrollment.

Available data show that many plans with automatic enrollment have not adopted default automatic escalation policies, which, in combination with low default contribution rates, could result in low saving rates for participants who do not increase contribution rates over time. Also, experts noted that a trend toward TDFs as default investments, while potentially beneficial in important respects, also raises questions about the investment risks and transparency for those close to retirement.

As figure 4 illustrates, data from two large plan administrators indicate that the majority of plans with automatic enrollment have adopted initial default contribution rates of 3 percent. Between 15 and 17 percent of plans have a default contribution rate of less than 3 percent, and between 22 and 25 percent of plans have a default contribution rate of more than 3 percent. Data from Fidelity showed that the average default contribution rate grew modestly, from about 3.0 percent in March 2005 to about 3.2 percent in March 2009. According to a Vanguard official, the average default contribution rate was 3.3 percent at the end of 2008.

Available data are mixed with regard to the extent to which plan sponsors with automatic enrollment have also adopted automatic escalation policies. According to one plan administrator's data, about 45 percent of plans with automatic enrollment had adopted a default automatic escalation feature as of March 2009, up from zero in 2005. Further, this administrator's data show that adoption of default automatic escalation policies is much less prevalent among large plans than small plans--about 24 percent of large plans with automatic enrollment policies have adopted such a policy, while about 51 percent of small plans have done so. Data from another plan administrator show a much greater rate of adoption of default automatic escalation policies--77 percent of all plans with automatic enrollment had adopted default automatic escalation policies in 2008, up from 31 percent in 2005.

Available data indicate that plans with automatic enrollment policies overwhelmingly adopted TDFs as a default investment. TDFs allocate their investments among various asset classes and shift that allocation from equity investments to fixed income and money market investments as a "target" retirement date approaches. Eighty-seven percent of Vanguard Group plans with automatic enrollment had adopted TDFs as a default investment at the end of 2008, compared to 42 percent in 2005. Conversely, the use of balanced funds, money market funds, and stable value funds as default investments has declined significantly.
This trend toward TDFs as a default investment vehicle is corroborated by data from Fidelity Investments, which show that 96 percent of plans with an automatic enrollment policy used TDFs as of March 2009, up from 57 percent at the end of 2005.

Target-date funds offer investors certain advantages generally not offered by other types of investment vehicles. For example, a TDF could be designed for workers expecting to retire many years in the future and would typically have a much greater allocation to equities and a lesser allocation to fixed-income investments. Conversely, a fund designed for workers nearing retirement age would tend to have a greater allocation to fixed-income investments. TDFs thus offer participants a potentially beneficial long-term asset allocation strategy while lowering risks as the participant approaches retirement age (a simple glide path of this kind is sketched below).

This trend is important in part because recent evidence suggests that participants who are automatically enrolled in plans with target-date fund defaults tend to have a high concentration of their savings in these funds. A recent analysis by the Employee Benefit Research Institute found that workers who were considered to be automatically enrolled in a 401(k) plan were more likely than those who voluntarily joined to invest all their assets in a TDF. The study found that, except for participants in plans with more than 10,000 participants, more than 90 percent of those automatically enrolled in TDFs had all of their allocation in such funds.

While TDFs may help ensure that workers have a more age-appropriate mix of investments, some experts have stated that TDFs may pose certain challenges, as recent events in the financial markets have illustrated. As a result of the 2008 stock market decline, some TDFs designed for those expecting to retire in or around 2010 lost 25 percent or more in value. In light of concerns about a number of issues, including how plan sponsors evaluate, monitor, and use TDFs, in 2008 the Advisory Council on Employee Welfare and Pension Benefit Plans recommended that the U.S. Department of Labor provide more specific guidance regarding the complex nature of TDFs, and outline the methodology that plan fiduciaries should follow in selecting and monitoring them. The Advisory Council also stated that additional education materials would help plan participants become aware of the value and risks of TDFs.

In order to promote retirement savings among the 40 percent of the workforce whose employer does not sponsor a plan, members of Congress, in recent years, have introduced bills for federally required "automatic IRAs" that would be implemented nationwide. Automatic IRAs offer the potential benefit of expanding retirement coverage, but some have expressed concerns that automatic IRAs may not result in significant retirement savings, and raised questions about the costs of such a program for employers and the federal government. There have also been proposals for state-supported retirement savings programs over the past several years. These state proposals also have the potential to expand retirement coverage, but on a state-by-state basis. Concerns have been expressed regarding the cost of these state proposals, as well as employer and employee interest in such plans, and potential legal barriers.
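As referenced above, the TDF glide path can be illustrated with a minimal sketch. The linear schedule, the 90 percent and 30 percent equity endpoints, and the 40-year horizon are illustrative assumptions, not features of any actual fund.

    def tdf_equity_share(years_to_target, start=0.90, end=0.30, horizon=40.0):
        """Linear glide path: a high equity allocation far from the target
        retirement date, declining toward a conservative mix at the target."""
        t = max(0.0, min(years_to_target, horizon)) / horizon
        return end + (start - end) * t

    # Under these assumptions, a worker 40 years from retirement would hold
    # 90 percent equities, one 20 years out about 60 percent, and one at the
    # target date 30 percent, with the remainder in fixed-income investments.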
A number of existing proposals have described in general terms the concept of an automatic IRA, and they contain common elements. Under the 2006 and 2007 congressional bills for automatic IRAs, employers that do not sponsor a retirement plan would be required to defer a percentage of an employee's pay to an IRA through payroll deduction, unless the employee opts out. The requirement would apply to all employers that have more than 10 employees and have been in business for at least 2 years, or employers with 100 or more employees regardless of the length of time the employer has been in business. Eligible workers would be those who had worked for an employer for a specified period, were at least 18 years old, and were not eligible to participate in any other qualified retirement plan the employer sponsors. Affected employers could either (1) automatically enroll eligible workers, although employees would be offered the option to affirmatively opt out, or (2) require that employees make an explicit yes or no decision on whether to participate; workers not making such a decision would be automatically enrolled (see fig. 6). Employers would then transfer a portion of employees' pay to either a traditional IRA or a Roth IRA through payroll deduction. Employers could elect to send contributions to IRAs of an employer-designated issuer, unless the employee selects his or her own IRA provider. If neither the employer nor the employee designates a specific IRA provider, the contributions would be deposited into federally designated default accounts.

The automatic IRA is intended to help address the retirement security needs of those not already covered by an employer-sponsored retirement plan. Further, the automatic IRA is designed to extend the benefits of payroll-deduction savings and automatic features of 401(k)s. According to some experts, requiring employers to offer automatic IRAs is necessary for a number of reasons. First, employers have had the option to establish payroll-deduction IRAs for over 10 years, and for a number of reasons, very few employers have done so. As we previously reported, IRA providers have told us some employers may be reluctant to adopt a payroll-deduction IRA because they believe that their publicizing the payroll-deduction IRAs may be construed as an endorsement of the policy, which could potentially violate ERISA. Further, employers may not be aware of how payroll-deduction IRAs work, and some small employers may not be aware that this option exists. The automatic enrollment component is necessary, according to designers of the automatic IRA, because various impediments would prevent many eligible employees from taking advantage of an available payroll-deduction IRA. For example, employees would have to decide whether to participate, select an IRA provider, select investment vehicles, and determine how much to contribute. These officials note that many workers may have difficulty overcoming inertia, and automatic enrollment would help them overcome this difficulty.

Advocates for automatic IRAs and some pension industry experts reported that the automatic IRA could have a positive effect on retirement savings. According to the architects of the approach, automatic IRAs offer a powerful mechanism for accumulating retirement savings through regular payroll deposits that continue automatically.
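The employer and employee criteria in those bills can be summarized in a short sketch. This is a minimal illustration with hypothetical parameter names, drawn from the provisions as described above rather than from the bill text itself.

    def covered_by_auto_ira(employees, years_in_business, age,
                            meets_service_period, eligible_for_other_plan):
        """Sketch of the 2006-2007 bills' criteria: the requirement reaches
        employers with more than 10 employees that have been in business at
        least 2 years, or employers with 100 or more employees regardless of
        business age; eligible workers are at least 18 years old, have met
        the service requirement, and are not eligible for another qualified
        plan the employer sponsors."""
        employer_covered = ((employees > 10 and years_in_business >= 2)
                            or employees >= 100)
        worker_eligible = (age >= 18 and meets_service_period
                           and not eligible_for_other_plan)
        return employer_covered and worker_eligible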
In light of the impact of automatic enrollment policies on participation in 401(k)s, an automatic IRA program that features default automatic enrollment could have a positive impact on participation rates. One study estimated that likely automatic IRA participants include younger, part-time, and lower- or moderate-income workers, as well as workers subject to higher than average job turnover. Advocates stated that automatic IRAs could help these employees overcome inertia since they would no longer need to take the initiative in order to save. Further, two pension industry experts told us that the payroll-deduction nature of automatic IRAs would ensure that employees of affected companies are saving on a regular basis. Some experts agree that the automatic enrollment component of automatic IRAs has the potential to significantly increase the number of workers saving for retirement by including workers that currently do not have access to an employer-sponsored plan. Some caution, however, that the benefits resulting from the automatic IRA could be relatively small. A 2009 preliminary analysis funded by the Department of Labor illustrates potential outcomes of two automatic IRA scenarios using specific behavioral assumptions about participation and contribution rates, among other things. The analysis found that the resulting increase in average retirement benefits at age 70 is small (even when ignoring account fees and offsetting reductions in other savings) and is weighted toward the third of the population with the highest lifetime earnings. If actual participation rates in the automatic IRA are higher than the analysis assumes--and the experience of automatic enrollment in 401(k)s indicates this is a possibility--the resulting increase in average retirement benefits could be higher. The Department of Labor is undertaking additional analysis to illustrate the effects of participation rates similar to those achieved in 401(k) plans with automatic enrollment.

Participants may also not fully benefit from the tax incentives provided by automatic IRAs. According to analysis sponsored by AARP, 50 percent of automatic IRA participants would be lower-income. Therefore, with the exception of Social Security and Medicare taxes, they would pay little, if any, income tax and may not benefit from the tax incentives traditional IRAs offer. Further, while some participants would be eligible for the Saver's Credit, one analysis estimated that the benefit of the credit could be limited. Because the credit is nonrefundable, it can only be used to offset income tax liability. If the participant does not have income tax liability up to the full amount of the credit, the full value of the credit cannot be used. For example, the study concluded that if participants made the maximum permitted contributions to an automatic IRA, then nearly 90 percent would not receive the full credit. In part to improve the tax incentive of automatic IRAs, the Department of the Treasury, in August 2009, proposed making the Saver's Credit fully refundable and depositing it automatically into IRAs.

Finally, proper administration of accounts, including record keeping, will be important for managing and maintaining the retirement savings accumulated through the IRAs, according to some analyses and experts. For example, workers would be responsible for ensuring that they do not exceed applicable limits on annual contributions to an automatic IRA.
An analysis sponsored by AARP and pension industry officials noted that it might be difficult for some workers to keep track of their accounts when they move from job to job or if their employer goes out of business. A 2007 study noted that automatic IRA proposals do not impose record-keeping responsibilities on employers, beyond withholding and transmitting IRA contributions. Because of this, the report noted that other entities must assume these responsibilities, and a companion report recommended a centralized administrator be responsible for record keeping. For example, the first report found that a central administrator could help prevent lost accounts by providing participants with annual reports including their contributions and investment earnings for the year, as well as the total account balance. In addition, the architects of the automatic IRA proposed that the federal government set up a standard default account and contract with private financial institutions for record-keeping services, among other things, to make managing the IRAs easier.

A variety of views exist regarding the cost and administrative burden that an automatic IRA would place on affected small employers. As one analysis of the automatic IRA concept noted, an automatic IRA has the potential to result in some costs and administrative burdens on small employers. The analysis noted that employers would need to provide employees with election forms and process paperwork with respect to employee elections or non-elections, choose an IRA provider or opt for the federally designated default investment, and withhold contributions from employees' pay. Pension industry officials also stated that some small employers do not have payroll deduction systems and send paper or spreadsheet files to their record keepers and brokers. They told us that these employers may choose to invest in new infrastructure in order to remit automatic IRA contributions through payroll deduction.

Recent legislative proposals for an automatic IRA have recognized the potential challenges for small employers and contain provisions to mitigate their impacts. For example, the 2007 bills to authorize automatic IRAs would have exempted employers with fewer than 10 employees as well as those that had been in business for less than 2 years. According to some advocates, the proposals would have therefore avoided placing additional requirements on the smallest companies, which may not have electronic payroll-deduction systems. They would also have relieved employers starting a new business of additional costs and administrative burdens. In addition, the proposals would establish a tax credit in the early years for participating employers with fewer than 100 employees to mitigate some administrative and startup costs.

In light of these automatic IRA features, some experts and a recent analysis have found that additional costs may be small for most small employers. The architects of the automatic IRA concept have stated that because many employers already make deductions for federal income tax and payroll tax withholdings, making IRA payroll deductions would impose little, if any, new administrative costs. Further, they said these employers would not have to bear the costs involved in maintaining a retirement plan, such as matching employee contributions. These views were supported by a 2009 report sponsored by AARP, which found that most small employers would face low costs to implement the automatic IRA.
This study noted, for example, that about 97 percent of employers with 10 or more employees had automated payroll systems, and such employers would face relatively few burdens in implementing the automatic IRA. The report also found that payroll software companies and payroll service providers are likely to incorporate automatic IRA requirements into their services to small employers. For the estimated 3 percent of affected employers that process payroll manually, the automatic IRA would also have to be implemented manually. Past proposals have described a federal role that could mitigate some of the difficulties and risks of implementing an automatic IRA. For example, the proposals would have created a new federal entity that would, among other things, establish low-cost default investments. According to experts we contacted and analyses we reviewed, establishing an automatic IRA policy would require tax incentives to make it more affordable to some employers and federal expenditures to establish and govern the program, among other things. However, analyses sponsored by AARP found that it is not possible to determine the costs to the federal government without a more detailed proposal. Further, these studies reported that the establishment of an automatic IRA would reduce federal tax revenues as a result of the tax credit available to small employers, individual tax benefits from deferred employee income, and greater use of the Saver’s Credit. Two analyses estimated that the revenue losses could amount to somewhere between $2 billion and $19 billion over 10 years. Industry officials and some experts we contacted also noted that establishment of an automatic IRA could affect the market for 401(k) plans. For example, some pension industry experts and representatives of two national organizations representing large plan sponsors noted that if automatic IRAs are made too attractive, they might displace 401(k)s and Savings Incentive Match Plan for Employees of Small Employers (SIMPLE) plans. These officials said that employers might forgo adopting a 401(k) plan if they have already been required to facilitate an automatic IRA, and that this would be unfortunate because an IRA is in many ways an inferior option for workers. The officials told us that a 401(k) plan can offer workers better benefits, such as an employer match and higher contribution limits. In addition, some of the officials reported that the existence of a 401(k) creates a workplace culture that encourages participation in retirement saving. Further, an AARP-sponsored analysis noted that small businesses will weigh the costs of establishing or maintaining a qualified retirement plan against the costs of complying with the automatic IRA requirements. To the extent that the automatic IRA approach offers significantly lower costs—including the relative costs of fiduciary liability—employers may decide against adopting a 401(k) plan or may eliminate an existing one. In light of such concerns, officials of one large financial services firm said that, if implemented, the automatic IRA program should be evaluated to determine whether it led to a shift away from 401(k) plans. Others, however, have stated that the automatic IRA is not likely to erode the popularity of 401(k) plans, and may even promote greater adoption of such retirement vehicles. Some experts noted that the potential for automatic IRAs to supplant 401(k)s can be minimized by careful design of the program.
Perhaps most importantly, they said that the maximum annual contribution for an automatic IRA should be set so that the automatic IRA is less attractive to a small-business employer than a 401(k). Specifically, they noted that it is important that the automatic IRA not permit contributions above the current IRA dollar limits, to avoid competing with qualified plans. These advocates reason that the potential tax advantage of a 401(k) enables a small business owner—who, along with his employees, may choose to participate in the 401(k) or IRA plan his business sponsors—to save a much higher percentage of his income on a tax-deferred basis than an automatic IRA would allow, making the 401(k) plan the more attractive option. Further, some industry experts told us that an automatic IRA would likely be a stepping stone toward adopting a 401(k) for some small employers. For example, an official of one large organization representing pension professionals stated that payroll-deduction automatic IRA arrangements will ultimately encourage more employers to sponsor 401(k) plans and contribute on their employees’ behalf. In recent years, 10 states have considered proposals that would involve state governments in facilitating retirement savings plans for workers whose employer does not sponsor a plan. One study on such state proposals reported that employers would like to provide a retirement savings vehicle for their employees but are inhibited by the cost, complexity, and time that would be required to do so. Under the proposals, state governments could take a number of actions to help address some of these issues and facilitate the use of payroll-deduction or SIMPLE IRAs or the adoption of 401(k) plans. For example, a state could promote private pension coverage and facilitate retirement savings for small-business, moderate-income, and lower-income workers—who are less likely to be covered by an employer-sponsored plan—by acting as an intermediary between employers and financial institutions. In addition, a state could help small employers pool their investments and administrative activities. The types of retirement plans that would be established under the proposals vary. For example, California’s proposal would establish a payroll-deduction, traditional, or SIMPLE IRA, or a combination of these options; Connecticut’s proposal would establish a 401(k) or other type of defined contribution plan; and proposals in Maryland and Washington would establish a defined contribution plan and IRA options. In each case, the state would initiate and oversee the program, while private-sector companies under contract to the state would manage the investment vehicles and day-to-day administration. For example, a Washington study outlined an option under which the state would design the basic features of a 401(k) plan and market the plan to private-sector employers that do not currently offer a plan. The state would then contract with a private-sector plan administrator to provide access to investment funds, direct customer service, and Web-based account access, and to distribute account statements and other communications. Table 2 shows examples of state roles and the intended impacts of the programs under existing proposals. Program Participation: Little is known about the extent to which employers and employees would participate in a state-assisted retirement savings program.
According to representatives of two organizations in favor of state-assisted retirement savings plans, employer and employee interest in such a program could be considerable. Moreover, one of the representatives noted that proposals in Washington and other states have been specifically designed to address small companies’ concerns that the cost of setting up payroll deduction prevents them from adopting a 401(k) plan or a payroll-deduction IRA. However, no known rigorous assessment exists of the extent to which small employers or employees would opt to participate in such a program. We obtained state-sponsored studies from three of the four states we examined, and while none of the studies included an analysis measuring the magnitude of the market demand for such a program, two noted concerns regarding the potential demand for a state program. For example, a study prepared for the Maryland legislature compared a Maryland proposal to a number of other state financial programs, such as a state-sponsored college savings program. In comparing the two, the report noted that the college savings program offers distinctive tax and prepayment guarantee advantages that are not otherwise available in the private market. However, the study noted that this distinction, which helps ensure a market demand for the college savings plans, would not exist for the state-assisted retirement program. The report noted that there would be no additional employee tax benefit for participants in the proposed state-assisted retirement savings program and, for that reason, the program would have to compete on an equal footing with plans in the marketplace. The Maryland analysis concluded that the program might be difficult to establish or market in the absence of a federal requirement that all employers have a retirement savings plan. Representatives of the financial services industry also indicated that state-assisted programs could, to some extent, have a “zero-sum” effect. In March 2008 commentary on the Connecticut proposal, the Small Business Council of America (SBCA) and the American Society of Pension Professionals and Actuaries (ASPPA) stated that low-cost retirement options exist now, and the state-assisted program would result in little difference in cost. The SBCA and ASPPA added that state government should not compete with small private businesses unless there is a clear market failure or some inherent unfairness that disadvantages its citizens. Representatives of the Connecticut Bankers Association were concerned that, rather than expand retirement savings vehicles to new employees, the state program would attract the “already served market” with initial low costs. Similarly, an SBCA representative stated that providing retirement services to small employers is not very profitable for financial services firms, and that if such a proposal resulted in the state obtaining half of the small-business market, for example, it is conceivable that large plan administrators would exit the business, since they profit in this sector only through large volume. Further, a fiscal analysis of a Washington state proposal noted that the program would have no initial plan assets and uncertain levels of participation and, as a result, vendors may have difficulty estimating the total cost of record-keeping services. This, in turn, could affect vendor interest in providing services for the program.
Program Design and Costs to State: Studies from California, Maryland, and Washington about the feasibility of state-assisted retirement savings programs identified important questions about program design and related costs. For example, state governments would need to determine the extent to which administrative and management responsibilities would be borne by the state and how much would be contracted out to financial services providers. States would also need to determine what types of investment funds would be used, including any default funds. Further, analyses from the three states showed they would face initial and ongoing costs. In addition, a fiscal analysis of the Washington proposal identified three major cost categories: program development and administration, communications and marketing, and record keeping. As table 3 illustrates, the Washington analysis estimated a total cost of about $4.4 million to implement and operate the program in the first 2 years. Lack of startup funding may be a significant barrier in some states. While the Maryland and Connecticut proposals allow for state budget appropriations (which may later be recovered through participant fees), the California proposal stated that initial funding could come from a state budget appropriation or a nonprofit or private entity. However, state budget appropriations may be difficult to obtain given these states’ current budget shortfalls. The Washington proposal stated that its program could not be started until the state had obtained federal and/or philanthropic funds sufficient to support the first 3 years of the program. According to an advocate for the Washington proposal, his organization recognized that, due to the current economy, the states are not in a position to self-fund state-assisted retirement savings programs. He reported that his organization has been working to obtain federal funds to cover the startup costs of such programs. Legal and Regulatory Challenges: State-assisted savings programs would also have to comply with both federal and state laws, which, according to the state analyses, could pose additional challenges. One analysis noted that states would need to obtain requisite federal approvals to ensure that the programs adhere to all federal requirements governing the operation of retirement plans. For example, if a program establishes 401(k) accounts, the state would need to submit plan documents to the IRS and the Department of Labor for approval. Further, the Maryland and Washington analyses found that if a state sponsored the establishment of a 401(k) plan, such plans would be subject to ERISA’s fiduciary requirements and would expose either the state or the employer to potential liability in the event that participants suffer financial losses. According to the Maryland analysis, participants might try to hold the state or the employer liable if they incur investment losses after being sold an investment unsuitable for their needs or if they received misleading communications about investments. However, both analyses noted that there are steps the states can take to minimize their liability. For example, according to the Maryland study, the state could limit investment options to reduce the possibility of unsuitable choices or miscommunication. The Washington and Maryland analyses discussed a number of other potential liabilities that states could face.
For example, the Maryland analysis noted that a failure to file required forms and conduct transactions under applicable standards could subject the state or employers to significant penalties imposed by the IRS or the Department of Labor. Officials from Washington noted that a 401(k) option could create a new and complex compliance and monitoring role for the state retirement agency and that it could be administratively difficult for the state to assume this role. They added that the state would face a steep learning curve in addressing ERISA and liability issues, and might have to contract for outside expertise to deal with compliance and oversight issues. The state role envisioned in the proposals may also be precluded by some states’ constitutions. Analyses of proposals in California and Washington specifically cited aspects of the state constitutions that could affect the states’ ability to operate a plan. For example, California’s constitution prohibits the gift or loan of state credit to associations, companies, or corporations and prohibits the state from subscribing to, or being interested in, the stock of any company, association, or corporation. According to the California study, although California’s constitution specifically exempts the retirement board of a public retirement system from this prohibition, it is not clear whether the exemption would extend to a program for private-sector employees. The California study noted that if the program were structured in such a way that the state internally managed funds associated with the program, this could be seen as the state having a financial interest outside the limits of the public employees’ retirement fund. The California study also observed that a constitutional amendment may be needed to address this issue. Washington’s constitution has a similar prohibition. The Washington report noted that the proposed retirement savings program may be permissible because it serves a public purpose by helping individuals save for retirement and reducing the risk that individuals will rely on state assistance in the absence of adequate retirement savings. However, the report also noted that no Washington case has considered proposals similar to those discussed in the report. The report concluded that it was not possible to predict how a court would rule should the program be challenged. In addition, because no other states have enacted such programs, there is no guidance available from other courts. Automatic enrollment of workers in 401(k) plans has proven to be an effective means of increasing plan participation rates. Because such policies are increasingly being adopted by defined contribution plan sponsors in the wake of the Pension Protection Act of 2006, many additional workers who might not otherwise have participated will be brought into plans. Nonetheless, a number of considerations could potentially limit the extent or impact of such policies. First, the benefits of automatic enrollment are inherently limited to workers who have access to an employer-sponsored plan but do not participate. Second, some types of employers, such as those with high-turnover workforces and small employers, may find automatic enrollment too costly or inappropriate for their workforce. Third, initially low default contribution rates and the absence of default automatic escalation policies at some plans may result in inadequate long-term savings for some workers.
Automatic IRAs may hold promise for workers who do not have access to an employer-sponsored plan. The proposal has potential in that it could foster retirement savings among the roughly 40 percent of the workforce whose employers do not sponsor a plan. As such a policy is designed, however, a number of important issues remain to be considered. For example, it is not clear that an automatic IRA would offer low-income workers a significant benefit. Further, in order to ensure that the intended beneficiaries accumulate and retain savings, a central administrator—possibly the federal government—may need to assume significant, long-term responsibility for record keeping and administration. The nature and costs of such a role have not yet been publicly assessed or compared against the potential benefits and limitations of an automatic IRA. In addition, while state-assisted retirement savings plans may also hold some promise for expanding retirement coverage for workers, none of these proposals has been enacted, and they could face significant legal barriers to implementation. Both the growth of automatic enrollment and the introduction of automatic IRA proposals have brought renewed attention to the question of how to extend retirement coverage to the half of the workforce not covered by an employer-sponsored plan. This is an important step forward, as past debate over retirement security has largely focused on increasing retirement savings for those already participating in retirement plans. As plan sponsors and participants gain more experience with automatic enrollment, it will be helpful to learn from these experiences, especially in light of the recent recession. The lessons learned may have important implications for related 401(k) plan features, such as automatic escalation, and for the potential feasibility and usefulness of automatic IRA and state-assisted retirement savings proposals. Further, it would also be helpful to carefully consider the various concerns raised in the automatic IRA debate to increase the likelihood that, if such a proposal becomes law, it is administered in an efficient and effective way. Finally, while state efforts could be helpful in increasing the number of workers saving for retirement, these efforts may not be necessary if automatic IRAs are implemented at the federal level. Further, fiscal difficulties in some states may make such proposals difficult to implement in the near future. We provided a draft of this report to the Department of Labor and the Department of the Treasury for review and comment. The Department of Labor generally agreed with our findings. With regard to the potential impacts of an automatic IRA on retirement benefits, Labor said that the Employee Benefits Security Administration is undertaking additional analysis to illustrate the effect of higher participation rates, similar to those achieved in 401(k) plans. The Department of Labor provided technical comments, which we incorporated as appropriate. The Department of Labor’s formal comments are reproduced in appendix II. The Department of the Treasury also generally agreed with our findings and provided technical comments, which we incorporated as appropriate. The Department of the Treasury’s formal comments are reproduced in appendix III. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter.
At that time, we will send copies of this report to the Secretary of Labor, the Secretary of the Treasury, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine what is known about the effect of automatic enrollment policies among the nation’s defined contribution plans, as well as the extent of and prospects for such policies, we first reviewed reports examining the impact of automatic enrollment, default contribution rates, and default investment funds on participation rates and saving patterns. Table 4 shows the six reports presenting original research that we reviewed. Officials at the Department of Labor, as well as other pension industry experts, verified our selections. The six reports include two conducted by large plan administrators that analyzed the records of their respective defined contribution plan sponsors and participants. The remaining four reports conducted case studies of companies that adopted automatic enrollment and analyzed participation rates, contribution rates, and investment fund allocations before and after the policy was implemented. For each study, we analyzed available evidence on (1) the impact of automatic enrollment on participation rates and the durability of any increases in participation, (2) the characteristics of the workers whose participation rates are affected by automatic enrollment, (3) the impact of automatic enrollment on contribution rates, and (4) the impact of automatic enrollment on selection of investment funds. However, the findings of these case studies may not be generalizable, for three reasons. First, each study examined the experience of only one to three companies. Second, many of the companies in the four case studies seem to have been facing difficulty meeting nondiscrimination testing requirements. Third, some offered matching contributions to their employees, and it is unclear whether the presence of a match affects automatic enrollment participation rates. Therefore, the experiences of these companies may not be representative of all 401(k) sponsors. To determine the extent to which plans had adopted automatic enrollment policies, we obtained data from two large plan administrators—Fidelity Investments and Vanguard. Data from Fidelity represent the 18,100 qualified defined contribution plans Fidelity administers, encompassing about 14 million plan participants and over $600 billion in assets. Data from Vanguard are drawn from Vanguard’s universe of defined contribution plans—more than 2,200 qualified plans that encompass more than 3 million participants and almost $200 billion in assets. We determined that these data accurately reflect the experience of Fidelity and Vanguard but are not necessarily representative of the universe of defined contribution plans. We conducted in-depth interviews with 12 plan sponsors to obtain their perspectives on their experiences with automatic enrollment and related policies, as well as prospects for the policies.
We used Form 5500 data from the Department of Labor to select a sample that included plan sponsors from a variety of industries, including those that may be considered to have low wages and high turnover and vice versa, and small, medium, and large plans, as measured by number of participants. Six of the plan sponsors had adopted automatic enrollment, including one that had significantly narrowed the scope of its automatic enrollment policy; the remaining six plan sponsors had not adopted automatic enrollment. Of the 12 sponsors, two had adopted a 401(k) plan within the past 5 years, and 10 had sponsored a 401(k) plan for more than 5 years. Table 5 shows the industries and plan sponsor sizes for the 12 sponsors that we contacted. In addition, we conducted interviews with officials at the Departments of the Treasury and Labor, as well as experts from the Employee Benefits Research Institute (EBRI), Brookings, The Heritage Foundation, Harvard University, and the New School for Social Research. We also interviewed 401(k) plan administrators, providers, and consultants, including Deloitte, Fidelity Investments, Vanguard, Mercer, Watson Wyatt, T. Rowe Price, ADP, State Street Global Advisors, and Renaissance Institutional Management. Finally, we interviewed industry and research organizations such as the Investment Company Institute (ICI), the Pension Rights Center (PRC), AARP, the Profit Sharing/401(k) Council of America (PSCA), the American Benefits Council (ABC), the American Society of Pension Professionals and Actuaries (ASPPA), the Center for American Progress (CAP), the AFL-CIO, the Small Business Council of America (SBCA), and the Committee on Investment of Employee Benefit Assets (CIEBA). To determine the potential benefits and limitations of automatic IRA proposals and state-assisted retirement savings plan proposals, we analyzed the Automatic IRA Acts of 2006 and 2007, as well as state-assisted retirement savings proposals from four states. The Economic Opportunity Institute identified nine states that have introduced state-assisted retirement savings proposals: California, Connecticut, Maryland, Massachusetts, Michigan, Pennsylvania, Rhode Island, Virginia, and Washington. In addition, the architect of the state-assisted retirement savings concept identified Vermont as having introduced a proposal. We selected four of these states—California, Connecticut, Maryland, and Washington—for an in-depth review because they covered a range of plan types and we were able to obtain feasibility studies or testimony prepared for state legislative hearings on their proposals. We did not conduct an independent legal review of these proposals. We analyzed the work of two researchers from Brookings and The Heritage Foundation who have developed proposals for the automatic IRA and state-assisted retirement savings plans. We then reviewed four studies sponsored by AARP examining the feasibility of automatic IRAs; survey reports by AARP and Prudential on employee and employer attitudes toward automatic IRAs; a microsimulation analysis of the impact of automatic IRAs on workers’ savings accumulations and retirement security; and three feasibility studies on the California, Maryland, and Washington proposals. In addition, we reviewed testimony and written materials from hearings held in Connecticut and Washington to obtain the perspectives of state officials, small business representatives, and pension industry representatives on state-assisted retirement savings proposals.
We also reviewed relevant federal laws and regulations. We interviewed researchers who have focused on the topic, including those from Brookings, The Heritage Foundation, EBRI, the Economic Opportunity Institute, Harvard University, and the New School for Social Research, as well as officials from AARP, PSCA, ASPPA, CIEBA, ABC, ICI, CAP, and PRC. In addition, we interviewed state officials from Washington, Maryland, and California, as well as officials from pension plan administrators and consultants, including Mercer, Watson Wyatt, T. Rowe Price, ADP, State Street Global Advisors, and Renaissance Institutional Management. Finally, we interviewed a 401(k) consultant for small businesses and an official from SBCA to obtain the perspective of representatives of the small business community.

David Lehrer, Assistant Director, and Michael Hartnett, Analyst-in-Charge, managed this review. Jennifer Gregory also led portions of the research and made significant contributions to this report in all aspects of the work. Edward Nannenhorn and Jay Smale provided methodological assistance. Kate van Gelder provided assistance with report preparation. Roger Thomas provided legal assistance. Ashley McCall assisted in identifying relevant literature. Cheron Brooks developed the report’s graphics. Charlene Johnson, Michaela Monaghan, and Bryan Rogowski verified our findings.

Beshears, John, James J. Choi, David Laibson, and Brigitte C. Madrian. “The Importance of Default Options for Retirement Saving Outcomes: Evidence from the USA.” Chapter 3 in Lessons from Pension Reform in the Americas, edited by Stephen J. Kay and Tapen Sinha. New York: Oxford University Press Inc., 2008.
Choi, James J., David Laibson, Brigitte C. Madrian, and Andrew Metrick. “For Better or For Worse: Default Effects and 401(k) Savings Behavior.” Chapter 2 in Perspectives on the Economics of Aging, edited by David A. Wise. Chicago: University of Chicago Press, June 2004.
—. “Saving for Retirement on the Path of Least Resistance.” Chapter 11 in Behavioral Public Finance, edited by Edward J. McCaffery and Joel Slemrod. New York, N.Y.: Russell Sage Foundation, 2006.
Copeland, Craig. Use of Target-Date Funds in 401(k) Plans, 2007. Issue Brief 327. Washington, D.C.: Employee Benefits Research Institute, March 2009.
Fidelity Investments. Building Futures: Auto Solutions Data. Boston, Mass.: September 2008.
—. Building Futures Volume VIII: A Report on Corporate Defined Contribution Plans. Boston, Mass.: 2007.
Madrian, Brigitte C., and Dennis F. Shea. “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior.” The Quarterly Journal of Economics, vol. CXVI, no. 4 (November 2001): 1149-1187.
Mitchell, Olivia S., and Stephen P. Utkus. “Lessons from Behavioral Finance for Retirement Plan Design.” Chapter 1 in Pension Design and Structure: New Lessons from Behavioral Finance, edited by Olivia S. Mitchell and Stephen P. Utkus. New York, N.Y.: Oxford University Press Inc., 2004.
Nessmith, William E., Stephen P. Utkus, and Jean A. Young. Measuring the Effectiveness of Automatic Enrollment. Valley Forge, Pa.: Vanguard Center for Retirement Research, December 2007.
Swanson, Mark D., and D. Bryan Farnen. “Nationwide Savings Plan Automatic Enrollment: Getting Associates PREPared for Retirement.” Benefits Quarterly, vol. 24, no. 3 (Third Quarter 2008): 13-19.
Although employer-sponsored retirement plans can be an important component of income security after retirement, only about half of all workers participate in such plans. To foster greater participation among workers who have access to such plans, Congress included provisions that facilitate plan sponsors' adoption of automatic enrollment policies in the Pension Protection Act of 2006. To foster greater retirement savings among workers who do not have access to an employer-sponsored plan, proposals have been made at the federal level for an "automatic IRA" and at the state level for state-based programs. Because of questions about the extent of retirement savings and prospects for a sound retirement for all Americans, GAO was asked to determine (1) what is known about the effect of automatic enrollment policies among the nation's 401(k) plans, and the extent of and future prospects for such policies; and (2) the potential benefits and limitations of automatic IRA proposals and state-assisted retirement savings proposals. To answer these questions, GAO reviewed available reports and data, and interviewed plan sponsors, industry groups, investment professionals, and relevant federal agencies. Automatic enrollment appears to significantly increase participation in 401(k) plans, according to existing studies, but may not be suitable for all plan sponsors. Some studies found that participation rates can reach as high as 95 percent under automatic enrollment. Available data indicate that the percentage of plans with automatic enrollment policies increased from about 1 percent in 2004 to more than 16 percent in 2009, with higher rates of adoption among larger plan sponsors. In most cases, these plans automatically enroll only new employees, rather than all employees. We also found that automatic enrollment may not be suitable for all plan sponsors, such as those with a high-turnover workforce. Further, some data show that while automatic escalation policies--which automatically increase saving rates over time--are increasingly common, they lag behind adoption of automatic enrollment. In combination with low initial contribution rates, this could depress savings for some workers. Also, the emergence of target-date funds--funds that allocate investments among various asset classes and shift to lower-risk investments as a "target" retirement date approaches--as the typical default investment raises questions in light of the substantial losses such funds experienced in the past year. Other proposals could expand the portion of the workforce saving for retirement, but they could face challenges. Under a federally mandated automatic IRA, certain employers could be required to enroll eligible employees in payroll-deduction IRAs unless the employee specifically opted out. Such a proposal could broaden the population that saves for retirement at minimal cost to employers. However, this proposal faces a number of challenges, including uncertainty about the extent to which it would help low-income workers accumulate significant retirement savings. Proposals for state-assisted retirement savings programs could raise coverage and, ultimately, savings by involving state governments in facilitating retirement savings for workers without access to an employer-sponsored plan. However, such programs face uncertainty about employer and worker participation levels, as well as legal and regulatory issues.
Before I discuss these issues in detail, let me sketch the background of air service to small communities and these programs. Air service to many small communities has declined in recent years, particularly after the September 11, 2001, attacks. As of 2005, scheduled departures at small-, medium-, and large-hub airports had largely returned to 2000 levels. However, departures from nonhub airports continued to decline—falling 17 percent between July 2000 and July 2005. Small-hub airports actually had more scheduled departures in July 2005 than in July 2000, a fact that clearly distinguishes them from nonhub airports. Several factors may help explain why some small communities, especially nonhubs, face relatively limited air service. First, small communities can become cost-cutting targets of air carriers because they are often a carrier’s least profitable operation. Consequently, many network carriers have cut service to small communities, and regional carriers now operate at many small communities where the network carriers have withdrawn. Second, the “Commuter Rule” that FAA enacted in 1995 brought small commuter aircraft under the same safety standards as larger aircraft—a change that made it more difficult to operate smaller aircraft, such as 19-seat turboprops, economically. For example, the Commuter Rule required commuter air carriers that flew aircraft equipped with 10 or more seats to improve ground deicing programs and carry additional passenger safety equipment. Additionally, the 2001 Aviation and Transportation Security Act instituted the same security requirements for screening passengers at smaller airports as it did for larger airports, sometimes making travel from small airports less convenient than it had been. Third, regional carriers have reduced the use of turboprops in favor of regional jets, which has had a negative effect on small communities that have not generated the passenger levels needed to support regional jet service. Finally, many small communities experience passenger “leakage”—that is, passengers choosing to drive longer distances to larger airports instead of using closer small airports. Low-cost carriers have generally avoided flying to small communities but have offered low fares that encourage passengers to drive longer distances to take advantage of them. Mr. Chairman, as you know, Congress established EAS as part of the Airline Deregulation Act of 1978 to help areas that face limited service. The act guaranteed that communities served by air carriers before deregulation would continue to receive a certain level of scheduled air service. In general, the act guaranteed continued service by authorizing DOT to require carriers to continue providing service at these communities. If an air carrier could not continue that service without incurring a loss, DOT could then use EAS funds to award that carrier a subsidy. Under the Airline Deregulation Act, EAS was scheduled to sunset, or end, after 10 years. In 1987, Congress extended the program for another 10 years, and in 1998, it eliminated the sunset provision, thereby permanently authorizing EAS. Funding for EAS comes from a combination of permanent and annual appropriations. The Federal Aviation Reauthorization Act of 1996 (P.L. 104-264) permanently appropriated the first $50 million of such funding—for EAS and safety projects at rural airports—from the collection of overflight fees. Congress can appropriate additional funds from the general fund on an annual basis.
To be eligible for this subsidized service, communities must meet three general requirements. They (1) must have received scheduled commercial passenger service as of October 1978, (2) may be no closer than 70 highway miles to a medium- or large-hub airport, and (3) must require a subsidy of less than $200 per person (unless the community is more than 210 highway miles from the nearest medium- or large-hub airport, in which case no average per-passenger dollar limit applies). Federal law also defines the service that subsidized communities are to receive under EAS. For example, carriers providing EAS flights are required to use aircraft with at least 15 seats unless the community seeks a waiver. In addition, flights are to occur at “reasonable times” and at prices that are “not excessive.” EAS operations to communities in Alaska are subject to different requirements (e.g., carriers may use smaller aircraft). Air carriers apply directly to DOT for EAS subsidies. They set the subsidy application process in motion when they file a 90-day notice of intent to suspend or terminate service. If no air carrier is willing or able to provide replacement air service profitably without a subsidy, DOT solicits proposals from carriers who are willing to provide service with a subsidy. DOT requires that air carriers submit historical and projected financial data, such as projected operating expenses and revenues, sufficient to support a subsidy calculation. DOT then reviews these data in light of the aviation industry’s pricing structure, the size of aircraft required, the amount of service required, and the number of projected passengers who would use this service in the community. Finally, DOT selects a carrier and sets a subsidy amount to cover the difference between the carrier’s projected cost of operation and its expected passenger revenues, while providing the carrier with a profit element equal to 5 percent of total operating expenses, according to statute.
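Stated as a formula, the annual subsidy equals projected operating expenses, plus the statutory 5 percent profit element, minus expected passenger revenue. The following minimal sketch illustrates that arithmetic together with the per-passenger cap described above; the dollar figures are hypothetical, and DOT's actual determinations rest on a detailed review of carrier cost and traffic data.

```python
def eas_subsidy(projected_expenses: float, expected_revenue: float,
                profit_rate: float = 0.05) -> float:
    """Annual EAS subsidy: projected operating expenses plus the statutory
    profit element (5 percent of total operating expenses), less expected
    passenger revenue."""
    return projected_expenses * (1 + profit_rate) - expected_revenue

def within_subsidy_cap(annual_subsidy: float, annual_passengers: int,
                       highway_miles_to_hub: float) -> bool:
    """Statutory cap: the subsidy must stay under $200 per passenger unless
    the community is more than 210 highway miles from the nearest medium-
    or large-hub airport."""
    if highway_miles_to_hub > 210:
        return True
    return annual_subsidy / annual_passengers < 200

# Hypothetical carrier proposal: $1.5 million in projected annual operating
# expenses, $700,000 in expected passenger revenue, 6,000 annual passengers,
# and a community 150 highway miles from the nearest large hub.
subsidy = eas_subsidy(1_500_000, 700_000)
print(f"${subsidy:,.0f}")                       # $875,000
print(within_subsidy_cap(subsidy, 6_000, 150))  # True: about $146 per passenger
```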
Turning now to SCASDP: Congress authorized this program as a pilot in the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21) to help small communities enhance their air service. AIR-21 authorized the program for fiscal years 2002 and 2003, and subsequent legislation reauthorized the program through fiscal year 2008 and eliminated its “pilot” status. The Office of Aviation Analysis in DOT’s Office of the Secretary is responsible for administering the program. The law establishing SCASDP allows DOT considerable flexibility in implementing the program and selecting projects to be funded. The law defines basic eligibility criteria and statutory priority factors, but meeting a given number of priority factors does not automatically mean DOT will select a project. DOT also considers many other relevant factors in making decisions on projects, and the final selection of projects is at the discretion of the Secretary of Transportation. (See app. I for a list of the factors used in DOT selections.) SCASDP grants may be made to single communities or a consortium of communities, although no more than four grants each year may be in the same state. Consortiums are considered one project for the purpose of this program. Inclusion of small hubs for eligibility means that some relatively large airports qualify for this program. For example, Buffalo Niagara International Airport in Buffalo, New York, and Norfolk International Airport in Norfolk, Virginia, are eligible for the program; these airports enplaned over 2.4 million and over 1.9 million passengers in 2005, respectively. In contrast, small nonhub airports, such as those in Moab, Utah (with about 2,600 enplanements), and Owensboro, Kentucky (with about 3,600 enplanements), are also eligible. SCASDP grants are also available in the 50 states, the District of Columbia, Puerto Rico, and U.S. territories and possessions. As shown in appendix II, DOT’s awards have been geographically spread out—covering all states except Delaware, Hawaii, Maryland, New Jersey, and Rhode Island. To date, no communities in Delaware or Rhode Island have applied for a grant. Appendix III includes information on all SCASDP grants awarded as of August 31, 2006. Mr. Chairman, demand for EAS subsidies has been growing over the past 10 years, as has the amount of funds appropriated for the program. As shown in table 1, for fiscal year 2006, EAS is providing subsidies to air carriers to serve 154 communities—an increase of 57 communities over the 1997 low point. The funding for EAS has also grown, from $25.9 million in 1997 to $109.4 million in 2006. This amounts to an average of about $720,000 per EAS community in fiscal year 2006. Appendix II includes a map showing the locations of current EAS communities, and appendix IV lists EAS communities and their current subsidy amounts. In addition, in recent years, the number of communities and states receiving EAS funding has increased. Since 1998, when the $50 million permanent funding level was established, eight additional states have gained EAS communities: Alabama, Georgia, Kentucky, Maryland, Mississippi, Oregon, Tennessee, and Virginia. Excluding Alaska, where different program rules apply, four states have had significant increases in the total number of communities served by EAS compared to 1998. The number of EAS communities in Pennsylvania increased by five, West Virginia and Wyoming each increased by four, and New York increased by three. These states are now among the largest participants in the program in terms of the number of communities served. In 2004, slightly more than 1 million passengers enplaned at airports that received EAS-subsidized service—about 0.15 percent of the more than 706 million passenger enplanements in the United States that year. As of May 1, 2006, 13 regional air carriers served the subsidized communities in the continental United States, and 15 served those in Alaska, Hawaii, and Puerto Rico. The carriers serving the communities in the continental United States typically used turboprop aircraft seating 19 passengers, whereas in Alaska, Hawaii, and Puerto Rico, the most commonly used aircraft seated 4 to 9 passengers. If EAS subsidies were removed, air service might end at many small communities. EAS subsidies have helped communities that were served by air carriers before deregulation continue to receive scheduled air service. Because air carriers must show financial data demonstrating that service cannot be provided profitably in order to support a subsidy, it is likely that commercial air service would also end if the subsidy were no longer available. Furthermore, according to a DOT official, once a community receives subsidized air service, it is rare for an air carrier to offer to provide unsubsidized air service.
Finally, in previous work, we reported that subsidies paid directly to air carriers have not provided an effective transportation solution for passengers in many small communities. Mr. Chairman, our previous work was not able to evaluate the overall effectiveness of SCASDP; however, we found that SCASDP grantees pursued several goals and strategies to improve air service and that the projects have obtained mixed results. In addition, the number of applications for SCASDP has declined each year. As shown in figure 1, in 2002 (the first year SCASDP was funded) DOT received 179 applications for grants; by 2006 the number of applications had declined to 75. DOT officials said that this decline was, in part, a consequence of several factors: (1) many eligible airport communities had received a grant and were still implementing projects at the time; (2) the airport community as a whole was coming to understand the importance DOT places on fulfilling the local contribution commitment in the grant proposal; and (3) legislative changes in 2003 prohibited communities or consortiums from receiving more than one grant for the same project and established the timely use of funds as a priority factor in awarding grants. There have been 182 grant awards made in the 5 years of the program. Of these, 56 grants are now completed—34 from 2002, 15 from 2003, and 7 from 2004. Finally, as of August 31, 2006, DOT had terminated seven grants it initially awarded. Although at the time of our review it was too soon to determine the overall effectiveness of the program, our review of the 23 projects completed by September 30, 2005, found mixed results. The kinds of improvements in service that resulted from the grants included adding an additional air carrier, destination, or flights, or changing the type of aircraft serving the community. In terms of numbers, airport officials reported that 19 of the 23 grants resulted in service or fare improvements during the life of the grant. In addition, during the course of the grant, enplanements rose at 19 of the 23 airports. However, after the 23 SCASDP grants were completed, only 11 grants had resulted in improvements that were self-sustaining. Three additional improvements were still in place, although not self-sustaining; thus, 14 improvements were in place after the grants were completed. (See fig. 2.) Charleston, West Virginia, provides an example of a successful project. With the aid of a SCASDP grant, Charleston was able to add a new carrier and new nonstop service to a major market, Houston. At the time of our review, and after the grant was completed, this service was continuing at the level the grant provided. Finally, for SCASDP grants awarded from 2002 through 2004, we surveyed airport officials to identify the types of project goals they had for their grants. We found that grantees had identified a variety of project goals to improve air service to their community. These goals included adding flights, airlines, and destinations; lowering fares; upgrading the aircraft serving the community; obtaining better data for planning and marketing air service; increasing enplanements; and curbing the loss of passengers to other airports. (See fig. 3 for the number and types of project goals identified by airport directors.)
To achieve these goals, grantees have used many strategies, including subsidies and revenue guarantees to the airlines, marketing, hiring personnel and consultants, and establishing travel banks in which a community guarantees to buy a certain number of tickets. (See fig. 4.) In addition, grantees have subsidized the start-up of an airline, taken over ground station operations for an airline, and subsidized a bus to transport passengers from their airport to a hub airport. Incorporating marketing as part of the project was the most common strategy used by airports. Some airline officials said that marketing efforts are important for the success of the projects. Airline officials also told us that projects that provide direct benefits to an airline, such as revenue guarantees and financial subsidies, have the greatest chance of success. According to these officials, such projects allow the airline to test the real market for air service in a community without enduring the typical financial losses that occur when new air service is introduced. They further noted that, in the current aviation economic environment, carriers cannot afford to sustain losses while they build up passenger demand in a market. The outcomes of the grants may also be affected by broader industry factors that are independent of the grant itself, such as a decision on the part of an airline to reduce the number of flights at a hub. Mr. Chairman, let me now turn to a discussion of options both for the reform of EAS and for the evaluation of SCASDP. I raise these options, in part, because they link to our previous report on the challenges facing the federal government in the 21st century, which notes that the federal government’s long-term fiscal imbalance presents enormous challenges to the nation’s ability to respond to emerging forces reshaping American society, the United States’ place in the world, and the future role of the federal government. In that report, we called for a more fundamental and periodic reexamination of the base of government, ultimately covering discretionary and mandatory programs as well as the revenue side of the budget. In light of these challenges, Congress may wish to weigh options for reforming EAS and obtaining additional information about SCASDP’s effectiveness—information that could be obtained if DOT follows our recommendation to evaluate the program’s effectiveness once more grant projects have been completed. In previous work, we have identified options for enhancing the effectiveness of EAS and controlling cost increases. These options include targeting subsidized service to more remote communities than is currently the case, better matching capacity with community use, consolidating service to multiple communities into regional airports, and changing the form of federal assistance from carrier subsidies to local grants; all of these options would require legislative changes. Several of these options formed the basis for reforms passed as part of Vision-100. For various reasons, the resulting pilot programs have not progressed, so it is premature to assess their impact. Let me now briefly discuss each option, stressing at the outset that each presents potential negative, as well as positive, effects. The positive effects might include lowered federal costs, increased passenger traffic at subsidized communities, and enhanced community choice of transportation options.
Potential negative effects might include increased passenger inconvenience and an adverse effect on local economies that may lose scheduled airline service. The first option would be to target subsidized service to more remote communities. This would mean increasing the highway-distance criterion between EAS-eligible communities and the nearest qualifying airport and expanding the definition of qualifying nearby airports to include small hubs. Currently, to be eligible for EAS-subsidized service, a community must be more than 70 highway miles from the nearest medium- or large-hub airport. We found that, if the distance criterion were increased to 125 highway miles and the qualifying airports were expanded to include small-hub airports with jet service, 55 EAS-subsidized communities would no longer qualify for subsidies—and travelers at those communities would need to drive to the nearby larger airport to access air service. Limiting subsidized service to more remote communities could save federal subsidy dollars. For example, we found that about $24 million annually could be saved if service were terminated at 30 EAS airports that were within 125 miles of medium- or large-hub airports. This estimate assumed that the total subsidies in effect in 2006 at the communities that might lose their eligibility would not be obligated to other communities and that those amounts would not change over time. On the other hand, the passengers who now use subsidized service at such terminated airports would be inconvenienced by the increased driving required to access air service at the nearest hub airport. In addition, implementing this option could negatively affect the economies of the affected communities. For instance, officials from some communities, such as Brookings, South Dakota, told us that they are able to attract and retain local businesses because of several factors relating to the quality of life there—with one important factor being the community’s scheduled air service.
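To illustrate the screening logic behind this option, the sketch below compares the current distance criterion with the tightened criterion discussed above. It is a simplification for illustration only: actual eligibility also turns on a community's service history and the per-passenger subsidy cap, and the two rules measure distance to different sets of qualifying airports.

```python
def eligible_current(miles_to_medium_or_large_hub: float) -> bool:
    # Current criterion: more than 70 highway miles from the nearest
    # medium- or large-hub airport.
    return miles_to_medium_or_large_hub > 70

def eligible_proposed(miles_to_nearest_qualifying_hub: float) -> bool:
    # Option discussed above: raise the threshold to 125 highway miles and
    # count small-hub airports with jet service as qualifying airports.
    return miles_to_nearest_qualifying_hub > 125

# Hypothetical community 95 highway miles from its nearest qualifying hub:
print(eligible_current(95))   # True: eligible under today's rule
print(eligible_proposed(95))  # False: would lose eligibility under the option
```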
Another option is to better match capacity with community use. Our past analysis of passenger enplanement data indicated that relatively few passengers fly in many EAS markets and that, on average, most EAS flights operate with aircraft that are largely empty. To better match capacity with community use, air carriers could reduce unused capacity—either by using smaller aircraft or by reducing the number of flights. Adding capacity has not necessarily attracted passengers: we reported that from 1995 to 2002, total passenger traffic dropped at 9 of 24 EAS communities where carriers added flight frequencies. Better matching capacity with community use could save federal subsidies. For instance, reducing the number of required daily subsidized departures could save federal subsidies by reducing carrier costs in some locations. Federal subsidies could also be lowered at communities where carriers used smaller—and hence less costly—aircraft. On the other hand, there are a number of potential disadvantages. For example, passenger acceptance is uncertain. Representatives from some communities, like Beckley and Bluefield, West Virginia, told us that passengers who are already somewhat reluctant to fly on 19-seat turboprops would be even less willing to fly on smaller aircraft. Such negative passenger reaction may cause more people to drive to larger airports—or simply drive to their destinations. Additionally, the loss of some daily departures at certain communities would likely further inconvenience some passengers. Lastly, reduced capacity may have a negative impact on the economy of the affected community. Another option is to consolidate subsidized service at multiple communities into service at regional airports. As of July 1, 2002, 21 EAS-subsidized communities were located within 70 highway miles of at least one other subsidized community. We reported that if subsidized service to each of these communities were regionalized, 10 regional airports could serve those 21 communities. Regionalizing service to some communities could generate federal savings. However, those savings may be marginal, because the total cost to serve a single regional airport may be only slightly less than the cost to serve two or three neighboring airports. For example, in 2002, DOT provided $1.9 million in annual subsidies to Air Midwest, Inc., to serve Ogdensburg and Massena, New York, with stops at another EAS-subsidized community (Watertown, New York) before arriving at its final destination of Pittsburgh, Pennsylvania. According to an official with Air Midwest, the marginal cost of operating the flight segments to Massena and Ogdensburg is small in relation to the cost of operating the flight from Pittsburgh to Watertown. Another potential positive effect is that passenger levels at the proposed regional airports could grow because the airline(s) would be drawing from a larger geographic area, which could prompt the airline(s) to provide better service (e.g., larger aircraft or more frequent departures). There are also a number of disadvantages to implementing this option. First, local passengers would be inconvenienced, since they would likely have to drive longer distances to obtain local air service. Moreover, the passenger response to regionalizing local air service is unknown. Passengers faced with driving longer distances may decide that driving to an altogether different airport is worthwhile if it offers better service and air fares. Additionally, as with the other options, the potential impact on the economies of the affected communities is unknown. Regionalizing air service has sometimes proven controversial at the local level, in part because it would require some communities to give up their own local service for the hypothetical benefits of a less convenient regional facility. Even in situations where one airport is larger and better equipped than others (e.g., where one airport has longer runways, a superior terminal facility, and better safety equipment on site), it is likely to be difficult for the other communities to accept surrendering their local control and benefits. Another option is to change carrier subsidies into local grants. We have noted that local grants could enable communities to match their transportation needs with individually tailored transportation options to connect them to the national air service system. As we previously discussed, DOT provides grants through SCASDP to help small communities enhance their air service. Our work on SCASDP identified some positive aspects of the program that could be beneficial for EAS communities. First, in order for communities to receive a Small Community grant, they had to develop a proposal directed at improving air service locally. In our discussions with some of these communities, it was noted that this required them to take a closer look at their air service and better understand the market they serve—a benefit that they did not foresee.
In addition, in one case, developing the proposal caused the airport to build a stronger relationship with its community. SCASDP also allows flexibility in the strategy a local community can choose to improve air service, recognizing that local facts and circumstances affect the chance of a successful outcome. In contrast, EAS has one approach—a subsidy to an air carrier. However, there are also differences between the two programs that make the grant approach problematic for some EAS communities and that should be considered. First, because the grants are provided on a one-time basis, their purpose is to create self-sustaining air service improvements. The grant approach is therefore best applicable where a viable air service market can be developed. This could be difficult for EAS communities to achieve because the service they currently receive is not profitable without a subsidy. While some EAS communities might be able to transition to self-sustaining air service through one of the grants, for others this would not be the case. In addition, the grant program normally includes a local cash match, which may be difficult for some EAS communities to provide. This could systematically eliminate the poorest communities, unless other sources of funds—such as state support or local industry support—could be found.

In Vision-100, Congress authorized several programs relevant to small communities. These programs have not progressed, for various reasons. The Alternate Essential Air Service Pilot Program allows the Secretary of Transportation to provide assistance directly to a community, rather than paying compensation to an air carrier. Under the pilot program, communities could provide assistance to air carriers using smaller aircraft, fund on-demand air taxi service, provide transportation services to and from several EAS communities to a single regional airport or other transportation center, and purchase aircraft. Vision-100 also authorized the Community Flexibility Pilot Program, which requires the Secretary of Transportation to establish a program for up to 10 communities that agree to forgo their EAS subsidy for 10 years in exchange for a grant twice the amount of the EAS subsidy. The funds may be used to improve airport facilities, and the grants are not limited to general aviation purposes. DOT has solicited proposals for projects under both of these programs; however, according to a DOT official, no communities have expressed interest in participating. Finally, the EAS Local Participation Program allows the Secretary of Transportation to select no more than 10 designated EAS communities—located within 100 road miles of a small hub and within the contiguous states—to assume 10 percent of their EAS subsidy costs for a 4-year period. However, Congress has prohibited DOT from obligating or expending any funds to implement this program since Vision-100 was enacted.

Turning to SCASDP, we recently recommended that DOT examine the effectiveness of that program when more projects are complete. Such an evaluation would provide DOT and Congress with information about not only whether additional or improved air service was obtained, but also whether it continues after grant support has ended. This may be particularly important since our work on the limited number of completed projects found that only 11 of 23 grantees reported improvements that were self-sustaining after the grant was complete.
In addition, our prior work on air service to small communities found that once financial incentives are removed, additional air service may be difficult to sustain. Since our report, an additional 33 grants have been completed, and DOT’s planned examination of the results from these grants should provide a clearer and more complete picture of the value of this program. Any improved service achieved through the program could then be weighed against the cost of achieving those gains. This information will be important as Congress considers the reauthorization of the program in 2008.

Beyond providing Congress with information upon which to evaluate the merits of SCASDP, the evaluation would likely have additional benefits. In conducting it, DOT could potentially find that certain strategies the communities used were more effective than others. For example, during our work, we found some opposing views on the usefulness of travel banks and some marketing strategies as incentives for attracting improved service. As DOT officials identify strategies that have been effective in starting self-sustaining improvements in air service, they could share this information with other small community airports and perhaps consider such factors in the grant award process. In addition, DOT might find some best practices and could develop lessons learned from which all small community airports could benefit. For example, one airport assumed airline ground operations, such as baggage handling and staffing ticket counters; this approach helped retain one airline’s service and attract additional service from another. Sharing information on approaches that worked (and approaches that did not) may help other small communities improve their air service, perhaps even without federal assistance.

In conclusion, Mr. Chairman, Congress is faced with many difficult choices as it tries to help improve air service to small communities, especially given the fiscal challenges the nation faces. Regarding EAS, I think it is important to recognize that for many of these communities, air service is not—and might never be—commercially viable, and there are limited alternative transportation means for nearby residents to connect to the national air system. In these cases, continued subsidies will be needed to maintain that capability. In other cases, current EAS communities are within reasonable driving distance of alternative airports that can provide that connection to the air system. Congress’s weighing of priorities will ultimately determine whether this service continues or whether other, less costly options are pursued. Looking at SCASDP, I would emphasize that we have seen some instances in which the grant funds provided additional service and others in which they did not. When enough experience has been gained with the program, Congress will be in a position to determine whether the air service gains made are worth the overall cost of the program. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time.

For further information on this testimony, please contact Gerald L. Dillingham at (202) 512-2834 or [email protected].
Individuals making key contributions to this testimony and related work include Robert Ciszewski, Catherine Colwell, Daniel Concepcion, Brandon Haller, Dave Hooper, Stuart Kaufman, Alex Lawrence, Bonnie Pignatiello Leer, Maureen Luna-Long, John Mingus, and Glen Trochelman.

Questions considered about a community’s existing air service and market:
1. How many carriers are serving the community?
2. How many destinations are served?
3. What is the frequency of flights?
4. What size aircraft serve the community?
5. Has the level of service been increasing or decreasing over the past 3 years?
6. [item not recoverable; the question concerned a trend over the past 3 years]
7. Is the Metropolitan Statistical Area population increasing or decreasing?
8. Is the per-capita income increasing or decreasing?
9. Is the number of businesses increasing or decreasing?
10. What is the proximity to larger air service centers?
11. What is the quality of road access to other air service centers?
12. Does the community lack service to its top origin and destination markets?
13. [item not recoverable]
14. If this is an air service project, has the community selected a carrier that is willing and committed to serve?
15. If this is an air service project, does the community have a targeted carrier that would serve?

Criteria considered in evaluating grant proposals:
1. Do demographic indicators and the business environment support the project?
2. Does the community have a demonstrated track record of implementing air service development projects?
3. Does the project address the stated problem?
4. Does the community have a firm plan for promoting the service?
5. Does the community have a definitive plan for monitoring, modifying, and terminating the project, if necessary?
6. Does the community have a plan for continued support of the project if sufficiency or completion is not attained after the grant expires?
7. If it is mainly a marketing project, does the community have a firm implementation plan in place?
8. Is the applicant a participating consortium?
9. Is the project innovative?
10. Does the project have unique geographical traits or other considerations?
11. Is the amount of funding requested reasonable compared with the total amount of funding available?
12. Is the local contribution reasonable compared with the amount requested?
13. Can the project be completed during the funding period requested?
14. Is the applicant a small hub now?
15. Is the applicant a large nonhub now?
16. Is the applicant a small nonhub now?
17. Is the applicant currently subsidized through Essential Air Service?
18. Is the project for marketing only?
19. Is the project a study only?
20. Does the project involve intermodal services?
21. Is the project primarily a carrier incentive?
22. Is the project primarily air fare focused?
23. Does the project involve a low-fare service provider?
24. Does the project shift costs from the local or state level to the federal level?
25. Does the proposal show that proximity to other service would detract from it?
26. Is the applicant geographically close to a past grant recipient?

[Appendix tables listing grantee communities and project status as of August 31, 2006, were largely lost in extraction. Recoverable community names include: Aleutians East Borough, AK; Brainerd/St. Cloud, MN; Casper/Gillette, WY; Lake Havasu City, AZ; Manhattan, KS; Marion, IL; Mason City, IA; Meridian, MS; Moab, UT; Mobile, AL; Clarksburg/Morgantown (reallocation), WV; Hot Springs (reallocation), AR; Rutland (reallocation), VT; Syracuse (reallocation), NY; Visalia (reallocation), CA; Worcester (reallocation), MA; Greenville, NC; Gulfport/Biloxi, MS; Hancock/Houghton, MI; Hibbing, MN; Huntington, WV; Killeen, TX; Knox County, ME; Oregon/Washington Consortium, OR/WA; Rockford, IL; Ruidoso, NM; Somerset, KY; Stewart (Newburgh), NY; Vernal, UT; Williamsport, PA; Wyoming Consortium, WY; Big Sandy Region, KY; Garden City/Dodge City/Liberal, KS; Grand Forks, ND; Harrisburg, PA.]

Related GAO Products

Airline Deregulation: Reregulating the Airline Industry Would Reverse Consumer Benefits and Not Save Airline Pensions. GAO-06-630. Washington, D.C.: June 9, 2006.

Commercial Aviation: Initial Small Community Air Service Development Projects Have Achieved Mixed Results. GAO-06-21. Washington, D.C.: November 30, 2005.

Commercial Aviation: Survey of Small Community Air Service Grantees and Applicants. GAO-06-101SP. Washington, D.C.: November 30, 2005.

Commercial Aviation: Bankruptcy and Pension Problems Are Symptoms of Underlying Structural Issues. GAO-05-945. Washington, D.C.: September 30, 2005.

Commercial Aviation: Legacy Airlines Must Further Reduce Costs to Restore Profitability. GAO-04-836. Washington, D.C.: August 11, 2004.

Federal Aviation Administration: Reauthorization Provides Opportunities to Address Key Agency Challenges. GAO-03-653T. Washington, D.C.: April 10, 2003.

Commercial Aviation: Issues Regarding Federal Assistance for Enhancing Air Service to Small Communities. GAO-03-540T. Washington, D.C.: March 11, 2003.

Commercial Aviation: Factors Affecting Efforts to Improve Air Service at Small Community Airports. GAO-03-330. Washington, D.C.: January 17, 2003.

Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002.

Options to Enhance the Long-term Viability of the Essential Air Service Program. GAO-02-997R. Washington, D.C.: August 30, 2002.

Commercial Aviation: Air Service Trends at Small Communities Since October 2000. GAO-02-432. Washington, D.C.: March 29, 2002.

Proposed Alliance Between American Airlines and British Airways Raises Competition Concerns and Public Interest Issues. GAO-02-293R. Washington, D.C.: December 21, 2001.

“State of the U.S. Commercial Airlines Industry and Possible Issues for Congressional Consideration,” speech by Comptroller General of the United States David Walker. The International Aviation Club of Washington: November 28, 2001.

Financial Management: Assessment of the Airline Industry’s Estimated Losses Arising From the Events of September 11. GAO-02-133R. Washington, D.C.: October 5, 2001.

Commercial Aviation: A Framework for Considering Federal Financial Assistance. GAO-01-1163T. Washington, D.C.: September 20, 2001.

Aviation Competition: Restricting Airline Ticketing Rules Unlikely to Help Consumers. GAO-01-832. Washington, D.C.: July 31, 2001.

Aviation Competition: Challenges in Enhancing Competition in Dominated Markets. GAO-01-518T. Washington, D.C.: March 13, 2001.

Aviation Competition: Regional Jet Service Yet to Reach Many Small Communities. GAO-01-344. Washington, D.C.: February 14, 2001.

Airline Competition: Issues Raised by Consolidation Proposals. GAO-01-402T. Washington, D.C.: February 7, 2001.
Aviation Competition: Issues Related to the Proposed United Airlines-US Airways Merger. GAO-01-212. Washington, D.C.: December 15, 2000.

Essential Air Service: Changes in Subsidy Levels, Air Carrier Costs, and Passenger Traffic. RCED-00-34. Washington, D.C.: April 14, 2000.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the last decade, significant changes have occurred in the airline industry. Network carriers are facing challenging financial conditions, and low-cost carriers are attracting passengers away from some small community airports. These changes, and others, have challenged the ability of small communities to attract adequate commercial air service. In response, Congress has established two key funding programs--the Essential Air Service (EAS) program and the Small Community Air Service Development Program (SCASDP)--to help small communities retain or attract air service. However, the sustainability of such funding could be affected by the federal government's fiscal imbalance. In addition, GAO reports have raised questions about how these programs support commercial air service to small communities. Given this environment, this testimony discusses (1) the development and impact of EAS, (2) the status of SCASDP, and (3) options for reforming EAS and evaluating SCASDP. The testimony is based on previous GAO research and interviews related to these programs, along with program updates. The EAS program guarantees that communities that were served by air carriers before deregulation continue to receive a certain level of scheduled air service, under certain conditions. A growing number of communities are receiving subsidies under this program, and funding for the EAS program has risen more than four-fold over the past 10 years. The federal subsidies have resulted in continued air service to the EAS communities, but if the subsidies were removed, air service might end at many of these communities. SCASDP grantees have used their grants to pursue a variety of goals and a variety of strategies, including marketing and revenue guarantees, to improve air service. The program has had mixed results: 11 of the 23 projects completed as of September 30, 2005, showed self-sustaining improvements to air service, while the remaining 12 grantees either discontinued the improvement or found that it was not self-sustaining. Finally, the number of applications for SCASDP grants has declined--from 179 in 2002 to 75 in 2006. There are options for reforming EAS, such as consolidating service into regional airports, that might make the program more cost-effective but could also reduce service to some communities. In 2003, Congress established several programs as alternatives to EAS, but these programs have not progressed. The Department of Transportation has agreed to evaluate completed SCASDP projects, an effort that will be useful when Congress considers the reauthorization of the program in 2008; this evaluation could also identify "lessons learned" from successful projects.
Long-term care includes many types of services needed when a person has a physical or mental disability. Individuals needing long-term care have varying degrees of difficulty in performing some activities of daily living without assistance, such as bathing, dressing, toileting, eating, and moving from one location to another. They may also have trouble with instrumental activities of daily living, which include such tasks as preparing food, housekeeping, and handling finances. They may have a mental impairment, such as Alzheimer’s disease, that necessitates supervision to avoid harming themselves or others, or assistance with tasks such as taking medications. Although a chronic physical or mental disability may occur at any age, the older an individual becomes, the more likely a disability will develop or worsen. According to the 1999 National Long-Term Care Survey, approximately 7 million elderly had some sort of disability in 1999, including about 1 million needing assistance with at least five activities of daily living. Assistance takes place in many forms and settings, including institutional care in nursing homes or assisted living facilities, home care services, and unpaid care from family members or other informal caregivers. In 1994, approximately 64 percent of all elderly with a disability relied exclusively on unpaid care from family or other informal caregivers; even among elderly with difficulty performing five activities of daily living, about 41 percent relied entirely on unpaid care.

Nationally, spending from all public and private sources for long-term care for all ages totaled about $137 billion in 2000, accounting for nearly 12 percent of all health care expenditures. Over 60 percent of expenditures for long-term care services are paid for by public programs, primarily Medicaid and Medicare. Individuals finance almost one-fourth of these expenditures out-of-pocket and, less often, private insurers pay for long-term care. Moreover, these expenditures do not include the extensive reliance on unpaid long-term care provided by family members and other informal caregivers. Figure 1 shows the major sources financing these expenditures.

Medicaid, the joint federal-state health-financing program for low-income individuals, continues to be the largest funding source for long-term care. Medicaid provides coverage for poor persons and for many individuals who have become nearly impoverished by “spending down” their assets to cover the high costs of their long-term care. For example, many elderly persons become eligible for Medicaid as a result of depleting their assets to pay for nursing home care that Medicare does not cover. In 2000, Medicaid paid 45 percent (about $62 billion) of total long-term care expenditures. States share responsibility with the federal government for Medicaid, paying on average approximately 43 percent of total Medicaid costs. Eligibility for Medicaid-covered long-term care services varies widely among states. Spending also varies across states—for example, in fiscal year 2000, Medicaid per capita long-term care expenditures ranged from $73 per year in Nevada to $680 per year in New York. In recent years, about 53 to 60 percent of Medicaid long-term care spending nationally has gone toward the elderly. In 2000, nursing home expenditures dominated Medicaid long-term care expenditures, accounting for 57 percent of its long-term care spending.
Home care expenditures make up a growing share of Medicaid long-term care spending as many states use the flexibility available within the Medicaid program to provide long-term care services in home- and community-based settings. Expenditures for Medicaid home- and community-based services grew ten-fold from 1990 to 2000—from $1.2 billion to $12.0 billion. Other significant long-term care financing sources include the following: Individuals’ out-of-pocket payments, the second largest payer of long-term care services, accounted for 23 percent (about $31 billion) of total expenditures in 2000; the vast majority (80 percent) of these payments were used for nursing home care. Medicare spending accounted for 14 percent (about $19 billion) of total long-term care expenditures in 2000; while Medicare primarily covers acute care, it also pays for limited stays in post-acute skilled nursing care facilities and home health care. Private insurance, which includes both traditional health insurance and long-term care insurance, accounted for 11 percent (about $15 billion) of long-term care expenditures in 2000. Less than 10 percent of the elderly and an even lower percentage of the near elderly (those aged 55 to 64) have purchased long-term care insurance, although the number of individuals purchasing such insurance increased during the 1990s.

Before focusing on the increased burden that long-term care will place on federal and state budgets, it is important to look at the broader budgetary context. As we look ahead, we face an unprecedented demographic challenge with the aging of the baby boom generation. As the share of the population 65 and over climbs, federal spending on the elderly will absorb a larger and ultimately unsustainable share of the federal budget and economic resources. Federal spending for Medicare, Medicaid, and Social Security is expected to surge—nearly doubling by 2035—as people live longer and spend more time in retirement. In addition, advances in medical technology are likely to keep pushing up the cost of health care. Moreover, the baby boomers will be followed by relatively fewer workers to support them in retirement, leaving a relatively smaller employment base from which to finance these higher costs. Under the 2001 Medicare trustees’ intermediate estimates, Medicare will double as a share of gross domestic product (GDP) between 2000 and 2035 (from 2.2 percent to 5.0 percent) and reach 8.5 percent of GDP in 2075. The federal share of Medicaid as a percent of GDP will grow from today’s 1.3 percent to 3.2 percent in 2035 and reach 6.0 percent in 2075. Under the Social Security trustees’ intermediate estimates, Social Security spending will grow as a share of GDP from 4.2 percent to 6.6 percent between 2000 and 2035, reaching 6.7 percent in 2075. (See fig. 2.) Combined, in 2075 a full one-fifth of GDP will be devoted to federal spending for these three programs alone.
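As a quick arithmetic check, the 2075 shares cited above do sum to roughly one-fifth of GDP. A minimal sketch, using only the percentages in this statement:

```python
# Check that the cited 2075 GDP-share projections sum to about one-fifth of GDP.
shares_2075 = {
    "Medicare": 8.5,              # percent of GDP, 2001 trustees' intermediate estimates
    "Medicaid (federal share)": 6.0,
    "Social Security": 6.7,
}
combined = sum(shares_2075.values())
print(f"Combined 2075 share of GDP: {combined:.1f}%")  # -> 21.2%, about one-fifth
```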
To move into the future with no changes in federal health and retirement programs is to envision a very different role for the federal government. Our long-term budget simulations serve to illustrate the increasing constraints on federal budgetary flexibility that will be driven by entitlement spending growth. Assume, for example, that last year’s tax reductions are made permanent, revenue remains constant thereafter as a share of GDP, and discretionary spending keeps pace with the economy. Under these conditions, spending for net interest, Social Security, Medicare, and Medicaid would consume nearly three-quarters of federal revenue by 2030. This would leave little room for other federal priorities, including defense and education. By 2050, total federal revenue would be insufficient to fund entitlement spending and interest payments. (See fig. 3.)

Beginning about 2010, the share of the population that is age 65 or older will begin to climb, with profound implications for our society, our economy, and the financial condition of these entitlement programs. In particular, both Social Security and the Hospital Insurance portion of Medicare are largely financed as pay-as-you-go systems in which current workers’ payroll taxes pay current retirees’ benefits. Therefore, these programs are directly affected by the relative sizes of the covered worker and beneficiary populations. Historically, this relationship has been favorable. In the near future, however, the overall worker-to-retiree ratio will change in ways that threaten the financial solvency and sustainability of these entitlement programs. In 2000, there were 4.9 working-age persons (18 to 64 years) per elderly person, but by 2030, this ratio is projected to decline to 2.8. This decline will be due both to the surge in retirees brought about by the aging baby boom generation and to falling fertility rates, which translate into relatively fewer workers in the near future.

Social Security’s projected cost increases are due predominantly to the burgeoning retiree population. Even with the increase in the Social Security eligibility age to 67, these entitlement costs are anticipated to increase dramatically in the coming decades as a larger share of the population becomes eligible for Social Security and if, as expected, average longevity increases. As the baby boom generation retires and the Medicare-eligible population swells, the imbalance between outlays and revenues will increase dramatically. Medicare growth rates reflect not only a rapidly increasing beneficiary population, but also the escalation of health care costs at rates well exceeding general inflation. While advances in science and technology have greatly expanded the capabilities of medical science, disproportionate increases in the use of health services have been fueled by the lack of effective means to channel patients into consuming, and providers into offering, only appropriate services. Although Medicare cost growth had slowed in recent years, in fiscal year 2001 Medicare spending grew by 10.3 percent, and it is up 7.8 percent for the first 5 months of fiscal year 2002.

To obtain a more complete picture of the future health care entitlement burden, especially as it relates to long-term care, we must also acknowledge the important role of Medicaid. Approximately 71 percent of all Medicaid dollars are dedicated to services for aged, blind, and disabled individuals, and Medicaid spending is one of the largest components of most states’ budgets. At the February 2002 National Governors Association meeting, governors reported that during a time of fiscal crisis for states, the growth in Medicaid is creating a situation in which states are faced with either making major cuts in programs or raising taxes significantly. Further, in a 2001 survey, 24 states cited increased costs for nursing homes and home- and community-based services as among the top factors in Medicaid cost growth.
Over the longer term, the increase in the number of elderly will add considerably to the strain on federal and state budgets as governments struggle to finance increased Medicaid spending. In addition, the strain on state Medicaid budgets may be exacerbated by fluctuations in the business cycle, such as the recent economic slowdown: state revenues decline during economic downturns, while the needs of the disabled for assistance remain constant.

In coming decades, the sheer number of aging baby boomers will swell the number of elderly with disabilities and the need for services. These overwhelming numbers offset the slight reductions in the prevalence of disability among the elderly reported in recent years. In 2000, individuals aged 65 or older numbered 34.8 million people—12.7 percent of our nation’s total population. By 2020, that percentage will increase by nearly one-third to 16.5 percent—one in six Americans—and will represent nearly 20 million more elderly than there are today. By 2040, the number of elderly aged 85 years and older—the age group most likely to need long-term care services—is projected to more than triple, from about 4 million to about 14 million (see fig. 4). It is difficult to predict precisely the future increase in the number of elderly with disabilities, given the counterbalancing trends of an increase in the total number of elderly and a possible continued decrease in the prevalence of disability. For the past two decades, the number of elderly with disabilities has remained fairly constant while the percentage of those with disabilities has fallen between 1 and 2 percent a year. Possible factors contributing to this decreased prevalence of disability include improved health care, improved socioeconomic status, and better health behaviors. The positive benefits of the decreased prevalence of disability, however, will be overwhelmed by the sheer numbers of aged baby boomers. The total number of disabled elderly is projected to increase to between one-third more than and twice the current level, or as high as 12.1 million by 2040.

The increased number of disabled elderly will exacerbate current problems in the provision and financing of long-term care services. Approximately one in five adults with long-term care needs who live in the community reports an inability to receive needed care, such as assistance in toileting or eating, often with adverse consequences. In addition, disabled elderly may lack family support or the financial means to purchase medical services. Long-term care costs can be financially catastrophic for families. Services such as nursing home care are very expensive; while costs vary widely, a year in a nursing home typically costs $50,000 or more and can be considerably higher in some locations. Because of financial constraints, many elderly rely heavily on unpaid caregivers, usually family members and friends; overall, the majority of care received in the community is unpaid. However, in coming decades, fewer elderly may have the option of unpaid care because a smaller proportion may have a spouse, adult child, or sibling to provide it. By 2020, the number of elderly who will be living alone with no living children or siblings is estimated to reach 1.2 million, almost twice the number without family support in 1990. In addition, geographic dispersion of families may further reduce the number of unpaid caregivers available to elderly baby boomers.
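The percentages above imply total-population figures that help put the elderly counts in context. The following is a minimal back-calculation sketch; the implied totals are inferences from the cited shares, not figures from this statement:

```python
# Back-of-envelope inferences from the population figures cited above.
elderly_2000 = 34.8e6          # persons 65 or older in 2000
share_2000 = 0.127             # 12.7% of total population in 2000
share_2020 = 0.165             # projected 16.5% by 2020
added_elderly_by_2020 = 20e6   # "nearly 20 million more elderly" (approximate)

total_2000 = elderly_2000 / share_2000
print(f"Implied 2000 total population: {total_2000/1e6:.0f} million")  # ~274 million

elderly_2020 = elderly_2000 + added_elderly_by_2020
total_2020 = elderly_2020 / share_2020
print(f"Implied 2020 total population: {total_2020/1e6:.0f} million")  # ~332 million
```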
Currently, public and private spending on long-term care for persons of all ages is about $137 billion; spending for the elderly alone is projected to increase two-and-a-half to four times in the next 40 to 50 years, reaching as much as $379 billion in constant dollars, according to one source. (See fig. 5.) Estimates of future spending are imprecise, however, due to the uncertain effect of several important factors, including how many elderly will need assistance, the types of care they will use, and the availability of public and private sources of payment for care. Absent significant changes in the availability of public and private payment sources, however, future spending is expected to continue to rely heavily on public payers, particularly Medicaid, which estimates indicate pays about 36 to 37 percent of long-term care expenditures for the elderly.

One factor that will affect spending is how many elderly will need assistance. As I have previously discussed, even with continued decreases in the prevalence of disability, aging baby boomers are expected to have a disproportionate effect on the demand for long-term care. Another factor influencing projected long-term care spending is the type of care that the baby boom generation will use. Currently, expenditures for nursing home care greatly exceed those for care provided in other settings. Average expenditures per elderly person in a nursing home can be about four times greater than average expenditures for those receiving paid care at home. The past decade has seen increases in paid home care as well as in assisted living facilities, a newer and still-developing type of housing in which an estimated 400,000 elderly with disabilities resided in 1999. It is unclear what effect continued growth in paid home care, assisted living facilities, or other care alternatives may have on future expenditures. Any increase in the availability of home care may reduce the average cost per disabled person, but the effect could be offset if there is an increase in the use of paid home care by persons currently not receiving these services.

Changes in the availability of public and private sources to pay for care will also affect expenditures. Private long-term care insurance has been viewed as a possible means of reducing catastrophic financial risk for the elderly needing long-term care and of relieving some of the financial burden currently falling on public long-term care programs. Increases in private insurance may lower public expenditures but raise spending overall, because insurance increases individuals’ financial resources when they become disabled and allows the purchase of additional services. The number of policies in force remains relatively small despite improvements in policy offerings and the tax deductibility of premiums. However, as we have previously testified, questions about the affordability of long-term care policies and the value of the coverage relative to the premiums charged have posed barriers to more widespread purchase of these policies. Further, many baby boomers continue to assume they will never need such coverage or mistakenly believe that Medicare or their own private health insurance will provide comprehensive coverage for the services they need.
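Two figures in this statement can be combined as a rough consistency check: if elderly long-term care spending reaches the $379 billion high end and Medicaid continues to pay roughly 36 to 37 percent of it, Medicaid's portion would be on the order of $136 to $140 billion, broadly in line with the Medicaid projection discussed below. A minimal sketch; the assumption that Medicaid's share holds constant is illustrative, not a GAO projection:

```python
# Rough consistency check combining two figures cited in this statement.
# Assumption (illustrative only): Medicaid's share of elderly long-term care
# spending stays at roughly its current 36-37 percent.
high_end_total = 379e9             # high-end projection of elderly LTC spending
medicaid_share = (0.36, 0.37)

low, high = (high_end_total * s for s in medicaid_share)
print(f"Implied Medicaid portion: ${low/1e9:.0f} to ${high/1e9:.0f} billion")
# -> about $136 to $140 billion, in the neighborhood of the $132 billion
#    Medicaid projection for 2050 discussed below
```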
If private long-term care insurance is expected to play a larger role in financing future generations’ long-term care needs, consumers need to be better informed about the costs of long-term care, the likelihood that they may need these services, and the limits of coverage through public programs and private health insurance. With or without increases in the availability of private insurance, Medicaid and Medicare are expected to continue to pay for the majority of long-term care services for the elderly in the future. Without fundamental financing changes, Medicaid can be expected to remain one of the largest funding sources for long-term care services for aging baby boomers, with Medicaid expenditures for long-term care for the elderly reaching as high as $132 billion by 2050. As I noted previously, this increasing burden will strain both federal and state governments.

Given the anticipated increase in demand for long-term care services resulting from the aging of the baby boom generation, the concerns about the availability of services, and the expected further stress on federal and state budgets and individuals’ financial resources, some policymakers and advocates have called for long-term care financing reforms. As further deliberation is given to any such reforms, I would like to close by suggesting several considerations for policymakers to keep in mind.

At the outset, it is important to recognize that long-term care services are not just another set of traditional health care services. Meeting acute and chronic health care needs is an important element of caring for aging and disabled individuals. Long-term care, however, encompasses services related to maintaining quality of life, preserving individual dignity, and satisfying preferences in lifestyle for someone with a disability severe enough to require the assistance of others in everyday activities. Some long-term care services are akin to other health care services, such as personal assistance with activities of daily living or monitoring and supervision to cope with the effects of dementia. Other aspects of long-term care, such as housing, nutrition, and transportation, are services that all of us consume daily but that become an integral part of long-term care for a person with a disability. Disabilities can affect housing, nutritional, and transportation needs. More important, where one wants to live and what activities one wants to pursue also affect how needed services can be provided. Providing personal assistance in a congregate setting such as a nursing home or assisted living facility may satisfy more of an individual’s needs, be more efficient, and involve more direct supervision to ensure better quality than when caregivers travel to individuals’ homes to serve them one on one. Yet those options may conflict with a person’s preference to live at home and maintain autonomy in determining his or her daily activities.

Keeping in mind that policies need to take account of the differences involved in long-term care, let me offer several considerations as you seek to shape effective long-term care financing reforms. These include:

Determining societal responsibilities. A fundamental question is how much the choice of how long-term care needs are met should depend upon an individual’s own resources, and whether society should supplement those resources to broaden the range of choices.
For a person without a disability requiring long-term care, where to live and what activities to pursue are lifestyle choices based on individual preferences and resources. However, for someone with a disability, those lifestyle choices affect the costs of long-term care services. The individual’s own resources—including financial resources and the availability of family or other informal supports—may not be sufficient to preserve some of those choices and also obtain needed long-term care services. Societal responsibilities may include maintaining a safety net to satisfy individual needs for assistance. However, the safety net may not provide a full range of choices in how those needs are met. Persons who require assistance multiple times a day and lack family members to provide some share of this assistance may not be able to have their needs satisfied in their own homes. The costs of meeting such extensive needs may mean that sufficient public support is available only in settings such as assisted living facilities or nursing homes. More extensive public support may be extended, but decisions to do so should carefully consider affordability in the context of competing demands for our nation’s resources.

Considering the potential role of social insurance in financing. Government’s role in many situations has extended beyond providing a safety net. Sometimes this extended role has resulted from efficiencies in having government undertake a function; in other cases it has been a policy choice. Some proposals have recommended either voluntary or mandatory social insurance to provide long-term care assistance to broad groups of beneficiaries. In evaluating such proposals, careful attention needs to be paid to the limits and conditions under which services will be provided. In addition, who will be eligible and how such a program will be financed are critical choices. As in defining a safety net, it is imperative that any option under consideration be thoroughly assessed for its affordability over the longer term.

Encouraging personal preparedness. Becoming disabled is a risk. Not everyone will experience disability during his or her lifetime, and even fewer persons will experience a severe disability requiring extensive assistance. This is the classic situation in which having insurance to provide additional resources to deal with a possible disability may be better than relying on personal saving for an event that may never occur. Insurance allows both persons who eventually will become disabled and those who will not to use more of their economic resources during their lifetimes, rather than having to put those resources aside for the possibility that they may become disabled. The public sector has two important potential roles in encouraging personal preparedness. The first is to adequately educate people about the boundaries between personal and societal responsibilities. Only if the limits of public support are clear will individuals be likely to take steps to prepare for a possible disability. Currently, one of the factors contributing to the lack of preparation for long-term care among the elderly is widespread misunderstanding about what services Medicare will cover. The second public sector role may be to assure the availability of sound private long-term care insurance policies and possibly to create incentives for their purchase.
Progress has been made in improving the value of insurance policies through state insurance regulation and through the strengthened requirements, under the Health Insurance Portability and Accountability Act of 1996, for policies qualifying for favorable tax treatment. However, long-term care insurance is still an evolving product, and given the flux in how long-term care services are delivered, it is important to monitor whether long-term care insurance regulations need adjustments to ensure that consumers receive fair value for their premium dollars.

Recognizing the benefits, burdens, and costs of informal caregiving.
As more and more baby boomers reach retirement age, spending for Medicare, Medicaid, and Social Security is expected to absorb correspondingly larger shares of federal revenue and crowd out other spending. The aging of the baby boomers will also increase the demand for long-term care and contribute to federal and state budget burdens. The number of disabled elderly who cannot perform daily living activities without assistance is expected to double in the future. Long-term care spending from public and private sources--about $137 billion for persons of all ages in 2000--will rise dramatically as the baby boomers age. Without fundamental financing changes, Medicaid--which pays more than one-third of long-term care expenditures for the elderly--can be expected to remain one of the largest funding sources, straining both federal and state governments.